tequa/ammisoft | ammimain/WinPython-64bit-2.7.13.1Zero/python-2.7.13.amd64/Lib/site-packages/numpy/doc/creation.py

"""
==============
Array Creation
==============
Introduction
============
There are 5 general mechanisms for creating arrays:
1) Conversion from other Python structures (e.g., lists, tuples)
2) Intrinsic numpy array creation objects (e.g., arange, ones, zeros,
   etc.)
3) Reading arrays from disk, either from standard or custom formats
4) Creating arrays from raw bytes through the use of strings or buffers
5) Use of special library functions (e.g., random)
This section will not cover means of replicating, joining, or otherwise
expanding or mutating existing arrays. Nor will it cover creating object
arrays or structured arrays. Both of those are covered in their own sections.
Converting Python array_like Objects to NumPy Arrays
====================================================
In general, numerical data arranged in an array-like structure in Python can
be converted to arrays through the use of the array() function. The most
obvious examples are lists and tuples. See the documentation for array() for
details of its use. Some objects may support the array-protocol and allow
conversion to arrays this way. A simple way to find out whether an object can
be converted to a numpy array using array() is to try it interactively and
see if it works! (The Python Way).
Examples: ::

 >>> x = np.array([2, 3, 1, 0])
 >>> x = np.array([[1, 2.0], [0, 0], (1+1j, 3.)])  # note mix of tuple and lists, and of types
 >>> x
 array([[ 1.+0.j,  2.+0.j],
        [ 0.+0.j,  0.+0.j],
        [ 1.+1.j,  3.+0.j]])
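An object can also opt in to conversion by implementing the array protocol
mentioned above. As a minimal sketch (the class name and its `__array__` body
are hypothetical, for illustration only):

```python
import numpy as np

class DiagonalOnes(object):
    """Hypothetical object that supports the array protocol."""
    def __init__(self, n):
        self.n = n

    def __array__(self, dtype=None, copy=None):
        # np.array() calls this hook to obtain an ndarray from the
        # object; dtype/copy are ignored here for brevity
        return np.eye(self.n)

# conversion succeeds because __array__ exists
d = np.array(DiagonalOnes(3))
```
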
Intrinsic NumPy Array Creation
==============================
NumPy has built-in functions for creating arrays from scratch:
zeros(shape) will create an array filled with 0 values with the specified
shape. The default dtype is float64. ::

 >>> np.zeros((2, 3))
 array([[ 0.,  0.,  0.],
        [ 0.,  0.,  0.]])
ones(shape) will create an array filled with 1 values. It is identical to
zeros in all other respects.
arange() will create arrays with regularly incrementing values. Check the
docstring for complete information on the various ways it can be used. A few
examples will be given here: ::
>>> np.arange(10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.arange(2, 10, dtype=np.float)
array([ 2., 3., 4., 5., 6., 7., 8., 9.])
>>> np.arange(2, 3, 0.1)
array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9])
Note that there are some subtleties regarding the last usage that the user
should be aware of that are described in the arange docstring.
linspace() will create arrays with a specified number of elements, and
spaced equally between the specified beginning and end values. For
example: ::
>>> np.linspace(1., 4., 6)
array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ])
The advantage of this creation function is that one can guarantee the
number of elements and the starting and end point, which arange()
generally will not do for arbitrary start, stop, and step values.
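The difference is easy to demonstrate: with a floating-point step, the number
of elements arange() produces depends on rounding, while linspace() pins down
the count and both endpoints up front.

```python
import numpy as np

# arange with a float step: the length depends on how
# (stop - start) / step rounds, so it can surprise you
a = np.arange(0, 1, 0.1)

# linspace: you state the count and both endpoints explicitly
b = np.linspace(0, 1, 11)  # exactly 11 points, 0.0 and 1.0 included
```
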
indices() will create a set of arrays (stacked as a one-higher dimensioned
array), one per dimension with each representing variation in that dimension.
An example illustrates much better than a verbal description: ::
 >>> np.indices((3,3))
 array([[[0, 0, 0],
         [1, 1, 1],
         [2, 2, 2]],

        [[0, 1, 2],
         [0, 1, 2],
         [0, 1, 2]]])
This is particularly useful for evaluating functions of multiple dimensions on
a regular grid.
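For instance, the index arrays can be fed straight into an expression to
evaluate a function at every grid point:

```python
import numpy as np

i, j = np.indices((3, 3))  # row indices and column indices of a 3x3 grid
z = i ** 2 + j ** 2        # evaluate f(i, j) = i^2 + j^2 over the grid
```
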
Reading Arrays From Disk
========================
This is presumably the most common case of large array creation. The details,
of course, depend greatly on the format of data on disk and so this section
can only give general pointers on how to handle various formats.
Standard Binary Formats
-----------------------
Various fields have standard formats for array data. The following lists the
ones with known python libraries to read them and return numpy arrays (there
may be others for which it is possible to read and convert to numpy arrays, so
check the last section as well).
::
HDF5: PyTables
FITS: PyFITS
Examples of formats that cannot be read directly, but for which conversion is
not hard, are those supported by libraries like PIL (which can read and write
many image formats such as jpg, png, etc.).
Common ASCII Formats
--------------------
Comma Separated Value files (CSV) are widely used (and an export and import
option for programs like Excel). There are a number of ways of reading these
files in Python: the standard library provides the csv module, and there are
reader functions in pylab (part of matplotlib).
More generic ASCII files can be read using the io package in scipy.
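As a small sketch, numpy itself can also parse simple delimited text with
genfromtxt(); here it is shown on an in-memory buffer rather than a real file:

```python
import io
import numpy as np

# genfromtxt accepts any file-like object, so a StringIO
# stands in for a CSV file on disk
text = io.StringIO("1.0,2.0,3.0\n4.0,5.0,6.0\n")
arr = np.genfromtxt(text, delimiter=",")
```
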
Custom Binary Formats
---------------------
There are a variety of approaches one can use. If the file has a relatively
simple format then one can write a simple I/O library and use the numpy
fromfile() function and .tofile() method to read and write numpy arrays
directly (mind your byteorder though!). If a good C or C++ library exists that
reads the data, one can wrap that library with a variety of techniques, though
that certainly is much more work and requires significantly more advanced
knowledge to interface with C or C++.
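A minimal round trip with tofile()/fromfile() looks like this; note that the
dtype (and byte order) must be supplied again when reading, because the raw
file stores no metadata. The temporary-file handling is illustrative:

```python
import os
import tempfile
import numpy as np

a = np.arange(6, dtype=np.float64)

# write the raw bytes, then read them back with an explicit dtype
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    a.tofile(path)
    b = np.fromfile(path, dtype=np.float64)  # dtype must match what was written
finally:
    os.remove(path)
```
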
Use of Special Libraries
------------------------
There are libraries that can be used to generate arrays for special purposes,
and it isn't possible to enumerate all of them. The most common use is of the
many array generation functions in random that can generate arrays of random
values, and of some utility functions to generate special matrices (e.g.
diagonal).
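For example, the random module and a matrix helper in one short sketch:

```python
import numpy as np

rng = np.random.RandomState(0)  # seeded for reproducibility
r = rng.uniform(size=(2, 3))    # 2x3 array of random values in [0, 1)

d = np.diag([1, 2, 3])          # 3x3 matrix with the given diagonal
```
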
"""
from __future__ import division, absolute_import, print_function
license: bsd-3-clause

bsipocz/seaborn | seaborn/tests/test_axisgrid.py

import warnings
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
from distutils.version import LooseVersion
import nose.tools as nt
import numpy.testing as npt
from numpy.testing.decorators import skipif
import pandas.util.testing as tm
from . import PlotTestCase
from .. import axisgrid as ag
from .. import rcmod
from ..palettes import color_palette
from ..distributions import kdeplot
from ..categorical import pointplot
from ..linearmodels import pairplot
from ..utils import categorical_order
rs = np.random.RandomState(0)
old_matplotlib = LooseVersion(mpl.__version__) < "1.4"
class TestFacetGrid(PlotTestCase):
df = pd.DataFrame(dict(x=rs.normal(size=60),
y=rs.gamma(4, size=60),
a=np.repeat(list("abc"), 20),
b=np.tile(list("mn"), 30),
c=np.tile(list("tuv"), 20),
d=np.tile(list("abcdefghij"), 6)))
def test_self_data(self):
g = ag.FacetGrid(self.df)
nt.assert_is(g.data, self.df)
def test_self_fig(self):
g = ag.FacetGrid(self.df)
nt.assert_is_instance(g.fig, plt.Figure)
def test_self_axes(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
for ax in g.axes.flat:
nt.assert_is_instance(ax, plt.Axes)
def test_axes_array_size(self):
g1 = ag.FacetGrid(self.df)
nt.assert_equal(g1.axes.shape, (1, 1))
g2 = ag.FacetGrid(self.df, row="a")
nt.assert_equal(g2.axes.shape, (3, 1))
g3 = ag.FacetGrid(self.df, col="b")
nt.assert_equal(g3.axes.shape, (1, 2))
g4 = ag.FacetGrid(self.df, hue="c")
nt.assert_equal(g4.axes.shape, (1, 1))
g5 = ag.FacetGrid(self.df, row="a", col="b", hue="c")
nt.assert_equal(g5.axes.shape, (3, 2))
for ax in g5.axes.flat:
nt.assert_is_instance(ax, plt.Axes)
def test_single_axes(self):
g1 = ag.FacetGrid(self.df)
nt.assert_is_instance(g1.ax, plt.Axes)
g2 = ag.FacetGrid(self.df, row="a")
with nt.assert_raises(AttributeError):
g2.ax
g3 = ag.FacetGrid(self.df, col="a")
with nt.assert_raises(AttributeError):
g3.ax
g4 = ag.FacetGrid(self.df, col="a", row="b")
with nt.assert_raises(AttributeError):
g4.ax
def test_col_wrap(self):
g = ag.FacetGrid(self.df, col="d")
nt.assert_equal(g.axes.shape, (1, 10))
nt.assert_is(g.facet_axis(0, 8), g.axes[0, 8])
g_wrap = ag.FacetGrid(self.df, col="d", col_wrap=4)
nt.assert_equal(g_wrap.axes.shape, (10,))
nt.assert_is(g_wrap.facet_axis(0, 8), g_wrap.axes[8])
nt.assert_equal(g_wrap._ncol, 4)
nt.assert_equal(g_wrap._nrow, 3)
with nt.assert_raises(ValueError):
g = ag.FacetGrid(self.df, row="b", col="d", col_wrap=4)
df = self.df.copy()
df.loc[df.d == "j"] = np.nan
g_missing = ag.FacetGrid(df, col="d")
nt.assert_equal(g_missing.axes.shape, (1, 9))
g_missing_wrap = ag.FacetGrid(df, col="d", col_wrap=4)
nt.assert_equal(g_missing_wrap.axes.shape, (9,))
def test_normal_axes(self):
null = np.empty(0, object).flat
g = ag.FacetGrid(self.df)
npt.assert_array_equal(g._bottom_axes, g.axes.flat)
npt.assert_array_equal(g._not_bottom_axes, null)
npt.assert_array_equal(g._left_axes, g.axes.flat)
npt.assert_array_equal(g._not_left_axes, null)
npt.assert_array_equal(g._inner_axes, null)
g = ag.FacetGrid(self.df, col="c")
npt.assert_array_equal(g._bottom_axes, g.axes.flat)
npt.assert_array_equal(g._not_bottom_axes, null)
npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat)
npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat)
npt.assert_array_equal(g._inner_axes, null)
g = ag.FacetGrid(self.df, row="c")
npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat)
npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat)
npt.assert_array_equal(g._left_axes, g.axes.flat)
npt.assert_array_equal(g._not_left_axes, null)
npt.assert_array_equal(g._inner_axes, null)
g = ag.FacetGrid(self.df, col="a", row="c")
npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat)
npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat)
npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat)
npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat)
npt.assert_array_equal(g._inner_axes, g.axes[:-1, 1:].flat)
def test_wrapped_axes(self):
null = np.empty(0, object).flat
g = ag.FacetGrid(self.df, col="a", col_wrap=2)
npt.assert_array_equal(g._bottom_axes,
g.axes[np.array([1, 2])].flat)
npt.assert_array_equal(g._not_bottom_axes, g.axes[:1].flat)
npt.assert_array_equal(g._left_axes, g.axes[np.array([0, 2])].flat)
npt.assert_array_equal(g._not_left_axes, g.axes[np.array([1])].flat)
npt.assert_array_equal(g._inner_axes, null)
def test_figure_size(self):
g = ag.FacetGrid(self.df, row="a", col="b")
npt.assert_array_equal(g.fig.get_size_inches(), (6, 9))
g = ag.FacetGrid(self.df, row="a", col="b", size=6)
npt.assert_array_equal(g.fig.get_size_inches(), (12, 18))
g = ag.FacetGrid(self.df, col="c", size=4, aspect=.5)
npt.assert_array_equal(g.fig.get_size_inches(), (6, 4))
def test_figure_size_with_legend(self):
g1 = ag.FacetGrid(self.df, col="a", hue="c", size=4, aspect=.5)
npt.assert_array_equal(g1.fig.get_size_inches(), (6, 4))
g1.add_legend()
nt.assert_greater(g1.fig.get_size_inches()[0], 6)
g2 = ag.FacetGrid(self.df, col="a", hue="c", size=4, aspect=.5,
legend_out=False)
npt.assert_array_equal(g2.fig.get_size_inches(), (6, 4))
g2.add_legend()
npt.assert_array_equal(g2.fig.get_size_inches(), (6, 4))
def test_legend_data(self):
g1 = ag.FacetGrid(self.df, hue="a")
g1.map(plt.plot, "x", "y")
g1.add_legend()
palette = color_palette(n_colors=3)
nt.assert_equal(g1._legend.get_title().get_text(), "a")
a_levels = sorted(self.df.a.unique())
lines = g1._legend.get_lines()
nt.assert_equal(len(lines), len(a_levels))
for line, hue in zip(lines, palette):
nt.assert_equal(line.get_color(), hue)
labels = g1._legend.get_texts()
nt.assert_equal(len(labels), len(a_levels))
for label, level in zip(labels, a_levels):
nt.assert_equal(label.get_text(), level)
def test_legend_data_missing_level(self):
g1 = ag.FacetGrid(self.df, hue="a", hue_order=list("azbc"))
g1.map(plt.plot, "x", "y")
g1.add_legend()
b, g, r, p = color_palette(n_colors=4)
palette = [b, r, p]
nt.assert_equal(g1._legend.get_title().get_text(), "a")
a_levels = sorted(self.df.a.unique())
lines = g1._legend.get_lines()
nt.assert_equal(len(lines), len(a_levels))
for line, hue in zip(lines, palette):
nt.assert_equal(line.get_color(), hue)
labels = g1._legend.get_texts()
nt.assert_equal(len(labels), 4)
for label, level in zip(labels, list("azbc")):
nt.assert_equal(label.get_text(), level)
def test_get_boolean_legend_data(self):
self.df["b_bool"] = self.df.b == "m"
g1 = ag.FacetGrid(self.df, hue="b_bool")
g1.map(plt.plot, "x", "y")
g1.add_legend()
palette = color_palette(n_colors=2)
nt.assert_equal(g1._legend.get_title().get_text(), "b_bool")
b_levels = list(map(str, categorical_order(self.df.b_bool)))
lines = g1._legend.get_lines()
nt.assert_equal(len(lines), len(b_levels))
for line, hue in zip(lines, palette):
nt.assert_equal(line.get_color(), hue)
labels = g1._legend.get_texts()
nt.assert_equal(len(labels), len(b_levels))
for label, level in zip(labels, b_levels):
nt.assert_equal(label.get_text(), level)
def test_legend_options(self):
g1 = ag.FacetGrid(self.df, hue="b")
g1.map(plt.plot, "x", "y")
g1.add_legend()
def test_legendout_with_colwrap(self):
g = ag.FacetGrid(self.df, col="d", hue='b',
col_wrap=4, legend_out=False)
g.map(plt.plot, "x", "y", linewidth=3)
g.add_legend()
def test_subplot_kws(self):
g = ag.FacetGrid(self.df, subplot_kws=dict(axisbg="blue"))
for ax in g.axes.flat:
nt.assert_equal(ax.get_axis_bgcolor(), "blue")
@skipif(old_matplotlib)
def test_gridspec_kws(self):
ratios = [3, 1, 2]
sizes = [0.46, 0.15, 0.31]
gskws = dict(width_ratios=ratios, height_ratios=ratios)
g = ag.FacetGrid(self.df, col='c', row='a', gridspec_kws=gskws)
# clear out all ticks
for ax in g.axes.flat:
ax.set_xticks([])
ax.set_yticks([])
g.fig.tight_layout()
widths, heights = np.meshgrid(sizes, sizes)
for n, ax in enumerate(g.axes.flat):
npt.assert_almost_equal(
ax.get_position().width,
widths.flatten()[n],
decimal=2
)
npt.assert_almost_equal(
ax.get_position().height,
heights.flatten()[n],
decimal=2
)
@skipif(old_matplotlib)
def test_gridspec_kws_col_wrap(self):
ratios = [3, 1, 2, 1, 1]
sizes = [0.46, 0.15, 0.31]
gskws = dict(width_ratios=ratios)
with warnings.catch_warnings():
warnings.resetwarnings()
warnings.simplefilter("always")
npt.assert_warns(UserWarning, ag.FacetGrid, self.df, col='d',
col_wrap=5, gridspec_kws=gskws)
@skipif(not old_matplotlib)
def test_gridspec_kws_old_mpl(self):
ratios = [3, 1, 2]
sizes = [0.46, 0.15, 0.31]
gskws = dict(width_ratios=ratios, height_ratios=ratios)
with warnings.catch_warnings():
warnings.resetwarnings()
warnings.simplefilter("always")
npt.assert_warns(UserWarning, ag.FacetGrid, self.df, col='c',
row='a', gridspec_kws=gskws)
def test_data_generator(self):
g = ag.FacetGrid(self.df, row="a")
d = list(g.facet_data())
nt.assert_equal(len(d), 3)
tup, data = d[0]
nt.assert_equal(tup, (0, 0, 0))
nt.assert_true((data["a"] == "a").all())
tup, data = d[1]
nt.assert_equal(tup, (1, 0, 0))
nt.assert_true((data["a"] == "b").all())
g = ag.FacetGrid(self.df, row="a", col="b")
d = list(g.facet_data())
nt.assert_equal(len(d), 6)
tup, data = d[0]
nt.assert_equal(tup, (0, 0, 0))
nt.assert_true((data["a"] == "a").all())
nt.assert_true((data["b"] == "m").all())
tup, data = d[1]
nt.assert_equal(tup, (0, 1, 0))
nt.assert_true((data["a"] == "a").all())
nt.assert_true((data["b"] == "n").all())
tup, data = d[2]
nt.assert_equal(tup, (1, 0, 0))
nt.assert_true((data["a"] == "b").all())
nt.assert_true((data["b"] == "m").all())
g = ag.FacetGrid(self.df, hue="c")
d = list(g.facet_data())
nt.assert_equal(len(d), 3)
tup, data = d[1]
nt.assert_equal(tup, (0, 0, 1))
nt.assert_true((data["c"] == "u").all())
def test_map(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
g.map(plt.plot, "x", "y", linewidth=3)
lines = g.axes[0, 0].lines
nt.assert_equal(len(lines), 3)
line1, _, _ = lines
nt.assert_equal(line1.get_linewidth(), 3)
x, y = line1.get_data()
mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t")
npt.assert_array_equal(x, self.df.x[mask])
npt.assert_array_equal(y, self.df.y[mask])
def test_map_dataframe(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
plot = lambda x, y, data=None, **kws: plt.plot(data[x], data[y], **kws)
g.map_dataframe(plot, "x", "y", linestyle="--")
lines = g.axes[0, 0].lines
nt.assert_equal(len(lines), 3)
line1, _, _ = lines
nt.assert_equal(line1.get_linestyle(), "--")
x, y = line1.get_data()
mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t")
npt.assert_array_equal(x, self.df.x[mask])
npt.assert_array_equal(y, self.df.y[mask])
def test_set(self):
g = ag.FacetGrid(self.df, row="a", col="b")
xlim = (-2, 5)
ylim = (3, 6)
xticks = [-2, 0, 3, 5]
yticks = [3, 4.5, 6]
g.set(xlim=xlim, ylim=ylim, xticks=xticks, yticks=yticks)
for ax in g.axes.flat:
npt.assert_array_equal(ax.get_xlim(), xlim)
npt.assert_array_equal(ax.get_ylim(), ylim)
npt.assert_array_equal(ax.get_xticks(), xticks)
npt.assert_array_equal(ax.get_yticks(), yticks)
def test_set_titles(self):
g = ag.FacetGrid(self.df, row="a", col="b")
g.map(plt.plot, "x", "y")
# Test the default titles
nt.assert_equal(g.axes[0, 0].get_title(), "a = a | b = m")
nt.assert_equal(g.axes[0, 1].get_title(), "a = a | b = n")
nt.assert_equal(g.axes[1, 0].get_title(), "a = b | b = m")
# Test a provided title
g.set_titles("{row_var} == {row_name} \/ {col_var} == {col_name}")
nt.assert_equal(g.axes[0, 0].get_title(), "a == a \/ b == m")
nt.assert_equal(g.axes[0, 1].get_title(), "a == a \/ b == n")
nt.assert_equal(g.axes[1, 0].get_title(), "a == b \/ b == m")
# Test a single row
g = ag.FacetGrid(self.df, col="b")
g.map(plt.plot, "x", "y")
# Test the default titles
nt.assert_equal(g.axes[0, 0].get_title(), "b = m")
nt.assert_equal(g.axes[0, 1].get_title(), "b = n")
# test with dropna=False
g = ag.FacetGrid(self.df, col="b", hue="b", dropna=False)
g.map(plt.plot, 'x', 'y')
def test_set_titles_margin_titles(self):
g = ag.FacetGrid(self.df, row="a", col="b", margin_titles=True)
g.map(plt.plot, "x", "y")
# Test the default titles
nt.assert_equal(g.axes[0, 0].get_title(), "b = m")
nt.assert_equal(g.axes[0, 1].get_title(), "b = n")
nt.assert_equal(g.axes[1, 0].get_title(), "")
# Test the row "titles"
nt.assert_equal(g.axes[0, 1].texts[0].get_text(), "a = a")
nt.assert_equal(g.axes[1, 1].texts[0].get_text(), "a = b")
# Test a provided title
g.set_titles(col_template="{col_var} == {col_name}")
nt.assert_equal(g.axes[0, 0].get_title(), "b == m")
nt.assert_equal(g.axes[0, 1].get_title(), "b == n")
nt.assert_equal(g.axes[1, 0].get_title(), "")
def test_set_ticklabels(self):
g = ag.FacetGrid(self.df, row="a", col="b")
g.map(plt.plot, "x", "y")
xlab = [l.get_text() + "h" for l in g.axes[1, 0].get_xticklabels()]
ylab = [l.get_text() for l in g.axes[1, 0].get_yticklabels()]
g.set_xticklabels(xlab)
g.set_yticklabels(rotation=90)
got_x = [l.get_text() + "h" for l in g.axes[1, 1].get_xticklabels()]
got_y = [l.get_text() for l in g.axes[0, 0].get_yticklabels()]
npt.assert_array_equal(got_x, xlab)
npt.assert_array_equal(got_y, ylab)
x, y = np.arange(10), np.arange(10)
df = pd.DataFrame(np.c_[x, y], columns=["x", "y"])
g = ag.FacetGrid(df).map(pointplot, "x", "y")
g.set_xticklabels(step=2)
got_x = [int(l.get_text()) for l in g.axes[0, 0].get_xticklabels()]
npt.assert_array_equal(x[::2], got_x)
g = ag.FacetGrid(self.df, col="d", col_wrap=5)
g.map(plt.plot, "x", "y")
g.set_xticklabels(rotation=45)
g.set_yticklabels(rotation=75)
for ax in g._bottom_axes:
for l in ax.get_xticklabels():
nt.assert_equal(l.get_rotation(), 45)
for ax in g._left_axes:
for l in ax.get_yticklabels():
nt.assert_equal(l.get_rotation(), 75)
def test_set_axis_labels(self):
g = ag.FacetGrid(self.df, row="a", col="b")
g.map(plt.plot, "x", "y")
xlab = 'xx'
ylab = 'yy'
g.set_axis_labels(xlab, ylab)
got_x = [ax.get_xlabel() for ax in g.axes[-1, :]]
got_y = [ax.get_ylabel() for ax in g.axes[:, 0]]
npt.assert_array_equal(got_x, xlab)
npt.assert_array_equal(got_y, ylab)
def test_axis_lims(self):
g = ag.FacetGrid(self.df, row="a", col="b", xlim=(0, 4), ylim=(-2, 3))
nt.assert_equal(g.axes[0, 0].get_xlim(), (0, 4))
nt.assert_equal(g.axes[0, 0].get_ylim(), (-2, 3))
def test_data_orders(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
nt.assert_equal(g.row_names, list("abc"))
nt.assert_equal(g.col_names, list("mn"))
nt.assert_equal(g.hue_names, list("tuv"))
nt.assert_equal(g.axes.shape, (3, 2))
g = ag.FacetGrid(self.df, row="a", col="b", hue="c",
row_order=list("bca"),
col_order=list("nm"),
hue_order=list("vtu"))
nt.assert_equal(g.row_names, list("bca"))
nt.assert_equal(g.col_names, list("nm"))
nt.assert_equal(g.hue_names, list("vtu"))
nt.assert_equal(g.axes.shape, (3, 2))
g = ag.FacetGrid(self.df, row="a", col="b", hue="c",
row_order=list("bcda"),
col_order=list("nom"),
hue_order=list("qvtu"))
nt.assert_equal(g.row_names, list("bcda"))
nt.assert_equal(g.col_names, list("nom"))
nt.assert_equal(g.hue_names, list("qvtu"))
nt.assert_equal(g.axes.shape, (4, 3))
def test_palette(self):
rcmod.set()
g = ag.FacetGrid(self.df, hue="c")
nt.assert_equal(g._colors, color_palette(n_colors=3))
g = ag.FacetGrid(self.df, hue="d")
nt.assert_equal(g._colors, color_palette("husl", 10))
g = ag.FacetGrid(self.df, hue="c", palette="Set2")
nt.assert_equal(g._colors, color_palette("Set2", 3))
dict_pal = dict(t="red", u="green", v="blue")
list_pal = color_palette(["red", "green", "blue"], 3)
g = ag.FacetGrid(self.df, hue="c", palette=dict_pal)
nt.assert_equal(g._colors, list_pal)
list_pal = color_palette(["green", "blue", "red"], 3)
g = ag.FacetGrid(self.df, hue="c", hue_order=list("uvt"),
palette=dict_pal)
nt.assert_equal(g._colors, list_pal)
def test_hue_kws(self):
kws = dict(marker=["o", "s", "D"])
g = ag.FacetGrid(self.df, hue="c", hue_kws=kws)
g.map(plt.plot, "x", "y")
for line, marker in zip(g.axes[0, 0].lines, kws["marker"]):
nt.assert_equal(line.get_marker(), marker)
def test_dropna(self):
df = self.df.copy()
hasna = pd.Series(np.tile(np.arange(6), 10), dtype=np.float)
hasna[hasna == 5] = np.nan
df["hasna"] = hasna
g = ag.FacetGrid(df, dropna=False, row="hasna")
nt.assert_equal(g._not_na.sum(), 60)
g = ag.FacetGrid(df, dropna=True, row="hasna")
nt.assert_equal(g._not_na.sum(), 50)
class TestPairGrid(PlotTestCase):
rs = np.random.RandomState(sum(map(ord, "PairGrid")))
df = pd.DataFrame(dict(x=rs.normal(size=80),
y=rs.randint(0, 4, size=(80)),
z=rs.gamma(3, size=80),
a=np.repeat(list("abcd"), 20),
b=np.repeat(list("abcdefgh"), 10)))
def test_self_data(self):
g = ag.PairGrid(self.df)
nt.assert_is(g.data, self.df)
def test_ignore_datelike_data(self):
df = self.df.copy()
df['date'] = pd.date_range('2010-01-01', periods=len(df), freq='d')
result = ag.PairGrid(df).data
expected = df.drop('date', axis=1)
tm.assert_frame_equal(result, expected)
def test_self_fig(self):
g = ag.PairGrid(self.df)
nt.assert_is_instance(g.fig, plt.Figure)
def test_self_axes(self):
g = ag.PairGrid(self.df)
for ax in g.axes.flat:
nt.assert_is_instance(ax, plt.Axes)
def test_default_axes(self):
g = ag.PairGrid(self.df)
nt.assert_equal(g.axes.shape, (3, 3))
nt.assert_equal(g.x_vars, ["x", "y", "z"])
nt.assert_equal(g.y_vars, ["x", "y", "z"])
nt.assert_true(g.square_grid)
def test_specific_square_axes(self):
vars = ["z", "x"]
g = ag.PairGrid(self.df, vars=vars)
nt.assert_equal(g.axes.shape, (len(vars), len(vars)))
nt.assert_equal(g.x_vars, vars)
nt.assert_equal(g.y_vars, vars)
nt.assert_true(g.square_grid)
def test_specific_nonsquare_axes(self):
x_vars = ["x", "y"]
y_vars = ["z", "y", "x"]
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars)))
nt.assert_equal(g.x_vars, x_vars)
nt.assert_equal(g.y_vars, y_vars)
nt.assert_true(not g.square_grid)
x_vars = ["x", "y"]
y_vars = "z"
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars)))
nt.assert_equal(g.x_vars, list(x_vars))
nt.assert_equal(g.y_vars, list(y_vars))
nt.assert_true(not g.square_grid)
def test_specific_square_axes_with_array(self):
vars = np.array(["z", "x"])
g = ag.PairGrid(self.df, vars=vars)
nt.assert_equal(g.axes.shape, (len(vars), len(vars)))
nt.assert_equal(g.x_vars, list(vars))
nt.assert_equal(g.y_vars, list(vars))
nt.assert_true(g.square_grid)
def test_specific_nonsquare_axes_with_array(self):
x_vars = np.array(["x", "y"])
y_vars = np.array(["z", "y", "x"])
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars)))
nt.assert_equal(g.x_vars, list(x_vars))
nt.assert_equal(g.y_vars, list(y_vars))
nt.assert_true(not g.square_grid)
def test_size(self):
g1 = ag.PairGrid(self.df, size=3)
npt.assert_array_equal(g1.fig.get_size_inches(), (9, 9))
g2 = ag.PairGrid(self.df, size=4, aspect=.5)
npt.assert_array_equal(g2.fig.get_size_inches(), (6, 12))
g3 = ag.PairGrid(self.df, y_vars=["z"], x_vars=["x", "y"],
size=2, aspect=2)
npt.assert_array_equal(g3.fig.get_size_inches(), (8, 2))
def test_map(self):
vars = ["x", "y", "z"]
g1 = ag.PairGrid(self.df)
g1.map(plt.scatter)
for i, axes_i in enumerate(g1.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
g2 = ag.PairGrid(self.df, "a")
g2.map(plt.scatter)
for i, axes_i in enumerate(g2.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
for k, k_level in enumerate("abcd"):
x_in_k = x_in[self.df.a == k_level]
y_in_k = y_in[self.df.a == k_level]
x_out, y_out = ax.collections[k].get_offsets().T
npt.assert_array_equal(x_in_k, x_out)
npt.assert_array_equal(y_in_k, y_out)
def test_map_nonsquare(self):
x_vars = ["x"]
y_vars = ["y", "z"]
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
g.map(plt.scatter)
x_in = self.df.x
for i, i_var in enumerate(y_vars):
ax = g.axes[i, 0]
y_in = self.df[i_var]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
def test_map_lower(self):
vars = ["x", "y", "z"]
g = ag.PairGrid(self.df)
g.map_lower(plt.scatter)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.triu_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
def test_map_upper(self):
vars = ["x", "y", "z"]
g = ag.PairGrid(self.df)
g.map_upper(plt.scatter)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
@skipif(old_matplotlib)
def test_map_diag(self):
g1 = ag.PairGrid(self.df)
g1.map_diag(plt.hist)
for ax in g1.diag_axes:
nt.assert_equal(len(ax.patches), 10)
g2 = ag.PairGrid(self.df)
g2.map_diag(plt.hist, bins=15)
for ax in g2.diag_axes:
nt.assert_equal(len(ax.patches), 15)
g3 = ag.PairGrid(self.df, hue="a")
g3.map_diag(plt.hist)
for ax in g3.diag_axes:
nt.assert_equal(len(ax.patches), 40)
@skipif(old_matplotlib)
def test_map_diag_and_offdiag(self):
vars = ["x", "y", "z"]
g = ag.PairGrid(self.df)
g.map_offdiag(plt.scatter)
g.map_diag(plt.hist)
for ax in g.diag_axes:
nt.assert_equal(len(ax.patches), 10)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
def test_palette(self):
rcmod.set()
g = ag.PairGrid(self.df, hue="a")
nt.assert_equal(g.palette, color_palette(n_colors=4))
g = ag.PairGrid(self.df, hue="b")
nt.assert_equal(g.palette, color_palette("husl", 8))
g = ag.PairGrid(self.df, hue="a", palette="Set2")
nt.assert_equal(g.palette, color_palette("Set2", 4))
dict_pal = dict(a="red", b="green", c="blue", d="purple")
list_pal = color_palette(["red", "green", "blue", "purple"], 4)
g = ag.PairGrid(self.df, hue="a", palette=dict_pal)
nt.assert_equal(g.palette, list_pal)
list_pal = color_palette(["purple", "blue", "red", "green"], 4)
g = ag.PairGrid(self.df, hue="a", hue_order=list("dcab"),
palette=dict_pal)
nt.assert_equal(g.palette, list_pal)
def test_hue_kws(self):
kws = dict(marker=["o", "s", "d", "+"])
g = ag.PairGrid(self.df, hue="a", hue_kws=kws)
g.map(plt.plot)
for line, marker in zip(g.axes[0, 0].lines, kws["marker"]):
nt.assert_equal(line.get_marker(), marker)
g = ag.PairGrid(self.df, hue="a", hue_kws=kws,
hue_order=list("dcab"))
g.map(plt.plot)
for line, marker in zip(g.axes[0, 0].lines, kws["marker"]):
nt.assert_equal(line.get_marker(), marker)
@skipif(old_matplotlib)
def test_hue_order(self):
order = list("dcab")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_diag(plt.plot)
for line, level in zip(g.axes[0, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_lower(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_upper(plt.plot)
for line, level in zip(g.axes[0, 1].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
@skipif(old_matplotlib)
def test_hue_order_missing_level(self):
order = list("dcaeb")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_diag(plt.plot)
for line, level in zip(g.axes[0, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_lower(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_upper(plt.plot)
for line, level in zip(g.axes[0, 1].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
def test_nondefault_index(self):
df = self.df.copy().set_index("b")
vars = ["x", "y", "z"]
g1 = ag.PairGrid(df)
g1.map(plt.scatter)
for i, axes_i in enumerate(g1.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
g2 = ag.PairGrid(df, "a")
g2.map(plt.scatter)
for i, axes_i in enumerate(g2.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
for k, k_level in enumerate("abcd"):
x_in_k = x_in[self.df.a == k_level]
y_in_k = y_in[self.df.a == k_level]
x_out, y_out = ax.collections[k].get_offsets().T
npt.assert_array_equal(x_in_k, x_out)
npt.assert_array_equal(y_in_k, y_out)
@skipif(old_matplotlib)
def test_pairplot(self):
vars = ["x", "y", "z"]
g = pairplot(self.df)
for ax in g.diag_axes:
nt.assert_equal(len(ax.patches), 10)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
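The three loops above walk the pairplot's axes grid with NumPy's triangle and diagonal index helpers. As a standalone illustration (a sketch assuming only NumPy, not part of the test suite), this is how those helpers partition a 3x3 axes array into the panels that `map_upper`, `map_lower`, and `map_diag` target:

```python
import numpy as np

axes = np.empty((3, 3), dtype=object)  # stand-in for a PairGrid's axes array

# k=1 / k=-1 exclude the diagonal, matching the off-diagonal panels.
upper = list(zip(*np.triu_indices_from(axes, 1)))
lower = list(zip(*np.tril_indices_from(axes, -1)))
diag = list(zip(*np.diag_indices_from(axes)))

print(upper)  # [(0, 1), (0, 2), (1, 2)]
print(lower)  # [(1, 0), (2, 0), (2, 1)]
print(diag)   # [(0, 0), (1, 1), (2, 2)]
```

The diagonal positions are exactly the axes the test expects to hold histogram patches rather than scatter collections.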
@skipif(old_matplotlib)
def test_pairplot_reg(self):
vars = ["x", "y", "z"]
g = pairplot(self.df, kind="reg")
for ax in g.diag_axes:
nt.assert_equal(len(ax.patches), 10)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
nt.assert_equal(len(ax.lines), 1)
nt.assert_equal(len(ax.collections), 2)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
nt.assert_equal(len(ax.lines), 1)
nt.assert_equal(len(ax.collections), 2)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
@skipif(old_matplotlib)
def test_pairplot_kde(self):
vars = ["x", "y", "z"]
g = pairplot(self.df, diag_kind="kde")
for ax in g.diag_axes:
nt.assert_equal(len(ax.lines), 1)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
@skipif(old_matplotlib)
def test_pairplot_markers(self):
vars = ["x", "y", "z"]
markers = ["o", "x", "s", "d"]
g = pairplot(self.df, hue="a", vars=vars, markers=markers)
nt.assert_equal(g.hue_kws["marker"], markers)
plt.close("all")
with nt.assert_raises(ValueError):
g = pairplot(self.df, hue="a", vars=vars, markers=markers[:-2])
class TestJointGrid(PlotTestCase):
rs = np.random.RandomState(sum(map(ord, "JointGrid")))
x = rs.randn(100)
y = rs.randn(100)
x_na = x.copy()
x_na[10] = np.nan
x_na[20] = np.nan
data = pd.DataFrame(dict(x=x, y=y, x_na=x_na))
def test_margin_grid_from_arrays(self):
g = ag.JointGrid(self.x, self.y)
npt.assert_array_equal(g.x, self.x)
npt.assert_array_equal(g.y, self.y)
def test_margin_grid_from_series(self):
g = ag.JointGrid(self.data.x, self.data.y)
npt.assert_array_equal(g.x, self.x)
npt.assert_array_equal(g.y, self.y)
def test_margin_grid_from_dataframe(self):
g = ag.JointGrid("x", "y", self.data)
npt.assert_array_equal(g.x, self.x)
npt.assert_array_equal(g.y, self.y)
def test_margin_grid_axis_labels(self):
g = ag.JointGrid("x", "y", self.data)
xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel()
nt.assert_equal(xlabel, "x")
nt.assert_equal(ylabel, "y")
g.set_axis_labels("x variable", "y variable")
xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel()
nt.assert_equal(xlabel, "x variable")
nt.assert_equal(ylabel, "y variable")
def test_dropna(self):
g = ag.JointGrid("x_na", "y", self.data, dropna=False)
nt.assert_equal(len(g.x), len(self.x_na))
g = ag.JointGrid("x_na", "y", self.data, dropna=True)
nt.assert_equal(len(g.x), pd.notnull(self.x_na).sum())
def test_axlims(self):
lim = (-3, 3)
g = ag.JointGrid("x", "y", self.data, xlim=lim, ylim=lim)
nt.assert_equal(g.ax_joint.get_xlim(), lim)
nt.assert_equal(g.ax_joint.get_ylim(), lim)
nt.assert_equal(g.ax_marg_x.get_xlim(), lim)
nt.assert_equal(g.ax_marg_y.get_ylim(), lim)
def test_marginal_ticks(self):
g = ag.JointGrid("x", "y", self.data)
# `~len(...)` is truthy for every integer length, so these two
# assertions can never fail; they only exercise the tick accessors.
nt.assert_true(~len(g.ax_marg_x.get_xticks()))
nt.assert_true(~len(g.ax_marg_y.get_yticks()))
def test_bivariate_plot(self):
g = ag.JointGrid("x", "y", self.data)
g.plot_joint(plt.plot)
x, y = g.ax_joint.lines[0].get_xydata().T
npt.assert_array_equal(x, self.x)
npt.assert_array_equal(y, self.y)
def test_univariate_plot(self):
g = ag.JointGrid("x", "x", self.data)
g.plot_marginals(kdeplot)
_, y1 = g.ax_marg_x.lines[0].get_xydata().T
y2, _ = g.ax_marg_y.lines[0].get_xydata().T
npt.assert_array_equal(y1, y2)
def test_plot(self):
g = ag.JointGrid("x", "x", self.data)
g.plot(plt.plot, kdeplot)
x, y = g.ax_joint.lines[0].get_xydata().T
npt.assert_array_equal(x, self.x)
npt.assert_array_equal(y, self.x)
_, y1 = g.ax_marg_x.lines[0].get_xydata().T
y2, _ = g.ax_marg_y.lines[0].get_xydata().T
npt.assert_array_equal(y1, y2)
def test_annotate(self):
g = ag.JointGrid("x", "y", self.data)
rp = stats.pearsonr(self.x, self.y)
g.annotate(stats.pearsonr)
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, "pearsonr = %.2g; p = %.2g" % rp)
g.annotate(stats.pearsonr, stat="correlation")
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, "correlation = %.2g; p = %.2g" % rp)
def rsquared(x, y):
return stats.pearsonr(x, y)[0] ** 2
r2 = rsquared(self.x, self.y)
g.annotate(rsquared)
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, "rsquared = %.2g" % r2)
template = "{stat} = {val:.3g} (p = {p:.3g})"
g.annotate(stats.pearsonr, template=template)
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, template.format(stat="pearsonr",
val=rp[0], p=rp[1]))
def test_space(self):
g = ag.JointGrid("x", "y", self.data, space=0)
joint_bounds = g.ax_joint.bbox.bounds
marg_x_bounds = g.ax_marg_x.bbox.bounds
marg_y_bounds = g.ax_marg_y.bbox.bounds
nt.assert_equal(joint_bounds[2], marg_x_bounds[2])
nt.assert_equal(joint_bounds[3], marg_y_bounds[3])
| bsd-3-clause |
PrashntS/scikit-learn | sklearn/datasets/svmlight_format.py | 79 | 15976 | """This module implements a loader and dumper for the svmlight format
This format is a text-based format, with one sample per line. It does
not store zero-valued features and hence is suitable for sparse datasets.
The first element of each line can be used to store a target variable to
predict.
This format is used as the default format for both svmlight and the
libsvm command line programs.
"""
# Authors: Mathieu Blondel <mathieu@mblondel.org>
# Lars Buitinck <L.J.Buitinck@uva.nl>
# Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
from contextlib import closing
import io
import os.path
import numpy as np
import scipy.sparse as sp
from ._svmlight_format import _load_svmlight_file
from .. import __version__
from ..externals import six
from ..externals.six import u, b
from ..externals.six.moves import range, zip
from ..utils import check_array
from ..utils.fixes import frombuffer_empty
def load_svmlight_file(f, n_features=None, dtype=np.float64,
multilabel=False, zero_based="auto", query_id=False):
"""Load datasets in the svmlight / libsvm format into sparse CSR matrix
This format is a text-based format, with one sample per line. It does
not store zero-valued features and hence is suitable for sparse datasets.
The first element of each line can be used to store a target variable
to predict.
This format is used as the default format for both svmlight and the
libsvm command line programs.
Parsing a text-based source can be expensive. When working
repeatedly on the same dataset, it is recommended to wrap this
loader with joblib.Memory.cache to store a memmapped backup of the
CSR results of the first call and benefit from the near-instantaneous
loading of memmapped structures for the subsequent calls.
In case the file contains pairwise preference constraints (known
as "qid" in the svmlight format), these are ignored unless the
query_id parameter is set to True. These pairwise preference
constraints can be used to constrain the combination of samples
when using pairwise loss functions (as is the case in some
learning to rank problems) so that only pairs with the same
query_id value are considered.
This implementation is written in Cython and is reasonably fast.
However, a faster API-compatible loader is also available at:
https://github.com/mblondel/svmlight-loader
Parameters
----------
f : {str, file-like, int}
(Path to) a file to load. If a path ends in ".gz" or ".bz2", it will
be uncompressed on the fly. If an integer is passed, it is assumed to
be a file descriptor. A file-like or file descriptor will not be closed
by this function. A file-like object must be opened in binary mode.
n_features : int or None
The number of features to use. If None, it will be inferred. This
argument is useful to load several files that are subsets of a
bigger sliced dataset: each subset might not have examples of
every feature, hence the inferred shape might vary from one
slice to another.
multilabel : boolean, optional, default False
Samples may have several labels each (see
http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
zero_based : boolean or "auto", optional, default "auto"
Whether column indices in f are zero-based (True) or one-based
(False). If column indices are one-based, they are transformed to
zero-based to match Python/NumPy conventions.
If set to "auto", a heuristic check is applied to determine this from
the file contents. Both kinds of files occur "in the wild", but they
are unfortunately not self-identifying. Using "auto" or True should
always be safe.
query_id : boolean, default False
If True, will return the query_id array for each file.
dtype : numpy data type, default np.float64
Data type of dataset to be loaded. This will be the data type of the
output numpy arrays ``X`` and ``y``.
Returns
-------
X : scipy.sparse matrix of shape (n_samples, n_features)
y : ndarray of shape (n_samples,), or, in the multilabel case, a list of
tuples of length n_samples.
query_id : array of shape (n_samples,)
query_id for each sample. Only returned when query_id is set to
True.
See also
--------
load_svmlight_files: similar function for loading multiple files in this
format, enforcing the same number of features/columns on all of them.
Examples
--------
To use joblib.Memory to cache the svmlight file::
from sklearn.externals.joblib import Memory
from sklearn.datasets import load_svmlight_file
mem = Memory("./mycache")
@mem.cache
def get_data():
data = load_svmlight_file("mysvmlightfile")
return data[0], data[1]
X, y = get_data()
"""
return tuple(load_svmlight_files([f], n_features, dtype, multilabel,
zero_based, query_id))
def _gen_open(f):
if isinstance(f, int): # file descriptor
return io.open(f, "rb", closefd=False)
elif not isinstance(f, six.string_types):
raise TypeError("expected {str, int, file-like}, got %s" % type(f))
_, ext = os.path.splitext(f)
if ext == ".gz":
import gzip
return gzip.open(f, "rb")
elif ext == ".bz2":
from bz2 import BZ2File
return BZ2File(f, "rb")
else:
return open(f, "rb")
def _open_and_load(f, dtype, multilabel, zero_based, query_id):
if hasattr(f, "read"):
actual_dtype, data, ind, indptr, labels, query = \
_load_svmlight_file(f, dtype, multilabel, zero_based, query_id)
# XXX remove closing when Python 2.7+/3.1+ required
else:
with closing(_gen_open(f)) as f:
actual_dtype, data, ind, indptr, labels, query = \
_load_svmlight_file(f, dtype, multilabel, zero_based, query_id)
# convert from array.array, give data the right dtype
if not multilabel:
labels = frombuffer_empty(labels, np.float64)
data = frombuffer_empty(data, actual_dtype)
indices = frombuffer_empty(ind, np.intc)
indptr = np.frombuffer(indptr, dtype=np.intc) # never empty
query = frombuffer_empty(query, np.intc)
data = np.asarray(data, dtype=dtype) # no-op for float{32,64}
return data, indices, indptr, labels, query
def load_svmlight_files(files, n_features=None, dtype=np.float64,
multilabel=False, zero_based="auto", query_id=False):
"""Load dataset from multiple files in SVMlight format
This function is equivalent to mapping load_svmlight_file over a list of
files, except that the results are concatenated into a single, flat list
and the sample vectors are constrained to all have the same number of
features.
In case the file contains pairwise preference constraints (known
as "qid" in the svmlight format), these are ignored unless the
query_id parameter is set to True. These pairwise preference
constraints can be used to constrain the combination of samples
when using pairwise loss functions (as is the case in some
learning to rank problems) so that only pairs with the same
query_id value are considered.
Parameters
----------
files : iterable over {str, file-like, int}
(Paths of) files to load. If a path ends in ".gz" or ".bz2", it will
be uncompressed on the fly. If an integer is passed, it is assumed to
be a file descriptor. File-likes and file descriptors will not be
closed by this function. File-like objects must be opened in binary
mode.
n_features : int or None
The number of features to use. If None, it will be inferred from the
maximum column index occurring in any of the files.
This can be set to a higher value than the actual number of features
in any of the input files, but setting it to a lower value will cause
an exception to be raised.
multilabel : boolean, optional
Samples may have several labels each (see
http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
zero_based : boolean or "auto", optional
Whether column indices in f are zero-based (True) or one-based
(False). If column indices are one-based, they are transformed to
zero-based to match Python/NumPy conventions.
If set to "auto", a heuristic check is applied to determine this from
the file contents. Both kinds of files occur "in the wild", but they
are unfortunately not self-identifying. Using "auto" or True should
always be safe.
query_id : boolean, defaults to False
If True, will return the query_id array for each file.
dtype : numpy data type, default np.float64
Data type of dataset to be loaded. This will be the data type of the
output numpy arrays ``X`` and ``y``.
Returns
-------
[X1, y1, ..., Xn, yn]
where each (Xi, yi) pair is the result from load_svmlight_file(files[i]).
If query_id is set to True, this will return instead [X1, y1, q1,
..., Xn, yn, qn] where (Xi, yi, qi) is the result from
load_svmlight_file(files[i])
Notes
-----
When fitting a model to a matrix X_train and evaluating it against a
matrix X_test, it is essential that X_train and X_test have the same
number of features (X_train.shape[1] == X_test.shape[1]). This may not
be the case if you load the files individually with load_svmlight_file.
See also
--------
load_svmlight_file
"""
r = [_open_and_load(f, dtype, multilabel, bool(zero_based), bool(query_id))
for f in files]
if (zero_based is False
or zero_based == "auto" and all(np.min(tmp[1]) > 0 for tmp in r)):
for ind in r:
indices = ind[1]
indices -= 1
n_f = max(ind[1].max() for ind in r) + 1
if n_features is None:
n_features = n_f
elif n_features < n_f:
raise ValueError("n_features was set to {},"
" but input file contains {} features"
.format(n_features, n_f))
result = []
for data, indices, indptr, y, query_values in r:
shape = (indptr.shape[0] - 1, n_features)
X = sp.csr_matrix((data, indices, indptr), shape)
X.sort_indices()
result += X, y
if query_id:
result.append(query_values)
return result
def _dump_svmlight(X, y, f, multilabel, one_based, comment, query_id):
is_sp = int(hasattr(X, "tocsr"))
if X.dtype.kind == 'i':
value_pattern = u("%d:%d")
else:
value_pattern = u("%d:%.16g")
if y.dtype.kind == 'i':
label_pattern = u("%d")
else:
label_pattern = u("%.16g")
line_pattern = u("%s")
if query_id is not None:
line_pattern += u(" qid:%d")
line_pattern += u(" %s\n")
if comment:
f.write(b("# Generated by dump_svmlight_file from scikit-learn %s\n"
% __version__))
f.write(b("# Column indices are %s-based\n"
% ["zero", "one"][one_based]))
f.write(b("#\n"))
f.writelines(b("# %s\n" % line) for line in comment.splitlines())
for i in range(X.shape[0]):
if is_sp:
span = slice(X.indptr[i], X.indptr[i + 1])
row = zip(X.indices[span], X.data[span])
else:
nz = X[i] != 0
row = zip(np.where(nz)[0], X[i, nz])
s = " ".join(value_pattern % (j + one_based, x) for j, x in row)
if multilabel:
nz_labels = np.where(y[i] != 0)[0]
labels_str = ",".join(label_pattern % j for j in nz_labels)
else:
labels_str = label_pattern % y[i]
if query_id is not None:
feat = (labels_str, query_id[i], s)
else:
feat = (labels_str, s)
f.write((line_pattern % feat).encode('ascii'))
def dump_svmlight_file(X, y, f, zero_based=True, comment=None, query_id=None,
multilabel=False):
"""Dump the dataset in svmlight / libsvm file format.
This format is a text-based format, with one sample per line. It does
not store zero-valued features and hence is suitable for sparse datasets.
The first element of each line can be used to store a target variable
to predict.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples] or [n_samples, n_labels]
Target values. Class labels must be an integer or float, or array-like
objects of integer or float for multilabel classifications.
f : string or file-like in binary mode
If string, specifies the path that will contain the data.
If file-like, data will be written to f. f should be opened in binary
mode.
zero_based : boolean, optional
Whether column indices should be written zero-based (True) or one-based
(False).
comment : string, optional
Comment to insert at the top of the file. This should be either a
Unicode string, which will be encoded as UTF-8, or an ASCII byte
string.
If a comment is given, then it will be preceded by one that identifies
the file as having been dumped by scikit-learn. Note that not all
tools grok comments in SVMlight files.
query_id : array-like, shape = [n_samples]
Array containing pairwise preference constraints (qid in svmlight
format).
multilabel : boolean, optional
Samples may have several labels each (see
http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
"""
if comment is not None:
# Convert comment string to list of lines in UTF-8.
# If a byte string is passed, then check whether it's ASCII;
# if a user wants to get fancy, they'll have to decode themselves.
# Avoid mention of str and unicode types for Python 3.x compat.
if isinstance(comment, bytes):
comment.decode("ascii") # just for the exception
else:
comment = comment.encode("utf-8")
if six.b("\0") in comment:
raise ValueError("comment string contains NUL byte")
y = np.asarray(y)
if y.ndim != 1 and not multilabel:
raise ValueError("expected y of shape (n_samples,), got %r"
% (y.shape,))
Xval = check_array(X, accept_sparse='csr')
if Xval.shape[0] != y.shape[0]:
raise ValueError("X.shape[0] and y.shape[0] should be the same, got"
" %r and %r instead." % (Xval.shape[0], y.shape[0]))
# We had some issues with CSR matrices with unsorted indices (e.g. #1501),
# so sort them here, but first make sure we don't modify the user's X.
# TODO We can do this cheaper; sorted_indices copies the whole matrix.
if Xval is X and hasattr(Xval, "sorted_indices"):
X = Xval.sorted_indices()
else:
X = Xval
if hasattr(X, "sort_indices"):
X.sort_indices()
if query_id is not None:
query_id = np.asarray(query_id)
if query_id.shape[0] != y.shape[0]:
raise ValueError("expected query_id of shape (n_samples,), got %r"
% (query_id.shape,))
one_based = not zero_based
if hasattr(f, "write"):
_dump_svmlight(X, y, f, multilabel, one_based, comment, query_id)
else:
with open(f, "wb") as f:
_dump_svmlight(X, y, f, multilabel, one_based, comment, query_id)
| bsd-3-clause |
patvarilly/units_and_physics | docs/sphinxext/numpydoc/tests/test_docscrape.py | 2 | 15295 | # -*- encoding:utf-8 -*-
import sys, os
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from docscrape import NumpyDocString, FunctionDoc, ClassDoc
from docscrape_sphinx import SphinxDocString, SphinxClassDoc
from nose.tools import *
doc_txt = '''\
numpy.multivariate_normal(mean, cov, shape=None, spam=None)
Draw values from a multivariate normal distribution with specified
mean and covariance.
The multivariate normal or Gaussian distribution is a generalisation
of the one-dimensional normal distribution to higher dimensions.
Parameters
----------
mean : (N,) ndarray
Mean of the N-dimensional distribution.
.. math::
(1+2+3)/3
cov : (N,N) ndarray
Covariance matrix of the distribution.
shape : tuple of ints
Given a shape of, for example, (m,n,k), m*n*k samples are
generated, and packed in an m-by-n-by-k arrangement. Because
each sample is N-dimensional, the output shape is (m,n,k,N).
Returns
-------
out : ndarray
The drawn samples, arranged according to `shape`. If the
shape given is (m,n,...), then the shape of `out` is is
(m,n,...,N).
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
value drawn from the distribution.
Other Parameters
----------------
spam : parrot
A parrot off its mortal coil.
Raises
------
RuntimeError
Some error
Warns
-----
RuntimeWarning
Some warning
Warnings
--------
Certain warnings apply.
Notes
-----
Instead of specifying the full covariance matrix, popular
approximations include:
- Spherical covariance (`cov` is a multiple of the identity matrix)
- Diagonal covariance (`cov` has non-negative elements only on the diagonal)
This geometrical property can be seen in two dimensions by plotting
generated data-points:
>>> mean = [0,0]
>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
>>> x,y = multivariate_normal(mean,cov,5000).T
>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
Note that the covariance matrix must be symmetric and non-negative
definite.
References
----------
.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
Processes," 3rd ed., McGraw-Hill Companies, 1991
.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
2nd ed., Wiley, 2001.
See Also
--------
some, other, funcs
otherfunc : relationship
Examples
--------
>>> mean = (1,2)
>>> cov = [[1,0],[1,0]]
>>> x = multivariate_normal(mean,cov,(3,3))
>>> print x.shape
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the
standard deviation:
>>> print list( (x[0,0,:] - mean) < 0.6 )
[True, True]
.. index:: random
:refguide: random;distributions, random;gauss
'''
doc = NumpyDocString(doc_txt)
def test_signature():
assert doc['Signature'].startswith('numpy.multivariate_normal(')
assert doc['Signature'].endswith('spam=None)')
def test_summary():
assert doc['Summary'][0].startswith('Draw values')
assert doc['Summary'][-1].endswith('covariance.')
def test_extended_summary():
assert doc['Extended Summary'][0].startswith('The multivariate normal')
def test_parameters():
assert_equal(len(doc['Parameters']), 3)
assert_equal([n for n,_,_ in doc['Parameters']], ['mean','cov','shape'])
arg, arg_type, desc = doc['Parameters'][1]
assert_equal(arg_type, '(N,N) ndarray')
assert desc[0].startswith('Covariance matrix')
assert doc['Parameters'][0][-1][-2] == ' (1+2+3)/3'
def test_other_parameters():
assert_equal(len(doc['Other Parameters']), 1)
assert_equal([n for n,_,_ in doc['Other Parameters']], ['spam'])
arg, arg_type, desc = doc['Other Parameters'][0]
assert_equal(arg_type, 'parrot')
assert desc[0].startswith('A parrot off its mortal coil')
def test_returns():
assert_equal(len(doc['Returns']), 1)
arg, arg_type, desc = doc['Returns'][0]
assert_equal(arg, 'out')
assert_equal(arg_type, 'ndarray')
assert desc[0].startswith('The drawn samples')
assert desc[-1].endswith('distribution.')
def test_notes():
assert doc['Notes'][0].startswith('Instead')
assert doc['Notes'][-1].endswith('definite.')
assert_equal(len(doc['Notes']), 17)
def test_references():
assert doc['References'][0].startswith('..')
assert doc['References'][-1].endswith('2001.')
def test_examples():
assert doc['Examples'][0].startswith('>>>')
assert doc['Examples'][-1].endswith('True]')
def test_index():
assert_equal(doc['index']['default'], 'random')
print doc['index']
assert_equal(len(doc['index']), 2)
assert_equal(len(doc['index']['refguide']), 2)
def non_blank_line_by_line_compare(a,b):
a = [l for l in a.split('\n') if l.strip()]
b = [l for l in b.split('\n') if l.strip()]
for n,line in enumerate(a):
if not line == b[n]:
raise AssertionError("Lines %s of a and b differ: "
"\n>>> %s\n<<< %s\n" %
(n,line,b[n]))
def test_str():
non_blank_line_by_line_compare(str(doc),
"""numpy.multivariate_normal(mean, cov, shape=None, spam=None)
Draw values from a multivariate normal distribution with specified
mean and covariance.
The multivariate normal or Gaussian distribution is a generalisation
of the one-dimensional normal distribution to higher dimensions.
Parameters
----------
mean : (N,) ndarray
Mean of the N-dimensional distribution.
.. math::
(1+2+3)/3
cov : (N,N) ndarray
Covariance matrix of the distribution.
shape : tuple of ints
Given a shape of, for example, (m,n,k), m*n*k samples are
generated, and packed in an m-by-n-by-k arrangement. Because
each sample is N-dimensional, the output shape is (m,n,k,N).
Returns
-------
out : ndarray
The drawn samples, arranged according to `shape`. If the
shape given is (m,n,...), then the shape of `out` is is
(m,n,...,N).
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
value drawn from the distribution.
Other Parameters
----------------
spam : parrot
A parrot off its mortal coil.
Raises
------
RuntimeError :
Some error
Warns
-----
RuntimeWarning :
Some warning
Warnings
--------
Certain warnings apply.
See Also
--------
`some`_, `other`_, `funcs`_
`otherfunc`_
relationship
Notes
-----
Instead of specifying the full covariance matrix, popular
approximations include:
- Spherical covariance (`cov` is a multiple of the identity matrix)
- Diagonal covariance (`cov` has non-negative elements only on the diagonal)
This geometrical property can be seen in two dimensions by plotting
generated data-points:
>>> mean = [0,0]
>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
>>> x,y = multivariate_normal(mean,cov,5000).T
>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
Note that the covariance matrix must be symmetric and non-negative
definite.
References
----------
.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
Processes," 3rd ed., McGraw-Hill Companies, 1991
.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
2nd ed., Wiley, 2001.
Examples
--------
>>> mean = (1,2)
>>> cov = [[1,0],[1,0]]
>>> x = multivariate_normal(mean,cov,(3,3))
>>> print x.shape
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the
standard deviation:
>>> print list( (x[0,0,:] - mean) < 0.6 )
[True, True]
.. index:: random
:refguide: random;distributions, random;gauss""")
def test_sphinx_str():
sphinx_doc = SphinxDocString(doc_txt)
non_blank_line_by_line_compare(str(sphinx_doc),
"""
.. index:: random
single: random;distributions, random;gauss
Draw values from a multivariate normal distribution with specified
mean and covariance.
The multivariate normal or Gaussian distribution is a generalisation
of the one-dimensional normal distribution to higher dimensions.
:Parameters:
**mean** : (N,) ndarray
Mean of the N-dimensional distribution.
.. math::
(1+2+3)/3
**cov** : (N,N) ndarray
Covariance matrix of the distribution.
**shape** : tuple of ints
Given a shape of, for example, (m,n,k), m*n*k samples are
generated, and packed in an m-by-n-by-k arrangement. Because
each sample is N-dimensional, the output shape is (m,n,k,N).
:Returns:
**out** : ndarray
The drawn samples, arranged according to `shape`. If the
shape given is (m,n,...), then the shape of `out` is is
(m,n,...,N).
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
value drawn from the distribution.
:Other Parameters:
**spam** : parrot
A parrot off its mortal coil.
:Raises:
**RuntimeError** :
Some error
:Warns:
**RuntimeWarning** :
Some warning
.. warning::
Certain warnings apply.
.. seealso::
:obj:`some`, :obj:`other`, :obj:`funcs`
:obj:`otherfunc`
relationship
.. rubric:: Notes
Instead of specifying the full covariance matrix, popular
approximations include:
- Spherical covariance (`cov` is a multiple of the identity matrix)
- Diagonal covariance (`cov` has non-negative elements only on the diagonal)
This geometrical property can be seen in two dimensions by plotting
generated data-points:
>>> mean = [0,0]
>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
>>> x,y = multivariate_normal(mean,cov,5000).T
>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
Note that the covariance matrix must be symmetric and non-negative
definite.
.. rubric:: References
.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
Processes," 3rd ed., McGraw-Hill Companies, 1991
.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
2nd ed., Wiley, 2001.
.. only:: latex
[1]_, [2]_
.. rubric:: Examples
>>> mean = (1,2)
>>> cov = [[1,0],[1,0]]
>>> x = multivariate_normal(mean,cov,(3,3))
>>> print x.shape
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the
standard deviation:
>>> print list( (x[0,0,:] - mean) < 0.6 )
[True, True]
""")
doc2 = NumpyDocString("""
Returns array of indices of the maximum values of along the given axis.
Parameters
----------
a : {array_like}
Array to look in.
axis : {None, integer}
If None, the index is into the flattened array, otherwise along
the specified axis""")
def test_parameters_without_extended_description():
assert_equal(len(doc2['Parameters']), 2)
doc3 = NumpyDocString("""
my_signature(*params, **kwds)
Return this and that.
""")
def test_escape_stars():
signature = str(doc3).split('\n')[0]
assert_equal(signature, 'my_signature(\*params, \*\*kwds)')
doc4 = NumpyDocString(
"""a.conj()
Return an array with all complex-valued elements conjugated.""")
def test_empty_extended_summary():
assert_equal(doc4['Extended Summary'], [])
doc5 = NumpyDocString(
"""
a.something()
Raises
------
LinAlgException
If array is singular.
Warns
-----
SomeWarning
If needed
""")
def test_raises():
assert_equal(len(doc5['Raises']), 1)
name,_,desc = doc5['Raises'][0]
assert_equal(name,'LinAlgException')
assert_equal(desc,['If array is singular.'])
def test_warns():
assert_equal(len(doc5['Warns']), 1)
name,_,desc = doc5['Warns'][0]
assert_equal(name,'SomeWarning')
assert_equal(desc,['If needed'])
def test_see_also():
doc6 = NumpyDocString(
"""
z(x,theta)
See Also
--------
func_a, func_b, func_c
func_d : some equivalent func
foo.func_e : some other func over
multiple lines
func_f, func_g, :meth:`func_h`, func_j,
func_k
:obj:`baz.obj_q`
:class:`class_j`: fubar
foobar
""")
assert len(doc6['See Also']) == 12
for func, desc, role in doc6['See Also']:
if func in ('func_a', 'func_b', 'func_c', 'func_f',
'func_g', 'func_h', 'func_j', 'func_k', 'baz.obj_q'):
assert(not desc)
else:
assert(desc)
if func == 'func_h':
assert role == 'meth'
elif func == 'baz.obj_q':
assert role == 'obj'
elif func == 'class_j':
assert role == 'class'
else:
assert role is None
if func == 'func_d':
assert desc == ['some equivalent func']
elif func == 'foo.func_e':
assert desc == ['some other func over', 'multiple lines']
elif func == 'class_j':
assert desc == ['fubar', 'foobar']
def test_see_also_print():
class Dummy(object):
"""
See Also
--------
func_a, func_b
func_c : some relationship
goes here
func_d
"""
pass
obj = Dummy()
s = str(FunctionDoc(obj, role='func'))
assert(':func:`func_a`, :func:`func_b`' in s)
assert(' some relationship' in s)
assert(':func:`func_d`' in s)
doc7 = NumpyDocString("""
Doc starts on second line.
""")
def test_empty_first_line():
assert doc7['Summary'][0].startswith('Doc starts')
def test_no_summary():
str(SphinxDocString("""
Parameters
----------"""))
def test_unicode():
doc = SphinxDocString("""
öäöäöäöäöåååå
öäöäöäööäååå
Parameters
----------
ååå : äää
ööö
Returns
-------
ååå : ööö
äää
""")
assert doc['Summary'][0] == u'öäöäöäöäöåååå'.encode('utf-8')
def test_plot_examples():
cfg = dict(use_plots=True)
doc = SphinxDocString("""
Examples
--------
>>> import matplotlib.pyplot as plt
>>> plt.plot([1,2,3],[4,5,6])
>>> plt.show()
""", config=cfg)
assert 'plot::' in str(doc), str(doc)
doc = SphinxDocString("""
Examples
--------
.. plot::
import matplotlib.pyplot as plt
plt.plot([1,2,3],[4,5,6])
plt.show()
""", config=cfg)
assert str(doc).count('plot::') == 1, str(doc)
def test_class_members():
class Dummy(object):
"""
Dummy class.
"""
def spam(self, a, b):
"""Spam\n\nSpam spam."""
pass
def ham(self, c, d):
"""Cheese\n\nNo cheese."""
pass
for cls in (ClassDoc, SphinxClassDoc):
doc = cls(Dummy, config=dict(show_class_members=False))
assert 'Methods' not in str(doc), (cls, str(doc))
assert 'spam' not in str(doc), (cls, str(doc))
assert 'ham' not in str(doc), (cls, str(doc))
doc = cls(Dummy, config=dict(show_class_members=True))
assert 'Methods' in str(doc), (cls, str(doc))
assert 'spam' in str(doc), (cls, str(doc))
assert 'ham' in str(doc), (cls, str(doc))
if cls is SphinxClassDoc:
assert '.. autosummary::' in str(doc), str(doc)
if __name__ == "__main__":
import nose
nose.run()
| gpl-3.0 |
nhejazi/scikit-learn | examples/classification/plot_lda_qda.py | 32 | 5476 | """
====================================================================
Linear and Quadratic Discriminant Analysis with covariance ellipsoid
====================================================================
This example plots the covariance ellipsoids of each class and
decision boundary learned by LDA and QDA. The ellipsoids display
the double standard deviation for each class. With LDA, the
standard deviation is the same for all the classes, while each
class has its own standard deviation with QDA.
"""
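The "double standard deviation" ellipsoids described above come straight from an eigendecomposition of each class covariance matrix, as the `plot_ellipse` helper further down does. A minimal standalone sketch of that computation in plain NumPy (function name and return convention are illustrative):

```python
import numpy as np

def ellipse_params(cov):
    """Return (width, height, angle_deg) of the 2-sigma ellipse of a 2x2 covariance."""
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues ascending, eigenvectors as columns
    major = vecs[:, 1]                 # direction of the largest eigenvalue
    # an ellipse is symmetric under 180-degree rotation, so normalize the angle
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    # axis lengths: 2 * sqrt(eigenvalue), i.e. one standard deviation on each side
    width, height = 2.0 * np.sqrt(vals[1]), 2.0 * np.sqrt(vals[0])
    return float(width), float(height), float(angle)

# axis-aligned covariance: major axis of length 4 along x, minor axis of length 2
print(ellipse_params(np.array([[4.0, 0.0], [0.0, 1.0]])))
```

With LDA every class gets the same `cov` (here `lda.covariance_`), so all ellipses share shape and orientation; with QDA each class supplies its own matrix from `qda.covariances_`.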
print(__doc__)
from scipy import linalg
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import colors
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
# #############################################################################
# Colormap
cmap = colors.LinearSegmentedColormap(
'red_blue_classes',
{'red': [(0, 1, 1), (1, 0.7, 0.7)],
'green': [(0, 0.7, 0.7), (1, 0.7, 0.7)],
'blue': [(0, 0.7, 0.7), (1, 1, 1)]})
plt.cm.register_cmap(cmap=cmap)
# #############################################################################
# Generate datasets
def dataset_fixed_cov():
'''Generate 2 Gaussians samples with the same covariance matrix'''
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0., -0.23], [0.83, .23]])
X = np.r_[np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C) + np.array([1, 1])]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
def dataset_cov():
'''Generate 2 Gaussians samples with different covariance matrices'''
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0., -1.], [2.5, .7]]) * 2.
X = np.r_[np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C.T) + np.array([1, 4])]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
# #############################################################################
# Plot functions
def plot_data(lda, X, y, y_pred, fig_index):
splot = plt.subplot(2, 2, fig_index)
if fig_index == 1:
plt.title('Linear Discriminant Analysis')
plt.ylabel('Data with\n fixed covariance')
elif fig_index == 2:
plt.title('Quadratic Discriminant Analysis')
elif fig_index == 3:
plt.ylabel('Data with\n varying covariances')
tp = (y == y_pred)  # correctly classified ("true") predictions
tp0, tp1 = tp[y == 0], tp[y == 1]
X0, X1 = X[y == 0], X[y == 1]
X0_tp, X0_fp = X0[tp0], X0[~tp0]
X1_tp, X1_fp = X1[tp1], X1[~tp1]
alpha = 0.5
# class 0: dots
plt.plot(X0_tp[:, 0], X0_tp[:, 1], 'o', alpha=alpha,
color='red', markeredgecolor='k')
plt.plot(X0_fp[:, 0], X0_fp[:, 1], '*', alpha=alpha,
color='#990000', markeredgecolor='k') # dark red
# class 1: dots
plt.plot(X1_tp[:, 0], X1_tp[:, 1], 'o', alpha=alpha,
color='blue', markeredgecolor='k')
plt.plot(X1_fp[:, 0], X1_fp[:, 1], '*', alpha=alpha,
color='#000099', markeredgecolor='k') # dark blue
# class 0 and 1 : areas
nx, ny = 200, 100
x_min, x_max = plt.xlim()
y_min, y_max = plt.ylim()
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx),
np.linspace(y_min, y_max, ny))
Z = lda.predict_proba(np.c_[xx.ravel(), yy.ravel()])
Z = Z[:, 1].reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap='red_blue_classes',
norm=colors.Normalize(0., 1.))
plt.contour(xx, yy, Z, [0.5], linewidths=2., colors='k')
# means
plt.plot(lda.means_[0][0], lda.means_[0][1],
'o', color='black', markersize=10, markeredgecolor='k')
plt.plot(lda.means_[1][0], lda.means_[1][1],
'o', color='black', markersize=10, markeredgecolor='k')
return splot
def plot_ellipse(splot, mean, cov, color):
v, w = linalg.eigh(cov)
u = w[0] / linalg.norm(w[0])
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
# filled Gaussian at 2 standard deviation
ell = mpl.patches.Ellipse(mean, 2 * v[0] ** 0.5, 2 * v[1] ** 0.5,
180 + angle, facecolor=color,
edgecolor='yellow',
linewidth=2, zorder=2)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
splot.set_xticks(())
splot.set_yticks(())
def plot_lda_cov(lda, splot):
plot_ellipse(splot, lda.means_[0], lda.covariance_, 'red')
plot_ellipse(splot, lda.means_[1], lda.covariance_, 'blue')
def plot_qda_cov(qda, splot):
plot_ellipse(splot, qda.means_[0], qda.covariances_[0], 'red')
plot_ellipse(splot, qda.means_[1], qda.covariances_[1], 'blue')
for i, (X, y) in enumerate([dataset_fixed_cov(), dataset_cov()]):
# Linear Discriminant Analysis
lda = LinearDiscriminantAnalysis(solver="svd", store_covariance=True)
y_pred = lda.fit(X, y).predict(X)
splot = plot_data(lda, X, y, y_pred, fig_index=2 * i + 1)
plot_lda_cov(lda, splot)
plt.axis('tight')
# Quadratic Discriminant Analysis
qda = QuadraticDiscriminantAnalysis(store_covariances=True)
y_pred = qda.fit(X, y).predict(X)
splot = plot_data(qda, X, y, y_pred, fig_index=2 * i + 2)
plot_qda_cov(qda, splot)
plt.axis('tight')
plt.suptitle('Linear Discriminant Analysis vs Quadratic Discriminant'
'Analysis')
plt.show()
| bsd-3-clause |
jmanday/Master | TFM/scripts/matching-FlannBased.py | 1 | 4521 | # -*- coding: utf-8 -*-
#########################################################################
### Jesus Garcia Manday
### matching-FlannBased.py
### @Description: script to compute the matching between two sets of
###               descriptors from two images using the FLANN algorithm,
###               which trains on a collection of training descriptors
###               and calls its nearest-neighbour search methods to
###               find the best results
#########################################################################
import os
import sys
import numpy as np
import cv2
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import csv
PATH_DATABASES_TRAIN_IMAGES = "/Users/jesusgarciamanday/Documents/Master/TFM/databases/train-images3/" # data source of segmented images to compare against
PATH_DATABASES_QUERY_IMAGES = "/Users/jesusgarciamanday/Documents/Master/TFM/databases/query-images3/" # data source of images to classify
class DataMatching:
def __init__(self, imageSegmented, imageClassifier, value):
self.imageSegmented = imageSegmented
self.imageClassifier = imageClassifier
self.value = value
def getNameFile(file):
fileName = ""
if (len(file.split("R")) > 1):
fileName = file.split("R")[0]
else:
if (len(file.split("L")) > 1):
fileName = file.split("L")[0]
return fileName
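The matcher below keeps a `knnMatch` candidate only when its best distance is clearly smaller than the second-best one (Lowe's ratio test, the `m.distance < 0.75*n.distance` check). A minimal, library-free sketch of that filter (the distance pairs are illustrative):

```python
def ratio_test(pairs, ratio=0.75):
    """Keep indices of (d1, d2) nearest/second-nearest distance pairs where d1 < ratio * d2."""
    return [i for i, (d1, d2) in enumerate(pairs) if d1 < ratio * d2]

# each pair: (distance to best match, distance to second-best match)
pairs = [(10.0, 50.0),   # unambiguous: kept
         (40.0, 45.0),   # ambiguous: rejected
         (5.0, 100.0)]   # unambiguous: kept
print(ratio_test(pairs))  # -> [0, 2]
```

The same predicate is what populates `good` in `matchingFlannBased`; only the count of surviving matches per training image is used afterwards.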
def matchingFlannBased(filesTrainImages, filesQueryImages):
valuesDataMatching = []
results = []
filesTrainImages.sort()
filesQueryImages.sort()
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
for fImgQuery in filesQueryImages:
nMatch = 0
index = 0
firstImage = ""
imgQuery = cv2.imread(PATH_DATABASES_QUERY_IMAGES + fImgQuery,0)
nameImgQuery = getNameFile(fImgQuery)
for fImgTrain in filesTrainImages:
imgSeg = cv2.imread(PATH_DATABASES_TRAIN_IMAGES + fImgTrain,0)
nameImgTrain = getNameFile(fImgTrain)
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(imgQuery,None)
kp2, des2 = sift.detectAndCompute(imgSeg,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=100) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
#print(len(matches[0]))
#max_dist = 0
#min_dist = 100
#for m, n in matches:
# dist = m.distance
# if dist < min_dist:
# min_dist = dist
# if dist > max_dist:
# max_dist = dist
good = []
i = 0
for m,n in matches:
if m.distance < 0.75*n.distance:
good.append([m])
#for i,(m,n) in enumerate(matches):
# if m.distance < max(2*min_dist, 0.02):
# good.append([m])
if ((nameImgTrain == firstImage) or (firstImage == "")):
nMatch = nMatch + len(good)
else:
valuesDataMatching.append({"imageQuery": nameImgQuery, "imageTrain": firstImage, "value": nMatch})
nMatch = len(good)
firstImage = nameImgTrain
# flush the tally for the last training image, otherwise it can never win max()
valuesDataMatching.append({"imageQuery": nameImgQuery, "imageTrain": firstImage, "value": nMatch})
firstImage = ""
nMatch = 0
valM = max(valuesDataMatching, key=lambda item:item['value'])
print(valM)
results.append(valM)
valuesDataMatching = []
with open('results2-FlannBased-SIFT.csv', 'w') as csvfile:
filewriter = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
filewriter.writerow(['Image Query', 'Image Train', "Value matching"])
for rs in results:
filewriter.writerow([rs['imageQuery'], rs['imageTrain'], rs['value']])
if __name__ == "__main__":
filesTrainImages = os.listdir(PATH_DATABASES_TRAIN_IMAGES)
filesQueryImages = os.listdir(PATH_DATABASES_QUERY_IMAGES)
matchingFlannBased(filesTrainImages, filesQueryImages)
#extra() | apache-2.0 |
saiwing-yeung/scikit-learn | setup.py | 25 | 11732 | #! /usr/bin/env python
#
# Copyright (C) 2007-2009 Cournapeau David <cournape@gmail.com>
# 2010 Fabian Pedregosa <fabian.pedregosa@inria.fr>
# License: 3-clause BSD
import subprocess
descr = """A set of python modules for machine learning and data mining"""
import sys
import os
import shutil
from distutils.command.clean import clean as Clean
from pkg_resources import parse_version
if sys.version_info[0] < 3:
import __builtin__ as builtins
else:
import builtins
# This is a bit (!) hackish: we are setting a global variable so that the main
# sklearn __init__ can detect if it is being loaded by the setup routine, to
# avoid attempting to load components that aren't built yet:
# the numpy distutils extensions that are used by scikit-learn to recursively
# build the compiled extensions in sub-packages is based on the Python import
# machinery.
builtins.__SKLEARN_SETUP__ = True
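The guard just set is a general pattern: flag the build in `builtins` *before* the package is imported, so the package's `__init__` can skip loading compiled submodules that do not exist yet. A generic sketch of both sides (the package name is illustrative):

```python
import builtins

# setup.py side: set the flag before the package is imported
builtins.__MYPKG_SETUP__ = True

# package __init__ side: consult the flag to decide whether compiled
# submodules are safe to load
def can_load_compiled():
    return not getattr(builtins, "__MYPKG_SETUP__", False)

print(can_load_compiled())  # -> False while the setup script is running
```

Using `getattr` with a default keeps normal imports (where the flag was never set) working unchanged.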
DISTNAME = 'scikit-learn'
DESCRIPTION = 'A set of python modules for machine learning and data mining'
with open('README.rst') as f:
LONG_DESCRIPTION = f.read()
MAINTAINER = 'Andreas Mueller'
MAINTAINER_EMAIL = 'amueller@ais.uni-bonn.de'
URL = 'http://scikit-learn.org'
LICENSE = 'new BSD'
DOWNLOAD_URL = 'http://sourceforge.net/projects/scikit-learn/files/'
# We can actually import a restricted version of sklearn that
# does not need the compiled code
import sklearn
VERSION = sklearn.__version__
# Optional setuptools features
# We need to import setuptools early, if we want setuptools features,
# as it monkey-patches the 'setup' function
# For some commands, use setuptools
SETUPTOOLS_COMMANDS = set([
'develop', 'release', 'bdist_egg', 'bdist_rpm',
'bdist_wininst', 'install_egg_info', 'build_sphinx',
'egg_info', 'easy_install', 'upload', 'bdist_wheel',
'--single-version-externally-managed',
])
if SETUPTOOLS_COMMANDS.intersection(sys.argv):
import setuptools
extra_setuptools_args = dict(
zip_safe=False, # the package can run out of an .egg file
include_package_data=True,
)
else:
extra_setuptools_args = dict()
# Custom clean command to remove build artifacts
class CleanCommand(Clean):
description = "Remove build artifacts from the source tree"
def run(self):
Clean.run(self)
# Remove c files if we are not within a sdist package
cwd = os.path.abspath(os.path.dirname(__file__))
remove_c_files = not os.path.exists(os.path.join(cwd, 'PKG-INFO'))
if remove_c_files:
cython_hash_file = os.path.join(cwd, 'cythonize.dat')
if os.path.exists(cython_hash_file):
os.unlink(cython_hash_file)
print('Will remove generated .c files')
if os.path.exists('build'):
shutil.rmtree('build')
for dirpath, dirnames, filenames in os.walk('sklearn'):
for filename in filenames:
if any(filename.endswith(suffix) for suffix in
(".so", ".pyd", ".dll", ".pyc")):
os.unlink(os.path.join(dirpath, filename))
continue
extension = os.path.splitext(filename)[1]
if remove_c_files and extension in ['.c', '.cpp']:
pyx_file = str.replace(filename, extension, '.pyx')
if os.path.exists(os.path.join(dirpath, pyx_file)):
os.unlink(os.path.join(dirpath, filename))
for dirname in dirnames:
if dirname == '__pycache__':
shutil.rmtree(os.path.join(dirpath, dirname))
cmdclass = {'clean': CleanCommand}
# Optional wheelhouse-uploader features
# To automate release of binary packages for scikit-learn we need a tool
# to download the packages generated by travis and appveyor workers (with
# version number matching the current release) and upload them all at once
# to PyPI at release time.
# The URL of the artifact repositories are configured in the setup.cfg file.
WHEELHOUSE_UPLOADER_COMMANDS = set(['fetch_artifacts', 'upload_all'])
if WHEELHOUSE_UPLOADER_COMMANDS.intersection(sys.argv):
import wheelhouse_uploader.cmd
cmdclass.update(vars(wheelhouse_uploader.cmd))
def configuration(parent_package='', top_path=None):
if os.path.exists('MANIFEST'):
os.remove('MANIFEST')
from numpy.distutils.misc_util import Configuration
config = Configuration(None, parent_package, top_path)
# Avoid non-useful msg:
# "Ignoring attempt to set 'name' (from ... "
config.set_options(ignore_setup_xxx_py=True,
assume_default_configuration=True,
delegate_options_to_subpackages=True,
quiet=True)
config.add_subpackage('sklearn')
return config
scipy_min_version = '0.9'
numpy_min_version = '1.6.1'
def get_scipy_status():
"""
Returns a dictionary containing a boolean specifying whether SciPy
is up-to-date, along with the version string (empty string if
not installed).
"""
scipy_status = {}
try:
import scipy
scipy_version = scipy.__version__
scipy_status['up_to_date'] = parse_version(
scipy_version) >= parse_version(scipy_min_version)
scipy_status['version'] = scipy_version
except ImportError:
scipy_status['up_to_date'] = False
scipy_status['version'] = ""
return scipy_status
def get_numpy_status():
"""
Returns a dictionary containing a boolean specifying whether NumPy
is up-to-date, along with the version string (empty string if
not installed).
"""
numpy_status = {}
try:
import numpy
numpy_version = numpy.__version__
numpy_status['up_to_date'] = parse_version(
numpy_version) >= parse_version(numpy_min_version)
numpy_status['version'] = numpy_version
except ImportError:
numpy_status['up_to_date'] = False
numpy_status['version'] = ""
return numpy_status
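The two status helpers above reduce to an import guard plus a numeric version comparison; a self-contained sketch of the same pattern (using a hand-rolled tuple comparison instead of `pkg_resources.parse_version`, so it only handles plain `X.Y.Z` strings):

```python
def version_tuple(version):
    """'1.10.2' -> (1, 10, 2); compares numerically, unlike plain string compare."""
    return tuple(int(part) for part in version.split('.'))

def get_dependency_status(module_name, min_version):
    status = {}
    try:
        module = __import__(module_name)
        status['version'] = module.__version__
        status['up_to_date'] = version_tuple(status['version']) >= version_tuple(min_version)
    except ImportError:
        status['version'] = ""
        status['up_to_date'] = False
    return status

# a string comparison would get this wrong: '1.10.0' < '1.6.1' lexicographically
print(version_tuple('1.10.0') >= version_tuple('1.6.1'))  # -> True
```

`setup_package` below consumes exactly such dictionaries to decide between a clear "out-of-date" and "not installed" error message.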
def generate_cython():
cwd = os.path.abspath(os.path.dirname(__file__))
print("Cythonizing sources")
p = subprocess.call([sys.executable, os.path.join(cwd,
'build_tools',
'cythonize.py'),
'sklearn'],
cwd=cwd)
if p != 0:
raise RuntimeError("Running cythonize failed!")
def setup_package():
metadata = dict(name=DISTNAME,
maintainer=MAINTAINER,
maintainer_email=MAINTAINER_EMAIL,
description=DESCRIPTION,
license=LICENSE,
url=URL,
version=VERSION,
download_url=DOWNLOAD_URL,
long_description=LONG_DESCRIPTION,
classifiers=['Intended Audience :: Science/Research',
'Intended Audience :: Developers',
'License :: OSI Approved',
'Programming Language :: C',
'Programming Language :: Python',
'Topic :: Software Development',
'Topic :: Scientific/Engineering',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Operating System :: Unix',
'Operating System :: MacOS',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
],
cmdclass=cmdclass,
**extra_setuptools_args)
if len(sys.argv) == 1 or (
len(sys.argv) >= 2 and ('--help' in sys.argv[1:] or
sys.argv[1] in ('--help-commands',
'egg_info',
'--version',
'clean'))):
# For these actions, NumPy is not required, nor Cythonization
#
# They are required to succeed without Numpy for example when
# pip is used to install Scikit-learn when Numpy is not yet present in
# the system.
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
metadata['version'] = VERSION
else:
numpy_status = get_numpy_status()
numpy_req_str = "scikit-learn requires NumPy >= {0}.\n".format(
numpy_min_version)
scipy_status = get_scipy_status()
scipy_req_str = "scikit-learn requires SciPy >= {0}.\n".format(
scipy_min_version)
instructions = ("Installation instructions are available on the "
"scikit-learn website: "
"http://scikit-learn.org/stable/install.html\n")
if numpy_status['up_to_date'] is False:
if numpy_status['version']:
raise ImportError("Your installation of Numerical Python "
"(NumPy) {0} is out-of-date.\n{1}{2}"
.format(numpy_status['version'],
numpy_req_str, instructions))
else:
raise ImportError("Numerical Python (NumPy) is not "
"installed.\n{0}{1}"
.format(numpy_req_str, instructions))
if scipy_status['up_to_date'] is False:
if scipy_status['version']:
raise ImportError("Your installation of Scientific Python "
"(SciPy) {0} is out-of-date.\n{1}{2}"
.format(scipy_status['version'],
scipy_req_str, instructions))
else:
raise ImportError("Scientific Python (SciPy) is not "
"installed.\n{0}{1}"
.format(scipy_req_str, instructions))
from numpy.distutils.core import setup
metadata['configuration'] = configuration
if len(sys.argv) >= 2 and sys.argv[1] != 'config':
# Cythonize if needed
print('Generating cython files')
cwd = os.path.abspath(os.path.dirname(__file__))
if not os.path.exists(os.path.join(cwd, 'PKG-INFO')):
# Generate Cython sources, unless building from source release
generate_cython()
# Clean left-over .so file
for dirpath, dirnames, filenames in os.walk(
os.path.join(cwd, 'sklearn')):
for filename in filenames:
extension = os.path.splitext(filename)[1]
if extension in (".so", ".pyd", ".dll"):
pyx_file = str.replace(filename, extension, '.pyx')
print(pyx_file)
if not os.path.exists(os.path.join(dirpath, pyx_file)):
os.unlink(os.path.join(dirpath, filename))
setup(**metadata)
if __name__ == "__main__":
setup_package()
| bsd-3-clause |
TomAugspurger/pandas | pandas/tests/extension/base/interface.py | 2 | 2982 | import numpy as np
from pandas.core.dtypes.common import is_extension_array_dtype
from pandas.core.dtypes.dtypes import ExtensionDtype
import pandas as pd
import pandas._testing as tm
from .base import BaseExtensionTests
class BaseInterfaceTests(BaseExtensionTests):
"""Tests that the basic interface is satisfied."""
# ------------------------------------------------------------------------
# Interface
# ------------------------------------------------------------------------
def test_len(self, data):
assert len(data) == 100
def test_size(self, data):
assert data.size == 100
def test_ndim(self, data):
assert data.ndim == 1
def test_can_hold_na_valid(self, data):
# GH-20761
assert data._can_hold_na is True
def test_memory_usage(self, data):
s = pd.Series(data)
result = s.memory_usage(index=False)
assert result == s.nbytes
def test_array_interface(self, data):
result = np.array(data)
assert result[0] == data[0]
result = np.array(data, dtype=object)
expected = np.array(list(data), dtype=object)
tm.assert_numpy_array_equal(result, expected)
def test_is_extension_array_dtype(self, data):
assert is_extension_array_dtype(data)
assert is_extension_array_dtype(data.dtype)
assert is_extension_array_dtype(pd.Series(data))
assert isinstance(data.dtype, ExtensionDtype)
def test_no_values_attribute(self, data):
# GH-20735: EA's with .values attribute give problems with internal
# code, disallowing this for now until solved
assert not hasattr(data, "values")
assert not hasattr(data, "_values")
def test_is_numeric_honored(self, data):
result = pd.Series(data)
assert result._mgr.blocks[0].is_numeric is data.dtype._is_numeric
def test_isna_extension_array(self, data_missing):
# If your `isna` returns an ExtensionArray, you must also implement
# _reduce. At the *very* least, you must implement any and all
na = data_missing.isna()
if is_extension_array_dtype(na):
assert na._reduce("any")
assert na.any()
assert not na._reduce("all")
assert not na.all()
assert na.dtype._is_boolean
def test_copy(self, data):
# GH#27083 removing deep keyword from EA.copy
assert data[0] != data[1]
result = data.copy()
data[1] = data[0]
assert result[1] != result[0]
def test_view(self, data):
# view with no dtype should return a shallow copy, *not* the same
# object
assert data[1] != data[0]
result = data.view()
assert result is not data
assert type(result) == type(data)
result[1] = result[0]
assert data[1] == data[0]
# check specifically that the `dtype` kwarg is accepted
data.view(dtype=None)
| bsd-3-clause |
huzq/scikit-learn | sklearn/metrics/_plot/tests/test_plot_roc_curve.py | 3 | 7954 | import pytest
import numpy as np
from numpy.testing import assert_allclose
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import plot_roc_curve
from sklearn.metrics import RocCurveDisplay
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
from sklearn.datasets import load_iris
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.base import ClassifierMixin
from sklearn.exceptions import NotFittedError
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import shuffle
from sklearn.compose import make_column_transformer
# TODO: Remove when https://github.com/numpy/numpy/issues/14397 is resolved
pytestmark = pytest.mark.filterwarnings(
"ignore:In future, it will be an error for 'np.bool_':DeprecationWarning:"
"matplotlib.*")
@pytest.fixture(scope="module")
def data():
return load_iris(return_X_y=True)
@pytest.fixture(scope="module")
def data_binary(data):
X, y = data
return X[y < 2], y[y < 2]
def test_plot_roc_curve_error_non_binary(pyplot, data):
X, y = data
clf = DecisionTreeClassifier()
clf.fit(X, y)
msg = "DecisionTreeClassifier should be a binary classifier"
with pytest.raises(ValueError, match=msg):
plot_roc_curve(clf, X, y)
@pytest.mark.parametrize(
"response_method, msg",
[("predict_proba", "response method predict_proba is not defined in "
"MyClassifier"),
("decision_function", "response method decision_function is not defined "
"in MyClassifier"),
("auto", "response method decision_function or predict_proba is not "
"defined in MyClassifier"),
("bad_method", "response_method must be 'predict_proba', "
"'decision_function' or 'auto'")])
def test_plot_roc_curve_error_no_response(pyplot, data_binary, response_method,
msg):
X, y = data_binary
class MyClassifier(ClassifierMixin):
def fit(self, X, y):
self.classes_ = [0, 1]
return self
clf = MyClassifier().fit(X, y)
with pytest.raises(ValueError, match=msg):
plot_roc_curve(clf, X, y, response_method=response_method)
@pytest.mark.parametrize("response_method",
["predict_proba", "decision_function"])
@pytest.mark.parametrize("with_sample_weight", [True, False])
@pytest.mark.parametrize("drop_intermediate", [True, False])
@pytest.mark.parametrize("with_strings", [True, False])
def test_plot_roc_curve(pyplot, response_method, data_binary,
with_sample_weight, drop_intermediate,
with_strings):
X, y = data_binary
pos_label = None
if with_strings:
y = np.array(["c", "b"])[y]
pos_label = "c"
if with_sample_weight:
rng = np.random.RandomState(42)
sample_weight = rng.randint(1, 4, size=(X.shape[0]))
else:
sample_weight = None
lr = LogisticRegression()
lr.fit(X, y)
viz = plot_roc_curve(lr, X, y, alpha=0.8, sample_weight=sample_weight,
drop_intermediate=drop_intermediate)
y_pred = getattr(lr, response_method)(X)
if y_pred.ndim == 2:
y_pred = y_pred[:, 1]
fpr, tpr, _ = roc_curve(y, y_pred, sample_weight=sample_weight,
drop_intermediate=drop_intermediate,
pos_label=pos_label)
assert_allclose(viz.roc_auc, auc(fpr, tpr))
assert_allclose(viz.fpr, fpr)
assert_allclose(viz.tpr, tpr)
assert viz.estimator_name == "LogisticRegression"
# cannot fail thanks to pyplot fixture
import matplotlib as mpl  # noqa
assert isinstance(viz.line_, mpl.lines.Line2D)
assert viz.line_.get_alpha() == 0.8
assert isinstance(viz.ax_, mpl.axes.Axes)
assert isinstance(viz.figure_, mpl.figure.Figure)
expected_label = "LogisticRegression (AUC = {:0.2f})".format(viz.roc_auc)
assert viz.line_.get_label() == expected_label
expected_pos_label = 1 if pos_label is None else pos_label
expected_ylabel = f"True Positive Rate (Positive label: " \
f"{expected_pos_label})"
expected_xlabel = f"False Positive Rate (Positive label: " \
f"{expected_pos_label})"
assert viz.ax_.get_ylabel() == expected_ylabel
assert viz.ax_.get_xlabel() == expected_xlabel
@pytest.mark.parametrize(
"clf", [LogisticRegression(),
make_pipeline(StandardScaler(), LogisticRegression()),
make_pipeline(make_column_transformer((StandardScaler(), [0, 1])),
LogisticRegression())])
def test_roc_curve_not_fitted_errors(pyplot, data_binary, clf):
X, y = data_binary
with pytest.raises(NotFittedError):
plot_roc_curve(clf, X, y)
clf.fit(X, y)
disp = plot_roc_curve(clf, X, y)
assert clf.__class__.__name__ in disp.line_.get_label()
assert disp.estimator_name == clf.__class__.__name__
def test_plot_roc_curve_estimator_name_multiple_calls(pyplot, data_binary):
# non-regression test checking that the `name` used when calling
# `plot_roc_curve` is used as well when calling `disp.plot()`
X, y = data_binary
clf_name = "my hand-crafted name"
clf = LogisticRegression().fit(X, y)
disp = plot_roc_curve(clf, X, y, name=clf_name)
assert disp.estimator_name == clf_name
pyplot.close("all")
disp.plot()
assert clf_name in disp.line_.get_label()
pyplot.close("all")
clf_name = "another_name"
disp.plot(name=clf_name)
assert clf_name in disp.line_.get_label()
@pytest.mark.parametrize(
"roc_auc, estimator_name, expected_label",
[
(0.9, None, "AUC = 0.90"),
(None, "my_est", "my_est"),
(0.8, "my_est2", "my_est2 (AUC = 0.80)")
]
)
def test_default_labels(pyplot, roc_auc, estimator_name,
expected_label):
fpr = np.array([0, 0.5, 1])
tpr = np.array([0, 0.5, 1])
disp = RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc,
estimator_name=estimator_name).plot()
assert disp.line_.get_label() == expected_label
@pytest.mark.parametrize(
"response_method", ["predict_proba", "decision_function"]
)
def test_plot_roc_curve_pos_label(pyplot, response_method):
# check that we can provide the positive label and display the proper
# statistics
X, y = load_breast_cancer(return_X_y=True)
# create a highly imbalanced classification task
idx_positive = np.flatnonzero(y == 1)
idx_negative = np.flatnonzero(y == 0)
idx_selected = np.hstack([idx_negative, idx_positive[:25]])
X, y = X[idx_selected], y[idx_selected]
X, y = shuffle(X, y, random_state=42)
# only use 2 features to make the problem even harder
X = X[:, :2]
y = np.array(
["cancer" if c == 1 else "not cancer" for c in y], dtype=object
)
X_train, X_test, y_train, y_test = train_test_split(
X, y, stratify=y, random_state=0,
)
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
# sanity check: the positive class is classes_[0], so the default
# pos_label would be misled by the class imbalance
assert classifier.classes_.tolist() == ["cancer", "not cancer"]
disp = plot_roc_curve(
classifier, X_test, y_test, pos_label="cancer",
response_method=response_method
)
roc_auc_limit = 0.95679
assert disp.roc_auc == pytest.approx(roc_auc_limit)
assert np.trapz(disp.tpr, disp.fpr) == pytest.approx(roc_auc_limit)
disp = plot_roc_curve(
classifier, X_test, y_test,
response_method=response_method,
)
assert disp.roc_auc == pytest.approx(roc_auc_limit)
assert np.trapz(disp.tpr, disp.fpr) == pytest.approx(roc_auc_limit)
| bsd-3-clause |
drewdru/AOI | controllers/segmentationController.py | 1 | 18817 | """
@package segmentationController
Controller for qml Segmentation
"""
import sys
import os
import numpy
import matplotlib.pyplot as plt
import random
import time
import math
sys.path.append(os.path.abspath(os.path.dirname(__file__) + '/' + '../..'))
from imageProcessor import colorModel, histogramService, imageService, imageComparison
from imageSegmentation import egbis, gabor, gaborSegment, kMeans, sphc
from roadLaneFinding import detectRoadLane
from imageFilters import filters
from PyQt5.QtCore import QCoreApplication, QDir
from PyQt5.QtCore import QObject, pyqtSlot, QVariant #, QVariantList
from PyQt5.QtQml import QJSValue
from PIL import Image, ImageChops
class SegmentationController(QObject):
""" Controller for the segmentation view """
def __init__(self, appDir=None):
QObject.__init__(self)
self.appDir = QDir.currentPath() if appDir is None else appDir
self.histogramService = histogramService.HistogramService(self.appDir)
self.imageService = imageService.ImageService(self.appDir)
self.maskList = []
self.createMaskList(3, 3)
@pyqtSlot(int, int)
def createMaskList(self, apertureWidth, apertureHeight):
if len(self.maskList) == apertureHeight and len(self.maskList) > 0:
if len(self.maskList[0]) == apertureWidth:
return
self.maskList = []
for height in range(apertureHeight):
modelRow = []
for width in range(apertureWidth):
modelRow.append(False)
self.maskList.append(modelRow)
@pyqtSlot(int, int, bool)
def updateCellMaskList(self, y, x, value):
self.maskList[x][y] = value
@pyqtSlot(str, int, bool, float, float, float, float, str, str)
def EfficientGraphBasedImageSegmentation(self, colorModelTag, currentImageChannelIndex, isOriginalImage,
sigma, neighborhood, k, min_comp_size, xPix, yPix):
"""
EfficientGraphBasedImageSegmentation
"""
if xPix == '' or yPix == '':
pixMouse = None
else:
pixMouse = (int(xPix), int(yPix))
outImagePath, imgPath = self.imageService.getImagePath(isOriginalImage)
if imgPath is None:
return
img = self.imageService.openImage(isOriginalImage)
if img is None:
return
segmentCount = 0
methodTimer = time.time()
if colorModelTag == 'RGB':
methodTimer = time.time()
data1, data2, segmentCount, forest = egbis.segmentateRun(sigma, neighborhood, k, min_comp_size,
img, outImagePath, pixMouse)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
self.histogramService.saveHistogram(img=img, model=colorModelTag)
if colorModelTag == 'YUV':
colorModel.rgbToYuv(img.load(), img.size)
methodTimer = time.time()
data1, data2, segmentCount, forest = egbis.segmentateRun(sigma, neighborhood, k, min_comp_size,
img, outImagePath, pixMouse)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
# img.show()
self.histogramService.saveHistogram(img=img, model=colorModelTag)
# colorModel.yuvToRgb(img.load(), img.size)
if colorModelTag == 'HSL':
data = numpy.asarray(img, dtype="float")
data = colorModel.rgbToHsl(data)
methodTimer = time.time()
data1, data2, segmentCount, forest = egbis.segmentateRun(sigma, neighborhood, k, min_comp_size,
data, outImagePath, pixMouse)
methodTimer = time.time() - methodTimer
self.histogramService.saveHistogram(data=data, model=colorModelTag)
timerTemp = time.time()
data = colorModel.hslToRgb(data)
img = Image.fromarray(numpy.asarray(numpy.clip(data, 0, 255), dtype="uint8"))
methodTimer = time.time() - timerTemp + methodTimer
logFile = '{}/temp/log/EfficientGraphBasedImageSegmentation.log'.format(self.appDir)
with open(logFile, "a+") as text_file:
text_file.write("Timer: {}: {}\n".format(colorModelTag, methodTimer))
# img.save('{}/temp/processingImage.png'.format(self.appDir))
imageComparison.calculateSegmentationCriterias(logFile, data1, data2, segmentCount)
@pyqtSlot(str, int, bool)
def GaborEdge(self, colorModelTag, currentImageChannelIndex, isOriginalImage):
"""
GaborEdge
"""
outImagePath, imgPath = self.imageService.getImagePath(isOriginalImage)
if imgPath is None:
return
img = self.imageService.openImage(isOriginalImage)
if img is None:
return
methodTimer = time.time()
if colorModelTag == 'RGB':
methodTimer = time.time()
gabor.doGabor(imgPath, outImagePath)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
self.histogramService.saveHistogram(img=img, model=colorModelTag)
if colorModelTag == 'YUV':
colorModel.rgbToYuv(img.load(), img.size)
methodTimer = time.time()
gabor.doGabor(imgPath, outImagePath)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
# img.show()
self.histogramService.saveHistogram(img=img, model=colorModelTag)
# colorModel.yuvToRgb(img.load(), img.size)
if colorModelTag == 'HSL':
data = numpy.asarray(img, dtype="float")
data = colorModel.rgbToHsl(data)
methodTimer = time.time()
gabor.doGabor(imgPath, outImagePath)
methodTimer = time.time() - methodTimer
self.histogramService.saveHistogram(data=data, model=colorModelTag)
timerTemp = time.time()
data = colorModel.hslToRgb(data)
img = Image.fromarray(numpy.asarray(numpy.clip(data, 0, 255), dtype="uint8"))
methodTimer = time.time() - timerTemp + methodTimer
        logFile = '{}/temp/log/GaborEdge.log'.format(self.appDir)
with open(logFile, "a+") as text_file:
text_file.write("Timer: {}: {}\n".format(colorModelTag, methodTimer))
# img.save('{}/temp/processingImage.png'.format(self.appDir))
imageComparison.calculateImageDifference(colorModelTag, logFile)
@pyqtSlot(str, int, bool)
def GaborSegmentation(self, colorModelTag, currentImageChannelIndex, isOriginalImage):
"""
GaborSegmentation
"""
outImagePath, imgPath = self.imageService.getImagePath(isOriginalImage)
if imgPath is None:
return
img = self.imageService.openImage(isOriginalImage)
if img is None:
return
methodTimer = time.time()
if colorModelTag == 'RGB':
methodTimer = time.time()
gaborSegment.doSegment(imgPath, outImagePath)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
self.histogramService.saveHistogram(img=img, model=colorModelTag)
if colorModelTag == 'YUV':
colorModel.rgbToYuv(img.load(), img.size)
methodTimer = time.time()
gaborSegment.doSegment(imgPath, outImagePath)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
# img.show()
self.histogramService.saveHistogram(img=img, model=colorModelTag)
# colorModel.yuvToRgb(img.load(), img.size)
if colorModelTag == 'HSL':
data = numpy.asarray(img, dtype="float")
data = colorModel.rgbToHsl(data)
methodTimer = time.time()
gaborSegment.doSegment(imgPath, outImagePath)
methodTimer = time.time() - methodTimer
self.histogramService.saveHistogram(data=data, model=colorModelTag)
timerTemp = time.time()
data = colorModel.hslToRgb(data)
img = Image.fromarray(numpy.asarray(numpy.clip(data, 0, 255), dtype="uint8"))
methodTimer = time.time() - timerTemp + methodTimer
        logFile = '{}/temp/log/GaborSegmentation.log'.format(self.appDir)
with open(logFile, "a+") as text_file:
text_file.write("Timer: {}: {}\n".format(colorModelTag, methodTimer))
# img.save('{}/temp/processingImage.png'.format(self.appDir))
imageComparison.calculateImageDifference(colorModelTag, logFile)
@pyqtSlot(str, int, bool, int)
def KMeans(self, colorModelTag, currentImageChannelIndex, isOriginalImage, countOfClusters):
"""
        KMeans
"""
outImagePath, imgPath = self.imageService.getImagePath(isOriginalImage)
if imgPath is None:
return
img = self.imageService.openImage(isOriginalImage)
if img is None:
return
methodTimer = time.time()
if colorModelTag == 'RGB':
methodTimer = time.time()
kMeans.doKMeans(imgPath, outImagePath, countOfClusters)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
self.histogramService.saveHistogram(img=img, model=colorModelTag)
if colorModelTag == 'YUV':
colorModel.rgbToYuv(img.load(), img.size)
methodTimer = time.time()
kMeans.doKMeans(imgPath, outImagePath, countOfClusters)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
# img.show()
self.histogramService.saveHistogram(img=img, model=colorModelTag)
# colorModel.yuvToRgb(img.load(), img.size)
if colorModelTag == 'HSL':
data = numpy.asarray(img, dtype="float")
data = colorModel.rgbToHsl(data)
methodTimer = time.time()
kMeans.doKMeans(imgPath, outImagePath, countOfClusters)
methodTimer = time.time() - methodTimer
self.histogramService.saveHistogram(data=data, model=colorModelTag)
timerTemp = time.time()
data = colorModel.hslToRgb(data)
img = Image.fromarray(numpy.asarray(numpy.clip(data, 0, 255), dtype="uint8"))
methodTimer = time.time() - timerTemp + methodTimer
        logFile = '{}/temp/log/KMeans.log'.format(self.appDir)
with open(logFile, "a+") as text_file:
text_file.write("Timer: {}: {}\n".format(colorModelTag, methodTimer))
# img.save('{}/temp/processingImage.png'.format(self.appDir))
imageComparison.calculateImageDifference(colorModelTag, logFile)
@pyqtSlot(str, int, bool, int, float, int, float, str, str)
def segSPHC(self, colorModelTag, currentImageChannelIndex, isOriginalImage,
numSegments, Sigma, segmentsToMerge, distance_limit, xPix, yPix):
"""
segSPHC
"""
if xPix == '' or yPix == '':
pixMouse=None
else:
pixMouse=(int(xPix), int(yPix))
segmentCount = 0
outImagePath, imgPath = self.imageService.getImagePath(isOriginalImage)
if imgPath is None:
return
img = self.imageService.openImage(isOriginalImage)
if img is None:
return
methodTimer = time.time()
if colorModelTag == 'RGB':
methodTimer = time.time()
data1, data2, segmentCount, segm_dict = sphc.doSPHC(imgPath, outImagePath, numSegments, Sigma,
segmentsToMerge, distance_limit, pixMouse)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
self.histogramService.saveHistogram(img=img, model=colorModelTag)
if colorModelTag == 'YUV':
colorModel.rgbToYuv(img.load(), img.size)
methodTimer = time.time()
data1, data2, segmentCount, segm_dict = sphc.doSPHC(imgPath, outImagePath, numSegments, Sigma,
segmentsToMerge, distance_limit, pixMouse)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
if img is None:
return
# img.show()
self.histogramService.saveHistogram(img=img, model=colorModelTag)
# colorModel.yuvToRgb(img.load(), img.size)
if colorModelTag == 'HSL':
data = numpy.asarray(img, dtype="float")
data = colorModel.rgbToHsl(data)
methodTimer = time.time()
data1, data2, segmentCount, segm_dict = sphc.doSPHC(imgPath, outImagePath, numSegments, Sigma,
segmentsToMerge, distance_limit, pixMouse)
methodTimer = time.time() - methodTimer
self.histogramService.saveHistogram(data=data, model=colorModelTag)
timerTemp = time.time()
data = colorModel.hslToRgb(data)
img = Image.fromarray(numpy.asarray(numpy.clip(data, 0, 255), dtype="uint8"))
methodTimer = time.time() - timerTemp + methodTimer
logFile = '{}/temp/log/segSPHC.log'.format(self.appDir)
with open(logFile, "a+") as text_file:
text_file.write("Timer: {}: {}\n".format(colorModelTag, methodTimer))
# img.save('{}/temp/processingImage.png'.format(self.appDir))
imageComparison.calculateSegmentationCriterias(logFile, data1, data2, segmentCount)
@pyqtSlot(str, int, bool, float, float, float, float, str, str, int, float, int, float, str, str)
def CompareEGBISandSPHC(self, colorModelTag, currentImageChannelIndex, isOriginalImage,
sigmaEGBIS, neighborhoodEGBIS, kEGBIS, min_comp_sizeEGBIS, xPixEGBIS, yPixEGBIS,
numSegmentsSPHC, SigmaSPHC, segmentsToMergeSPHC, distance_limitSPHC, xPixSPHC, yPixSPHC):
"""
CompareEGBISandSPHC
"""
pixMouseEGBIS=None
pixMouseSPHC=None
# if xPixEGBIS == '' or yPixEGBIS == '':
# pixMouseEGBIS=None
# else:
# pixMouseEGBIS=(int(xPixEGBIS), int(yPixEGBIS))
outImagePath, imgPath = self.imageService.getImagePath(isOriginalImage)
if imgPath is None:
return
img = self.imageService.openImage(isOriginalImage)
if img is None:
return
segmentCountEGBIS = 0
segmentCountSPHC = 0
methodTimer = time.time()
if colorModelTag == 'RGB':
methodTimer = time.time()
data1EGBIS, data2EGBIS, segmentCountEGBIS, forest = egbis.segmentateRun(sigmaEGBIS, neighborhoodEGBIS, kEGBIS, min_comp_sizeEGBIS,
img, outImagePath, pixMouseEGBIS)
data1SPHC, data2SPHC, segmentCountSPHC, segm_dict = sphc.doSPHC(imgPath, outImagePath, numSegmentsSPHC, SigmaSPHC,
segmentsToMergeSPHC, distance_limitSPHC, pixMouseSPHC, isTest=True)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(True)
if img is None:
return
self.histogramService.saveHistogram(img=img, model=colorModelTag)
if colorModelTag == 'YUV':
colorModel.rgbToYuv(img.load(), img.size)
methodTimer = time.time()
data1EGBIS, data2EGBIS, segmentCountEGBIS, forest = egbis.segmentateRun(sigmaEGBIS, neighborhoodEGBIS, kEGBIS, min_comp_sizeEGBIS,
img, outImagePath, pixMouseEGBIS)
data1SPHC, data2SPHC, segmentCountSPHC, segm_dict = sphc.doSPHC(imgPath, outImagePath, numSegmentsSPHC, SigmaSPHC,
segmentsToMergeSPHC, distance_limitSPHC, pixMouseSPHC, isTest=True)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(True)
if img is None:
return
# img.show()
self.histogramService.saveHistogram(img=img, model=colorModelTag)
# colorModel.yuvToRgb(img.load(), img.size)
if colorModelTag == 'HSL':
data = numpy.asarray(img, dtype="float")
data = colorModel.rgbToHsl(data)
methodTimer = time.time()
data1EGBIS, data2EGBIS, segmentCountEGBIS, forest = egbis.segmentateRun(sigmaEGBIS, neighborhoodEGBIS, kEGBIS, min_comp_sizeEGBIS,
data, outImagePath, pixMouseEGBIS)
data1SPHC, data2SPHC, segmentCountSPHC, segm_dict = sphc.doSPHC(imgPath, outImagePath, numSegmentsSPHC, SigmaSPHC,
segmentsToMergeSPHC, distance_limitSPHC, pixMouseSPHC, isTest=True)
methodTimer = time.time() - methodTimer
self.histogramService.saveHistogram(data=data, model=colorModelTag)
timerTemp = time.time()
data = colorModel.hslToRgb(data)
img = Image.fromarray(numpy.asarray(numpy.clip(data, 0, 255), dtype="uint8"))
methodTimer = time.time() - timerTemp + methodTimer
logFile = '{}/temp/log/CompareEGBISandSPHC.log'.format(self.appDir)
with open(logFile, "a+") as text_file:
text_file.write("Timer: {}: {}\n".format(colorModelTag, methodTimer))
# img.save('{}/temp/processingImage.png'.format(self.appDir))
imageComparison.calculateSegmentationDifferences(logFile, data1EGBIS, data2EGBIS, segmentCountEGBIS, forest, data1SPHC, data2SPHC, segmentCountSPHC, segm_dict)
@pyqtSlot(str, int, bool)
def detectRoadLane(self, colorModelTag, currentImageChannelIndex, isOriginalImage):
outImagePath, imgPath = self.imageService.getImagePath(isOriginalImage)
if imgPath is None:
return
methodTimer = time.time()
detectRoadLane.doFindLane(imgPath, outImagePath)
methodTimer = time.time() - methodTimer
img = self.imageService.openImage(False)
self.histogramService.saveHistogram(img=img, model=colorModelTag)
logFile = '{}/temp/log/detectRoadLane.log'.format(self.appDir)
with open(logFile, "a+") as text_file:
text_file.write("Timer: {}: {}\n".format(colorModelTag, methodTimer))
imageComparison.calculateImageDifference(colorModelTag, logFile)
| gpl-3.0 |
jkarnows/scikit-learn | examples/cluster/plot_lena_segmentation.py | 271 | 2444 | """
=========================================
Segmenting the picture of Lena in regions
=========================================
This example uses :ref:`spectral_clustering` on a graph created from
voxel-to-voxel difference on an image to break this image into multiple
partly-homogeneous regions.
This procedure (spectral clustering on an image) is an efficient
approximate solution for finding normalized graph cuts.
There are two options to assign labels:
* with 'kmeans' spectral clustering will cluster samples in the embedding space
using a kmeans algorithm
* whereas 'discretize' will iteratively search for the closest partition
space to the embedding space.
"""
print(__doc__)
# Author: Gael Varoquaux <gael.varoquaux@normalesup.org>, Brian Cheung
# License: BSD 3 clause
import time
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
lena = sp.misc.lena()
# Downsample the image by a factor of 4
lena = lena[::2, ::2] + lena[1::2, ::2] + lena[::2, 1::2] + lena[1::2, 1::2]
lena = lena[::2, ::2] + lena[1::2, ::2] + lena[::2, 1::2] + lena[1::2, 1::2]
# Convert the image into a graph with the value of the gradient on the
# edges.
graph = image.img_to_graph(lena)
# Take a decreasing function of the gradient: an exponential
# The smaller beta is, the more independent the segmentation is of the
# actual image. For beta=1, the segmentation is close to a voronoi
beta = 5
eps = 1e-6
graph.data = np.exp(-beta * graph.data / lena.std()) + eps
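A quick numeric aside (a standalone sketch, not part of the original example) showing how beta shapes these affinities — larger beta makes the edge weight fall off faster with the gradient, so the segmentation follows the image content more closely:

```python
import numpy as np

grad = np.array([0.0, 0.5, 1.0])  # gradient values (already scaled by the std)
for beta in (1, 5, 25):
    aff = np.exp(-beta * grad)
    print(beta, aff)  # beta=1 keeps strong edges; beta=25 nearly cuts them
```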
# Apply spectral clustering (this step goes much faster if you have pyamg
# installed)
N_REGIONS = 11
###############################################################################
# Visualize the resulting regions
for assign_labels in ('kmeans', 'discretize'):
t0 = time.time()
labels = spectral_clustering(graph, n_clusters=N_REGIONS,
assign_labels=assign_labels,
random_state=1)
t1 = time.time()
labels = labels.reshape(lena.shape)
plt.figure(figsize=(5, 5))
plt.imshow(lena, cmap=plt.cm.gray)
for l in range(N_REGIONS):
plt.contour(labels == l, contours=1,
colors=[plt.cm.spectral(l / float(N_REGIONS)), ])
plt.xticks(())
plt.yticks(())
plt.title('Spectral clustering: %s, %.2fs' % (assign_labels, (t1 - t0)))
plt.show()
| bsd-3-clause |
siliconsmiley/QGIS | python/plugins/processing/algs/qgis/QGISAlgorithmProvider.py | 5 | 9868 | # -*- coding: utf-8 -*-
"""
***************************************************************************
QGISAlgorithmProvider.py
---------------------
Date : December 2012
Copyright : (C) 2012 by Victor Olaya
Email : volayaf at gmail dot com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Victor Olaya'
__date__ = 'December 2012'
__copyright__ = '(C) 2012, Victor Olaya'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
import os
try:
import matplotlib.pyplot
hasMatplotlib = True
    except ImportError:
hasMatplotlib = False
from PyQt4.QtGui import QIcon
from processing.core.AlgorithmProvider import AlgorithmProvider
from processing.script.ScriptUtils import ScriptUtils
from RegularPoints import RegularPoints
from SymmetricalDifference import SymmetricalDifference
from VectorSplit import VectorSplit
from VectorGrid import VectorGrid
from RandomExtract import RandomExtract
from RandomExtractWithinSubsets import RandomExtractWithinSubsets
from ExtractByLocation import ExtractByLocation
from PointsInPolygon import PointsInPolygon
from PointsInPolygonUnique import PointsInPolygonUnique
from PointsInPolygonWeighted import PointsInPolygonWeighted
from SumLines import SumLines
from BasicStatisticsNumbers import BasicStatisticsNumbers
from BasicStatisticsStrings import BasicStatisticsStrings
from NearestNeighbourAnalysis import NearestNeighbourAnalysis
from LinesIntersection import LinesIntersection
from MeanCoords import MeanCoords
from PointDistance import PointDistance
from UniqueValues import UniqueValues
from ReprojectLayer import ReprojectLayer
from ExportGeometryInfo import ExportGeometryInfo
from Centroids import Centroids
from Delaunay import Delaunay
from VoronoiPolygons import VoronoiPolygons
from DensifyGeometries import DensifyGeometries
from MultipartToSingleparts import MultipartToSingleparts
from SimplifyGeometries import SimplifyGeometries
from LinesToPolygons import LinesToPolygons
from PolygonsToLines import PolygonsToLines
from SinglePartsToMultiparts import SinglePartsToMultiparts
from ExtractNodes import ExtractNodes
from ConvexHull import ConvexHull
from FixedDistanceBuffer import FixedDistanceBuffer
from VariableDistanceBuffer import VariableDistanceBuffer
from Clip import Clip
from Difference import Difference
from Dissolve import Dissolve
from Intersection import Intersection
from ExtentFromLayer import ExtentFromLayer
from RandomSelection import RandomSelection
from RandomSelectionWithinSubsets import RandomSelectionWithinSubsets
from SelectByLocation import SelectByLocation
from Union import Union
from DensifyGeometriesInterval import DensifyGeometriesInterval
from Eliminate import Eliminate
from SpatialJoin import SpatialJoin
from DeleteColumn import DeleteColumn
from DeleteHoles import DeleteHoles
from DeleteDuplicateGeometries import DeleteDuplicateGeometries
from TextToFloat import TextToFloat
from ExtractByAttribute import ExtractByAttribute
from SelectByAttribute import SelectByAttribute
from Grid import Grid
from Gridify import Gridify
from HubDistance import HubDistance
from HubLines import HubLines
from Merge import Merge
from GeometryConvert import GeometryConvert
from ConcaveHull import ConcaveHull
from Polygonize import Polygonize
from RasterLayerStatistics import RasterLayerStatistics
from StatisticsByCategories import StatisticsByCategories
from EquivalentNumField import EquivalentNumField
from AddTableField import AddTableField
from FieldsCalculator import FieldsCalculator
from SaveSelectedFeatures import SaveSelectedFeatures
from Explode import Explode
from AutoincrementalField import AutoincrementalField
from FieldPyculator import FieldsPyculator
from JoinAttributes import JoinAttributes
from CreateConstantRaster import CreateConstantRaster
from PointsLayerFromTable import PointsLayerFromTable
from PointsDisplacement import PointsDisplacement
from ZonalStatistics import ZonalStatistics
from PointsFromPolygons import PointsFromPolygons
from PointsFromLines import PointsFromLines
from RandomPointsExtent import RandomPointsExtent
from RandomPointsLayer import RandomPointsLayer
from RandomPointsPolygonsFixed import RandomPointsPolygonsFixed
from RandomPointsPolygonsVariable import RandomPointsPolygonsVariable
from RandomPointsAlongLines import RandomPointsAlongLines
from PointsToPaths import PointsToPaths
from PostGISExecuteSQL import PostGISExecuteSQL
from ImportIntoPostGIS import ImportIntoPostGIS
from SetVectorStyle import SetVectorStyle
from SetRasterStyle import SetRasterStyle
from SelectByExpression import SelectByExpression
from SelectByAttributeSum import SelectByAttributeSum
from HypsometricCurves import HypsometricCurves
from SplitLinesWithLines import SplitLinesWithLines
from FieldsMapper import FieldsMapper
from Datasources2Vrt import Datasources2Vrt
from CheckValidity import CheckValidity
pluginPath = os.path.normpath(os.path.join(
os.path.split(os.path.dirname(__file__))[0], os.pardir))
class QGISAlgorithmProvider(AlgorithmProvider):
_icon = QIcon(os.path.join(pluginPath, 'images', 'qgis.png'))
def __init__(self):
AlgorithmProvider.__init__(self)
self.alglist = [SumLines(), PointsInPolygon(),
PointsInPolygonWeighted(), PointsInPolygonUnique(),
BasicStatisticsStrings(), BasicStatisticsNumbers(),
NearestNeighbourAnalysis(), MeanCoords(),
LinesIntersection(), UniqueValues(), PointDistance(),
ReprojectLayer(), ExportGeometryInfo(), Centroids(),
Delaunay(), VoronoiPolygons(), SimplifyGeometries(),
DensifyGeometries(), DensifyGeometriesInterval(),
MultipartToSingleparts(), SinglePartsToMultiparts(),
PolygonsToLines(), LinesToPolygons(), ExtractNodes(),
Eliminate(), ConvexHull(), FixedDistanceBuffer(),
VariableDistanceBuffer(), Dissolve(), Difference(),
Intersection(), Union(), Clip(), ExtentFromLayer(),
RandomSelection(), RandomSelectionWithinSubsets(),
SelectByLocation(), RandomExtract(), DeleteHoles(),
RandomExtractWithinSubsets(), ExtractByLocation(),
SpatialJoin(), RegularPoints(), SymmetricalDifference(),
VectorSplit(), VectorGrid(), DeleteColumn(),
DeleteDuplicateGeometries(), TextToFloat(),
ExtractByAttribute(), SelectByAttribute(), Grid(),
Gridify(), HubDistance(), HubLines(), Merge(),
GeometryConvert(), AddTableField(), FieldsCalculator(),
SaveSelectedFeatures(), JoinAttributes(),
AutoincrementalField(), Explode(), FieldsPyculator(),
EquivalentNumField(), PointsLayerFromTable(),
StatisticsByCategories(), ConcaveHull(), Polygonize(),
RasterLayerStatistics(), PointsDisplacement(),
ZonalStatistics(), PointsFromPolygons(),
PointsFromLines(), RandomPointsExtent(),
RandomPointsLayer(), RandomPointsPolygonsFixed(),
RandomPointsPolygonsVariable(),
RandomPointsAlongLines(), PointsToPaths(),
PostGISExecuteSQL(), ImportIntoPostGIS(),
SetVectorStyle(), SetRasterStyle(),
SelectByExpression(), HypsometricCurves(),
SplitLinesWithLines(), CreateConstantRaster(),
                        FieldsMapper(), SelectByAttributeSum(), Datasources2Vrt(),
CheckValidity()
]
if hasMatplotlib:
from VectorLayerHistogram import VectorLayerHistogram
from RasterLayerHistogram import RasterLayerHistogram
from VectorLayerScatterplot import VectorLayerScatterplot
from MeanAndStdDevPlot import MeanAndStdDevPlot
from BarPlot import BarPlot
from PolarPlot import PolarPlot
self.alglist.extend([
VectorLayerHistogram(), RasterLayerHistogram(),
VectorLayerScatterplot(), MeanAndStdDevPlot(), BarPlot(),
PolarPlot(),
])
folder = os.path.join(os.path.dirname(__file__), 'scripts')
scripts = ScriptUtils.loadFromFolder(folder)
for script in scripts:
script.allowEdit = False
self.alglist.extend(scripts)
for alg in self.alglist:
alg._icon = self._icon
def initializeSettings(self):
AlgorithmProvider.initializeSettings(self)
def unload(self):
AlgorithmProvider.unload(self)
def getName(self):
return 'qgis'
def getDescription(self):
return self.tr('QGIS geoalgorithms')
def getIcon(self):
return self._icon
def _loadAlgorithms(self):
self.algs = self.alglist
def supportsNonFileBasedOutput(self):
return True
| gpl-2.0 |
lucabaldini/ximpol | ximpol/examples/grs1915.py | 1 | 5202 | #!/usr/bin/env python
#
# Copyright (C) 2016, the ximpol team.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import pyregion
import numpy
from ximpol import XIMPOL_CONFIG, XIMPOL_DATA, XIMPOL_EXAMPLES
from ximpol import xpColor
from ximpol.utils.logging_ import logger
from ximpol.core.pipeline import xPipeline
from ximpol.evt.binning import xBinnedMap, xBinnedModulationCube
from ximpol.srcmodel.img import xFITSImage
from ximpol.utils.matplotlib_ import pyplot as plt
# Here you need to specify which configuration file you want; it must coincide with the model for the spin
from ximpol.config.grs1915_105 import pol_degree_spline, pol_angle_spline
from ximpol.config.grs1915_105 import spindegree
from ximpol.utils.os_ import rm
from ximpol.utils.system_ import cmd
"""Script-wide simulation and analysis settings.
"""
base_name = 'grs1915_105_spin%s'%spindegree
CFG_FILE = os.path.join(XIMPOL_CONFIG, 'grs1915_105.py')
OUT_FILE_PATH_BASE = os.path.join(XIMPOL_DATA, base_name)
MCUBE_FILE_PATH = '%s_mcube.fits'%OUT_FILE_PATH_BASE
ANALYSIS_FILE_PATH = '%s_analysis.txt' % OUT_FILE_PATH_BASE
EVT_FILE_PATH = '%s.fits' % OUT_FILE_PATH_BASE
SIM_DURATION = 100000.
#2.0 - 3.0 keV, 3.0 - 4.5 keV, 4.5 keV - 6.0 keV, 6.0 - 8.0
E_BINNING = [2.0, 3.0, 4.5, 6.0, 8.0]
"""Main pipeline object.
"""
PIPELINE = xPipeline(clobber=False)
# Added this run() method so that we can simulate several runs with different seeds and merge the output files. This is needed for bright sources, which produce a large number of events in the output file.
def run(repeat=10):
#First simulate the events
file_list = []
for i in range(repeat):
output_file_path = EVT_FILE_PATH.replace('.fits', '_%d.fits' % i)
file_list.append(output_file_path)
PIPELINE.xpobssim(configfile=CFG_FILE, duration=SIM_DURATION,
outfile=output_file_path, seed=i)
file_list = str(file_list).strip('[]').replace('\'', '').replace(' ', '')
if PIPELINE.clobber:
rm(EVT_FILE_PATH)
cmd('ftmerge %s %s' % (file_list, EVT_FILE_PATH))
PIPELINE.xpbin(EVT_FILE_PATH, algorithm='MCUBE', ebinalg='LIST',
ebinning=E_BINNING)
def analyze():
"""Analyze the data.Testing this method, but I must be missing something, it does not work yet.
"""
logger.info('Opening output file %s...' % ANALYSIS_FILE_PATH)
analysis_file = open(ANALYSIS_FILE_PATH, 'w')
_mcube = xBinnedModulationCube(MCUBE_FILE_PATH)
_mcube.fit()
for j, fit in enumerate(_mcube.fit_results):
#_fit_results = _mcube.fit_results[0]
_pol_deg = fit.polarization_degree
_pol_deg_err = fit.polarization_degree_error
_pol_angle = fit.phase
_pol_angle_err = fit.phase_error
_energy_mean = _mcube.emean[j]
_emin = _mcube.emin[j]
_emax = _mcube.emax[j]
_data = (_energy_mean,_emin, _emax,_pol_deg, _pol_deg_err, _pol_angle, _pol_angle_err)
print _data
_fmt = ('%.4e ' * len(_data)).strip()
_fmt = '%s\n' % _fmt
_line = _fmt % _data
analysis_file.write(_line)
analysis_file.close()
def view():
_mcube = xBinnedModulationCube(MCUBE_FILE_PATH)
_mcube.fit()
_fit_results = _mcube.fit_results[0]
plt.figure('Polarization degree')
_mcube.plot_polarization_degree(show=False, color='blue')
pol_degree_spline.plot(color='lightgray',label='Spin %s'%spindegree, show=False)
plt.figtext(0.2, 0.85,'XIPE %s ks'%(SIM_DURATION/1000.),size=18)
#plt.errorbar(_energy_mean, _pol_deg, yerr=_pol_deg_err, color='blue',marker='o')
plt.legend()
plt.figure('Polarization angle')
_mcube.plot_polarization_angle(show=False, color='blue', degree=False)
pol_angle_spline.plot(color='lightgray',label='Spin %s'%spindegree, show=False)
plt.figtext(0.2, 0.85,'XIPE %s ks'%(SIM_DURATION/1000.),size=18)
#plt.errorbar(_energy_mean,_pol_angle, yerr= _pol_angle_err,color='blue',marker='o')
plt.xlim([1,10])
plt.legend()
plt.figure('MDP %s'%base_name)
mdp = _mcube.mdp99[:-1]
emean = _mcube.emean[:-1]
emin = _mcube.emin[:-1]
emax = _mcube.emax[:-1]
width = (emax-emin)/2.
plt.errorbar(emean,mdp,xerr=width, label='MDP99',marker='o',linestyle='--')
plt.figtext(0.2, 0.85,'XIPE %s ks'%(SIM_DURATION/1000.),size=18)
plt.xlim([1,10])
    plt.ylabel('MDP 99\%')
plt.xlabel('Energy (keV)')
#plt.legend()
plt.show()
if __name__ == '__main__':
run()
analyze()
view()
| gpl-3.0 |
samzhang111/scikit-learn | sklearn/utils/tests/test_seq_dataset.py | 93 | 2471 | # Author: Tom Dupre la Tour <tom.dupre-la-tour@m4x.org>
#
# License: BSD 3 clause
import numpy as np
import scipy.sparse as sp
from sklearn.utils.seq_dataset import ArrayDataset, CSRDataset
from sklearn.datasets import load_iris
from numpy.testing import assert_array_equal
from nose.tools import assert_equal
iris = load_iris()
X = iris.data.astype(np.float64)
y = iris.target.astype(np.float64)
X_csr = sp.csr_matrix(X)
sample_weight = np.arange(y.size, dtype=np.float64)
def test_seq_dataset():
dataset1 = ArrayDataset(X, y, sample_weight, seed=42)
dataset2 = CSRDataset(X_csr.data, X_csr.indptr, X_csr.indices,
y, sample_weight, seed=42)
for dataset in (dataset1, dataset2):
for i in range(5):
# next sample
xi_, yi, swi, idx = dataset._next_py()
xi = sp.csr_matrix((xi_), shape=(1, X.shape[1]))
assert_array_equal(xi.data, X_csr[idx].data)
assert_array_equal(xi.indices, X_csr[idx].indices)
assert_array_equal(xi.indptr, X_csr[idx].indptr)
assert_equal(yi, y[idx])
assert_equal(swi, sample_weight[idx])
# random sample
xi_, yi, swi, idx = dataset._random_py()
xi = sp.csr_matrix((xi_), shape=(1, X.shape[1]))
assert_array_equal(xi.data, X_csr[idx].data)
assert_array_equal(xi.indices, X_csr[idx].indices)
assert_array_equal(xi.indptr, X_csr[idx].indptr)
assert_equal(yi, y[idx])
assert_equal(swi, sample_weight[idx])
def test_seq_dataset_shuffle():
dataset1 = ArrayDataset(X, y, sample_weight, seed=42)
dataset2 = CSRDataset(X_csr.data, X_csr.indptr, X_csr.indices,
y, sample_weight, seed=42)
# not shuffled
for i in range(5):
_, _, _, idx1 = dataset1._next_py()
_, _, _, idx2 = dataset2._next_py()
assert_equal(idx1, i)
assert_equal(idx2, i)
for i in range(5):
_, _, _, idx1 = dataset1._random_py()
_, _, _, idx2 = dataset2._random_py()
assert_equal(idx1, idx2)
seed = 77
dataset1._shuffle_py(seed)
dataset2._shuffle_py(seed)
for i in range(5):
_, _, _, idx1 = dataset1._next_py()
_, _, _, idx2 = dataset2._next_py()
assert_equal(idx1, idx2)
_, _, _, idx1 = dataset1._random_py()
_, _, _, idx2 = dataset2._random_py()
assert_equal(idx1, idx2)
| bsd-3-clause |
rohanp/scikit-learn | sklearn/metrics/cluster/tests/test_unsupervised.py | 230 | 2823 | import numpy as np
from scipy.sparse import csr_matrix
from sklearn import datasets
from sklearn.metrics.cluster.unsupervised import silhouette_score
from sklearn.metrics import pairwise_distances
from sklearn.utils.testing import assert_false, assert_almost_equal
from sklearn.utils.testing import assert_raises_regexp
def test_silhouette():
# Tests the Silhouette Coefficient.
dataset = datasets.load_iris()
X = dataset.data
y = dataset.target
D = pairwise_distances(X, metric='euclidean')
# Given that the actual labels are used, we can assume that S would be
# positive.
silhouette = silhouette_score(D, y, metric='precomputed')
assert(silhouette > 0)
# Test without calculating D
silhouette_metric = silhouette_score(X, y, metric='euclidean')
assert_almost_equal(silhouette, silhouette_metric)
# Test with sampling
silhouette = silhouette_score(D, y, metric='precomputed',
sample_size=int(X.shape[0] / 2),
random_state=0)
silhouette_metric = silhouette_score(X, y, metric='euclidean',
sample_size=int(X.shape[0] / 2),
random_state=0)
assert(silhouette > 0)
assert(silhouette_metric > 0)
assert_almost_equal(silhouette_metric, silhouette)
# Test with sparse X
X_sparse = csr_matrix(X)
D = pairwise_distances(X_sparse, metric='euclidean')
silhouette = silhouette_score(D, y, metric='precomputed')
assert(silhouette > 0)
def test_no_nan():
# Assert Silhouette Coefficient != nan when there is 1 sample in a class.
# This tests for the condition that caused issue 960.
# Note that there is only one sample in cluster 0. This used to cause the
# silhouette_score to return nan (see bug #960).
labels = np.array([1, 0, 1, 1, 1])
# The distance matrix doesn't actually matter.
D = np.random.RandomState(0).rand(len(labels), len(labels))
silhouette = silhouette_score(D, labels, metric='precomputed')
assert_false(np.isnan(silhouette))
def test_correct_labelsize():
# Assert 1 < n_labels < n_samples
dataset = datasets.load_iris()
X = dataset.data
# n_labels = n_samples
y = np.arange(X.shape[0])
    assert_raises_regexp(ValueError,
                         r'Number of labels is %d\. Valid values are 2 '
                         r'to n_samples - 1 \(inclusive\)' % len(np.unique(y)),
                         silhouette_score, X, y)
# n_labels = 1
y = np.zeros(X.shape[0])
    assert_raises_regexp(ValueError,
                         r'Number of labels is %d\. Valid values are 2 '
                         r'to n_samples - 1 \(inclusive\)' % len(np.unique(y)),
                         silhouette_score, X, y)
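For reference, the coefficient exercised above can be computed by hand: for each sample, s = (b - a) / max(a, b), where a is the mean intra-cluster distance and b is the mean distance to the nearest other cluster. A hand-rolled check on a toy 1-D configuration (illustrative only; independent of scikit-learn):

```python
import numpy as np

# Two tight, well-separated clusters on a line.
pts = np.array([0.0, 0.1, 10.0, 10.1])
labels = np.array([0, 0, 1, 1])
scores = []
for i in range(len(pts)):
    same = labels == labels[i]
    # mean intra-cluster distance; the self-distance of 0 is excluded
    a = np.abs(pts[same] - pts[i]).sum() / (same.sum() - 1)
    # mean distance to the (only) other cluster
    b = np.abs(pts[~same] - pts[i]).mean()
    scores.append((b - a) / max(a, b))
assert np.mean(scores) > 0.9  # tight, well-separated clusters score near 1
```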
| bsd-3-clause |
dimkastan/PyTorch-Spectral-clustering | FiedlerVectorLaplacian.py | 1 | 2627 | """
% -------------------------------------------------------------
% Matlab code
% -------------------------------------------------------------
% graph partition using the eigenvector corresponding to the second
% smallest eigenvalue
t=[randn(500,2)+repmat([-2,-2],500,1) ;randn(500,2)+repmat([2,2],500,1)];
scatter(t(:,1),t(:,2))
W=squareform(pdist(t));
A=W<3; % create adjacency matrix (set connected nodes equal to one)
D = sum(A,1);
L = diag(D)-A;
Lsym = diag(D.^-0.5)*L*diag(D.^-0.5);
[u,s,v] = svd(Lsym);
figure; plot(u(:, (end-1)))
F = u(:, (end-1));
plot(F);title('Second smallest non-zero eigenvalue eigenvector');
scatter(t(F<0,1),t(F<0,2),'bo','filled');hold on
scatter(t(F>0,1),t(F>0,2),'go','filled');
"""
# PyTorch equivalent code
import torch
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
color_map = plt.get_cmap('jet')
def distance_matrix(mat):
    d = ((mat.unsqueeze(0) - mat.unsqueeze(1)) ** 2).sum(2) ** 0.5
return d
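The same broadcasting trick, written out in NumPy for reference (an illustrative aside, not part of the original script; `np` is already imported above):

```python
import numpy as np

# Pairwise Euclidean distances via broadcasting, mirroring distance_matrix:
# (N, 1, D) - (1, N, D) -> (N, N, D), then reduce over the coordinate axis.
pts = np.array([[0.0, 0.0], [3.0, 4.0]])
dist = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(2) ** 0.5
assert abs(dist[0, 1] - 5.0) < 1e-12  # 3-4-5 triangle
```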
# Generate Clusters
mat = torch.cat([torch.randn(500,2)+torch.Tensor([-2,-3]), torch.randn(500,2)+torch.Tensor([2,1])])
plt.scatter(mat[:,0].numpy(),mat[:,1].numpy())
plt.show(block=False)
##-------------------------------------------
# Compute distance matrix and then the Laplacian
##-------------------------------------------
d = distance_matrix(mat)
da = d < 2
plt.figure()
plt.imshow(da.numpy())
plt.show(block=False)
D= ((da.float()).sum(1)).diag()
L = D -da.float()
plt.figure()
plt.title("Laplacian")
plt.imshow(L.numpy())
plt.show(block=False)
D_inv_sqrt = torch.diag(torch.pow(torch.diag(D), -0.5))
Lsym = torch.mm(torch.mm(D_inv_sqrt, L), D_inv_sqrt)
plt.figure()
plt.imshow(Lsym.numpy())
plt.title("Symmetric Laplacian")
plt.show(block=False)
[u,s,v]=torch.svd(Lsym)
# plot fiedler vector
plt.figure()
plt.title('Fiedler vector')
plt.plot(u[:,-2].numpy());
plt.show(block=False)
norm = colors.Normalize(vmin=-1, vmax=1)
scalarMap = cm.ScalarMappable( norm=norm , cmap=color_map)
plt.figure()
plt.title('clusters')
for i in range(len(u[:,-2])):
if u[i,-2]<0:
color = scalarMap.to_rgba(-1)
plt.scatter(mat[i,0],mat[i,1], color=color,marker='o')
else:
color = scalarMap.to_rgba(1)
plt.scatter(mat[i,0],mat[i,1], color=color,marker='*')
plt.show(block=False)
raw_input("Press Enter to exit..")
plt.close('all')
| mit |
Garrett-R/scikit-learn | sklearn/datasets/tests/test_svmlight_format.py | 16 | 10538 | from bz2 import BZ2File
import gzip
from io import BytesIO
import numpy as np
import os
import shutil
from tempfile import NamedTemporaryFile
from sklearn.externals.six import b
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import raises
from sklearn.utils.testing import assert_in
import sklearn
from sklearn.datasets import (load_svmlight_file, load_svmlight_files,
dump_svmlight_file)
currdir = os.path.dirname(os.path.abspath(__file__))
datafile = os.path.join(currdir, "data", "svmlight_classification.txt")
multifile = os.path.join(currdir, "data", "svmlight_multilabel.txt")
invalidfile = os.path.join(currdir, "data", "svmlight_invalid.txt")
invalidfile2 = os.path.join(currdir, "data", "svmlight_invalid_order.txt")
def test_load_svmlight_file():
X, y = load_svmlight_file(datafile)
# test X's shape
assert_equal(X.indptr.shape[0], 7)
assert_equal(X.shape[0], 6)
assert_equal(X.shape[1], 21)
assert_equal(y.shape[0], 6)
# test X's non-zero values
for i, j, val in ((0, 2, 2.5), (0, 10, -5.2), (0, 15, 1.5),
(1, 5, 1.0), (1, 12, -3),
(2, 20, 27)):
assert_equal(X[i, j], val)
# tests X's zero values
assert_equal(X[0, 3], 0)
assert_equal(X[0, 5], 0)
assert_equal(X[1, 8], 0)
assert_equal(X[1, 16], 0)
assert_equal(X[2, 18], 0)
# test can change X's values
X[0, 2] *= 2
assert_equal(X[0, 2], 5)
# test y
assert_array_equal(y, [1, 2, 3, 4, 1, 2])
def test_load_svmlight_file_fd():
# test loading from file descriptor
X1, y1 = load_svmlight_file(datafile)
fd = os.open(datafile, os.O_RDONLY)
try:
X2, y2 = load_svmlight_file(fd)
assert_array_equal(X1.data, X2.data)
assert_array_equal(y1, y2)
finally:
os.close(fd)
def test_load_svmlight_file_multilabel():
X, y = load_svmlight_file(multifile, multilabel=True)
assert_equal(y, [(0, 1), (2,), (1, 2)])
def test_load_svmlight_files():
X_train, y_train, X_test, y_test = load_svmlight_files([datafile] * 2,
dtype=np.float32)
assert_array_equal(X_train.toarray(), X_test.toarray())
assert_array_equal(y_train, y_test)
assert_equal(X_train.dtype, np.float32)
assert_equal(X_test.dtype, np.float32)
X1, y1, X2, y2, X3, y3 = load_svmlight_files([datafile] * 3,
dtype=np.float64)
assert_equal(X1.dtype, X2.dtype)
assert_equal(X2.dtype, X3.dtype)
assert_equal(X3.dtype, np.float64)
def test_load_svmlight_file_n_features():
X, y = load_svmlight_file(datafile, n_features=22)
    # test X's shape
assert_equal(X.indptr.shape[0], 7)
assert_equal(X.shape[0], 6)
assert_equal(X.shape[1], 22)
# test X's non-zero values
for i, j, val in ((0, 2, 2.5), (0, 10, -5.2),
(1, 5, 1.0), (1, 12, -3)):
assert_equal(X[i, j], val)
# 21 features in file
assert_raises(ValueError, load_svmlight_file, datafile, n_features=20)
def test_load_compressed():
X, y = load_svmlight_file(datafile)
with NamedTemporaryFile(prefix="sklearn-test", suffix=".gz") as tmp:
tmp.close() # necessary under windows
with open(datafile, "rb") as f:
shutil.copyfileobj(f, gzip.open(tmp.name, "wb"))
Xgz, ygz = load_svmlight_file(tmp.name)
assert_array_equal(X.toarray(), Xgz.toarray())
assert_array_equal(y, ygz)
with NamedTemporaryFile(prefix="sklearn-test", suffix=".bz2") as tmp:
tmp.close() # necessary under windows
with open(datafile, "rb") as f:
shutil.copyfileobj(f, BZ2File(tmp.name, "wb"))
Xbz, ybz = load_svmlight_file(tmp.name)
assert_array_equal(X.toarray(), Xbz.toarray())
assert_array_equal(y, ybz)
@raises(ValueError)
def test_load_invalid_file():
load_svmlight_file(invalidfile)
@raises(ValueError)
def test_load_invalid_order_file():
load_svmlight_file(invalidfile2)
@raises(ValueError)
def test_load_zero_based():
f = BytesIO(b("-1 4:1.\n1 0:1\n"))
load_svmlight_file(f, zero_based=False)
def test_load_zero_based_auto():
data1 = b("-1 1:1 2:2 3:3\n")
data2 = b("-1 0:0 1:1\n")
f1 = BytesIO(data1)
X, y = load_svmlight_file(f1, zero_based="auto")
assert_equal(X.shape, (1, 3))
f1 = BytesIO(data1)
f2 = BytesIO(data2)
X1, y1, X2, y2 = load_svmlight_files([f1, f2], zero_based="auto")
assert_equal(X1.shape, (1, 4))
assert_equal(X2.shape, (1, 4))
def test_load_with_qid():
# load svmfile with qid attribute
data = b("""
3 qid:1 1:0.53 2:0.12
2 qid:1 1:0.13 2:0.1
7 qid:2 1:0.87 2:0.12""")
X, y = load_svmlight_file(BytesIO(data), query_id=False)
assert_array_equal(y, [3, 2, 7])
assert_array_equal(X.toarray(), [[.53, .12], [.13, .1], [.87, .12]])
res1 = load_svmlight_files([BytesIO(data)], query_id=True)
res2 = load_svmlight_file(BytesIO(data), query_id=True)
for X, y, qid in (res1, res2):
assert_array_equal(y, [3, 2, 7])
assert_array_equal(qid, [1, 1, 2])
assert_array_equal(X.toarray(), [[.53, .12], [.13, .1], [.87, .12]])
@raises(ValueError)
def test_load_invalid_file2():
load_svmlight_files([datafile, invalidfile, datafile])
@raises(TypeError)
def test_not_a_filename():
# in python 3 integers are valid file opening arguments (taken as unix
# file descriptors)
load_svmlight_file(.42)
@raises(IOError)
def test_invalid_filename():
load_svmlight_file("trou pic nic douille")
def test_dump():
Xs, y = load_svmlight_file(datafile)
Xd = Xs.toarray()
# slicing a csr_matrix can unsort its .indices, so test that we sort
# those correctly
Xsliced = Xs[np.arange(Xs.shape[0])]
for X in (Xs, Xd, Xsliced):
for zero_based in (True, False):
for dtype in [np.float32, np.float64, np.int32]:
f = BytesIO()
# we need to pass a comment to get the version info in;
# LibSVM doesn't grok comments so they're not put in by
# default anymore.
dump_svmlight_file(X.astype(dtype), y, f, comment="test",
zero_based=zero_based)
f.seek(0)
comment = f.readline()
try:
comment = str(comment, "utf-8")
except TypeError: # fails in Python 2.x
pass
assert_in("scikit-learn %s" % sklearn.__version__, comment)
comment = f.readline()
try:
comment = str(comment, "utf-8")
except TypeError: # fails in Python 2.x
pass
assert_in(["one", "zero"][zero_based] + "-based", comment)
X2, y2 = load_svmlight_file(f, dtype=dtype,
zero_based=zero_based)
assert_equal(X2.dtype, dtype)
assert_array_equal(X2.sorted_indices().indices, X2.indices)
if dtype == np.float32:
assert_array_almost_equal(
# allow a rounding error at the last decimal place
Xd.astype(dtype), X2.toarray(), 4)
else:
assert_array_almost_equal(
# allow a rounding error at the last decimal place
Xd.astype(dtype), X2.toarray(), 15)
assert_array_equal(y, y2)
def test_dump_concise():
one = 1
two = 2.1
three = 3.01
exact = 1.000000000000001
# loses the last decimal place
almost = 1.0000000000000001
X = [[one, two, three, exact, almost],
[1e9, 2e18, 3e27, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]]
y = [one, two, three, exact, almost]
f = BytesIO()
dump_svmlight_file(X, y, f)
f.seek(0)
# make sure it's using the most concise format possible
assert_equal(f.readline(),
b("1 0:1 1:2.1 2:3.01 3:1.000000000000001 4:1\n"))
assert_equal(f.readline(), b("2.1 0:1000000000 1:2e+18 2:3e+27\n"))
assert_equal(f.readline(), b("3.01 \n"))
assert_equal(f.readline(), b("1.000000000000001 \n"))
assert_equal(f.readline(), b("1 \n"))
f.seek(0)
# make sure it's correct too :)
X2, y2 = load_svmlight_file(f)
assert_array_almost_equal(X, X2.toarray())
assert_array_equal(y, y2)
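The `exact`/`almost` values in `test_dump_concise` rely on double-precision rounding: `1.000000000000001` (1 + 1e-15) is more than half an ulp away from 1.0 and survives, while `1.0000000000000001` (1 + 1e-16, below eps/2 ≈ 1.1e-16) rounds to exactly 1.0. A quick standalone check of that assumption:

```python
# 1 + 1e-15 is representable distinctly from 1.0 in a float64;
# 1 + 1e-16 is below half an ulp of 1.0 and rounds to exactly 1.0.
assert 1.000000000000001 != 1.0
assert 1.0000000000000001 == 1.0
```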
def test_dump_comment():
X, y = load_svmlight_file(datafile)
X = X.toarray()
f = BytesIO()
ascii_comment = "This is a comment\nspanning multiple lines."
dump_svmlight_file(X, y, f, comment=ascii_comment, zero_based=False)
f.seek(0)
X2, y2 = load_svmlight_file(f, zero_based=False)
assert_array_almost_equal(X, X2.toarray())
assert_array_equal(y, y2)
# XXX we have to update this to support Python 3.x
utf8_comment = b("It is true that\n\xc2\xbd\xc2\xb2 = \xc2\xbc")
f = BytesIO()
assert_raises(UnicodeDecodeError,
dump_svmlight_file, X, y, f, comment=utf8_comment)
unicode_comment = utf8_comment.decode("utf-8")
f = BytesIO()
dump_svmlight_file(X, y, f, comment=unicode_comment, zero_based=False)
f.seek(0)
X2, y2 = load_svmlight_file(f, zero_based=False)
assert_array_almost_equal(X, X2.toarray())
assert_array_equal(y, y2)
f = BytesIO()
assert_raises(ValueError,
dump_svmlight_file, X, y, f, comment="I've got a \0.")
def test_dump_invalid():
X, y = load_svmlight_file(datafile)
f = BytesIO()
y2d = [y]
assert_raises(ValueError, dump_svmlight_file, X, y2d, f)
f = BytesIO()
assert_raises(ValueError, dump_svmlight_file, X, y[:-1], f)
def test_dump_query_id():
# test dumping a file with query_id
X, y = load_svmlight_file(datafile)
X = X.toarray()
query_id = np.arange(X.shape[0]) // 2
f = BytesIO()
dump_svmlight_file(X, y, f, query_id=query_id, zero_based=True)
f.seek(0)
X1, y1, query_id1 = load_svmlight_file(f, query_id=True, zero_based=True)
assert_array_almost_equal(X, X1.toarray())
assert_array_almost_equal(y, y1)
assert_array_almost_equal(query_id, query_id1)
| bsd-3-clause |
cwu2011/scikit-learn | examples/classification/plot_lda_qda.py | 164 | 4806 | """
====================================================================
Linear and Quadratic Discriminant Analysis with confidence ellipsoid
====================================================================
Plot the confidence ellipsoids of each class and decision boundary
"""
print(__doc__)
from scipy import linalg
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import colors
from sklearn.lda import LDA
from sklearn.qda import QDA
###############################################################################
# colormap
cmap = colors.LinearSegmentedColormap(
'red_blue_classes',
{'red': [(0, 1, 1), (1, 0.7, 0.7)],
'green': [(0, 0.7, 0.7), (1, 0.7, 0.7)],
'blue': [(0, 0.7, 0.7), (1, 1, 1)]})
plt.cm.register_cmap(cmap=cmap)
###############################################################################
# generate datasets
def dataset_fixed_cov():
'''Generate 2 Gaussians samples with the same covariance matrix'''
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0., -0.23], [0.83, .23]])
X = np.r_[np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C) + np.array([1, 1])]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
def dataset_cov():
'''Generate 2 Gaussians samples with different covariance matrices'''
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0., -1.], [2.5, .7]]) * 2.
X = np.r_[np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C.T) + np.array([1, 4])]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
###############################################################################
# plot functions
def plot_data(lda, X, y, y_pred, fig_index):
splot = plt.subplot(2, 2, fig_index)
if fig_index == 1:
plt.title('Linear Discriminant Analysis')
plt.ylabel('Data with fixed covariance')
elif fig_index == 2:
plt.title('Quadratic Discriminant Analysis')
elif fig_index == 3:
plt.ylabel('Data with varying covariances')
tp = (y == y_pred) # True Positive
tp0, tp1 = tp[y == 0], tp[y == 1]
X0, X1 = X[y == 0], X[y == 1]
X0_tp, X0_fp = X0[tp0], X0[~tp0]
X1_tp, X1_fp = X1[tp1], X1[~tp1]
xmin, xmax = X[:, 0].min(), X[:, 0].max()
ymin, ymax = X[:, 1].min(), X[:, 1].max()
# class 0: dots
plt.plot(X0_tp[:, 0], X0_tp[:, 1], 'o', color='red')
plt.plot(X0_fp[:, 0], X0_fp[:, 1], '.', color='#990000') # dark red
# class 1: dots
plt.plot(X1_tp[:, 0], X1_tp[:, 1], 'o', color='blue')
plt.plot(X1_fp[:, 0], X1_fp[:, 1], '.', color='#000099') # dark blue
# class 0 and 1 : areas
nx, ny = 200, 100
x_min, x_max = plt.xlim()
y_min, y_max = plt.ylim()
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx),
np.linspace(y_min, y_max, ny))
Z = lda.predict_proba(np.c_[xx.ravel(), yy.ravel()])
Z = Z[:, 1].reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap='red_blue_classes',
norm=colors.Normalize(0., 1.))
plt.contour(xx, yy, Z, [0.5], linewidths=2., colors='k')
# means
plt.plot(lda.means_[0][0], lda.means_[0][1],
'o', color='black', markersize=10)
plt.plot(lda.means_[1][0], lda.means_[1][1],
'o', color='black', markersize=10)
return splot
def plot_ellipse(splot, mean, cov, color):
v, w = linalg.eigh(cov)
u = w[0] / linalg.norm(w[0])
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
# filled Gaussian at 2 standard deviation
ell = mpl.patches.Ellipse(mean, 2 * v[0] ** 0.5, 2 * v[1] ** 0.5,
180 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
splot.set_xticks(())
splot.set_yticks(())
def plot_lda_cov(lda, splot):
plot_ellipse(splot, lda.means_[0], lda.covariance_, 'red')
plot_ellipse(splot, lda.means_[1], lda.covariance_, 'blue')
def plot_qda_cov(qda, splot):
plot_ellipse(splot, qda.means_[0], qda.covariances_[0], 'red')
plot_ellipse(splot, qda.means_[1], qda.covariances_[1], 'blue')
###############################################################################
for i, (X, y) in enumerate([dataset_fixed_cov(), dataset_cov()]):
# LDA
lda = LDA(solver="svd", store_covariance=True)
y_pred = lda.fit(X, y).predict(X)
splot = plot_data(lda, X, y, y_pred, fig_index=2 * i + 1)
plot_lda_cov(lda, splot)
plt.axis('tight')
# QDA
qda = QDA()
y_pred = qda.fit(X, y, store_covariances=True).predict(X)
splot = plot_data(qda, X, y, y_pred, fig_index=2 * i + 2)
plot_qda_cov(qda, splot)
plt.axis('tight')
plt.suptitle('LDA vs QDA')
plt.show()
| bsd-3-clause |
schets/scikit-learn | sklearn/ensemble/tests/test_partial_dependence.py | 365 | 6996 | """
Testing for the partial dependence module.
"""
import numpy as np
from numpy.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import if_matplotlib
from sklearn.ensemble.partial_dependence import partial_dependence
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import GradientBoostingRegressor
from sklearn import datasets
# toy sample
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
y = [-1, -1, -1, 1, 1, 1]
T = [[-1, -1], [2, 2], [3, 2]]
true_result = [-1, 1, 1]
# also load the boston dataset
boston = datasets.load_boston()
# also load the iris dataset
iris = datasets.load_iris()
def test_partial_dependence_classifier():
# Test partial dependence for classifier
clf = GradientBoostingClassifier(n_estimators=10, random_state=1)
clf.fit(X, y)
pdp, axes = partial_dependence(clf, [0], X=X, grid_resolution=5)
# only 4 grid points instead of 5 because only 4 unique X[:,0] vals
assert pdp.shape == (1, 4)
assert axes[0].shape[0] == 4
# now with our own grid
X_ = np.asarray(X)
grid = np.unique(X_[:, 0])
pdp_2, axes = partial_dependence(clf, [0], grid=grid)
assert axes is None
assert_array_equal(pdp, pdp_2)
def test_partial_dependence_multiclass():
# Test partial dependence for multi-class classifier
clf = GradientBoostingClassifier(n_estimators=10, random_state=1)
clf.fit(iris.data, iris.target)
grid_resolution = 25
n_classes = clf.n_classes_
pdp, axes = partial_dependence(
clf, [0], X=iris.data, grid_resolution=grid_resolution)
assert pdp.shape == (n_classes, grid_resolution)
assert len(axes) == 1
assert axes[0].shape[0] == grid_resolution
def test_partial_dependence_regressor():
# Test partial dependence for regressor
clf = GradientBoostingRegressor(n_estimators=10, random_state=1)
clf.fit(boston.data, boston.target)
grid_resolution = 25
pdp, axes = partial_dependence(
clf, [0], X=boston.data, grid_resolution=grid_resolution)
assert pdp.shape == (1, grid_resolution)
assert axes[0].shape[0] == grid_resolution
def test_partial_dependecy_input():
# Test input validation of partial dependence.
clf = GradientBoostingClassifier(n_estimators=10, random_state=1)
clf.fit(X, y)
assert_raises(ValueError, partial_dependence,
clf, [0], grid=None, X=None)
assert_raises(ValueError, partial_dependence,
clf, [0], grid=[0, 1], X=X)
# first argument must be an instance of BaseGradientBoosting
assert_raises(ValueError, partial_dependence,
{}, [0], X=X)
# Gradient boosting estimator must be fit
assert_raises(ValueError, partial_dependence,
GradientBoostingClassifier(), [0], X=X)
assert_raises(ValueError, partial_dependence, clf, [-1], X=X)
assert_raises(ValueError, partial_dependence, clf, [100], X=X)
# wrong ndim for grid
grid = np.random.rand(10, 2, 1)
assert_raises(ValueError, partial_dependence, clf, [0], grid=grid)
@if_matplotlib
def test_plot_partial_dependence():
# Test partial dependence plot function.
clf = GradientBoostingRegressor(n_estimators=10, random_state=1)
clf.fit(boston.data, boston.target)
grid_resolution = 25
fig, axs = plot_partial_dependence(clf, boston.data, [0, 1, (0, 1)],
grid_resolution=grid_resolution,
feature_names=boston.feature_names)
assert len(axs) == 3
assert all(ax.has_data for ax in axs)
# check with str features and array feature names
fig, axs = plot_partial_dependence(clf, boston.data, ['CRIM', 'ZN',
('CRIM', 'ZN')],
grid_resolution=grid_resolution,
feature_names=boston.feature_names)
assert len(axs) == 3
assert all(ax.has_data for ax in axs)
# check with list feature_names
feature_names = boston.feature_names.tolist()
fig, axs = plot_partial_dependence(clf, boston.data, ['CRIM', 'ZN',
('CRIM', 'ZN')],
grid_resolution=grid_resolution,
feature_names=feature_names)
assert len(axs) == 3
assert all(ax.has_data for ax in axs)
@if_matplotlib
def test_plot_partial_dependence_input():
# Test partial dependence plot function input checks.
clf = GradientBoostingClassifier(n_estimators=10, random_state=1)
# not fitted yet
assert_raises(ValueError, plot_partial_dependence,
clf, X, [0])
clf.fit(X, y)
assert_raises(ValueError, plot_partial_dependence,
clf, np.array(X)[:, :0], [0])
# first argument must be an instance of BaseGradientBoosting
assert_raises(ValueError, plot_partial_dependence,
{}, X, [0])
# must be larger than -1
assert_raises(ValueError, plot_partial_dependence,
clf, X, [-1])
# too large feature value
assert_raises(ValueError, plot_partial_dependence,
clf, X, [100])
# str feature but no feature_names
assert_raises(ValueError, plot_partial_dependence,
clf, X, ['foobar'])
# not valid features value
assert_raises(ValueError, plot_partial_dependence,
clf, X, [{'foo': 'bar'}])
@if_matplotlib
def test_plot_partial_dependence_multiclass():
# Test partial dependence plot function on multi-class input.
clf = GradientBoostingClassifier(n_estimators=10, random_state=1)
clf.fit(iris.data, iris.target)
grid_resolution = 25
fig, axs = plot_partial_dependence(clf, iris.data, [0, 1],
label=0,
grid_resolution=grid_resolution)
assert len(axs) == 2
assert all(ax.has_data for ax in axs)
# now with symbol labels
target = iris.target_names[iris.target]
clf = GradientBoostingClassifier(n_estimators=10, random_state=1)
clf.fit(iris.data, target)
grid_resolution = 25
fig, axs = plot_partial_dependence(clf, iris.data, [0, 1],
label='setosa',
grid_resolution=grid_resolution)
assert len(axs) == 2
assert all(ax.has_data for ax in axs)
# label not in gbrt.classes_
assert_raises(ValueError, plot_partial_dependence,
clf, iris.data, [0, 1], label='foobar',
grid_resolution=grid_resolution)
# label not provided
assert_raises(ValueError, plot_partial_dependence,
clf, iris.data, [0, 1],
grid_resolution=grid_resolution)
| bsd-3-clause |
srowen/spark | python/pyspark/pandas/data_type_ops/num_ops.py | 5 | 19156 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import numbers
from typing import Any, Union
import numpy as np
import pandas as pd
from pandas.api.types import CategoricalDtype
from pyspark.pandas._typing import Dtype, IndexOpsLike, SeriesOrIndex
from pyspark.pandas.base import column_op, IndexOpsMixin, numpy_column_op
from pyspark.pandas.data_type_ops.base import (
DataTypeOps,
is_valid_operand_for_numeric_arithmetic,
transform_boolean_operand_to_numeric,
_as_bool_type,
_as_categorical_type,
_as_other_type,
_as_string_type,
)
from pyspark.pandas.internal import InternalField
from pyspark.pandas.spark import functions as SF
from pyspark.pandas.typedef import extension_dtypes, pandas_on_spark_type
from pyspark.sql import functions as F
from pyspark.sql.column import Column
from pyspark.sql.types import (
BooleanType,
StringType,
TimestampType,
)
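Two numeric conventions used further down in this module can be checked in plain Python (an illustrative aside, not part of the module itself). First, the `mod`/`rmod` helpers emulate pandas semantics, where the remainder takes the sign of the divisor, on top of Spark's C-style `%`, which takes the sign of the dividend; the identity `((a % b) + b) % b` performs that conversion. Second, `pow_func` special-cases a base of 1 because pandas (following IEEE 754's `pow`) defines `1 ** x == 1` even when `x` is NaN.

```python
import math

def floored_mod(a, b):
    # math.fmod is a truncated remainder (sign of the dividend, like Spark's %);
    # ((a % b) + b) % b converts it to a floored remainder
    # (sign of the divisor, like pandas / Python's %).
    return (math.fmod(a, b) + b) % b

assert floored_mod(-7, 3) == -7 % 3 == 2
assert floored_mod(7, -3) == 7 % -3 == -2

# 1 ** NaN is defined to be 1, while NaN ** 2 stays NaN.
assert 1.0 ** float("nan") == 1.0
assert math.isnan(float("nan") ** 2.0)
```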
class NumericOps(DataTypeOps):
"""The class for binary operations of numeric pandas-on-Spark objects."""
@property
def pretty_name(self) -> str:
return "numerics"
def add(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("string addition can only be applied to string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("addition can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return column_op(Column.__add__)(left, right)
def sub(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("subtraction can not be applied to string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("subtraction can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return column_op(Column.__sub__)(left, right)
def mod(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("modulo can not be applied on string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("modulo can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
def mod(left: Column, right: Any) -> Column:
return ((left % right) + right) % right
return column_op(mod)(left, right)
def pow(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("exponentiation can not be applied on string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("exponentiation can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
def pow_func(left: Column, right: Any) -> Column:
return F.when(left == 1, left).otherwise(Column.__pow__(left, right))
return column_op(pow_func)(left, right)
def radd(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("string addition can only be applied to string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("addition can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right)
return column_op(Column.__radd__)(left, right)
def rsub(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("subtraction can not be applied to string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("subtraction can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right)
return column_op(Column.__rsub__)(left, right)
def rmul(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("multiplication can not be applied to a string literal.")
if not isinstance(right, numbers.Number):
raise TypeError("multiplication can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right)
return column_op(Column.__rmul__)(left, right)
def rpow(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("exponentiation can not be applied on string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("exponentiation can not be applied to given types.")
def rpow_func(left: Column, right: Any) -> Column:
return F.when(SF.lit(right == 1), right).otherwise(Column.__rpow__(left, right))
right = transform_boolean_operand_to_numeric(right)
return column_op(rpow_func)(left, right)
def rmod(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("modulo can not be applied on string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("modulo can not be applied to given types.")
def rmod(left: Column, right: Any) -> Column:
return ((right % left) + left) % left
right = transform_boolean_operand_to_numeric(right)
return column_op(rmod)(left, right)
class IntegralOps(NumericOps):
"""
The class for binary operations of pandas-on-Spark objects with spark types:
LongType, IntegerType, ByteType and ShortType.
"""
@property
def pretty_name(self) -> str:
return "integrals"
def mul(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("multiplication can not be applied to a string literal.")
if isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, TimestampType):
raise TypeError("multiplication can not be applied to date times.")
if isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType):
return column_op(SF.repeat)(right, left)
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("multiplication can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return column_op(Column.__mul__)(left, right)
def truediv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("division can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
def truediv(left: Column, right: Any) -> Column:
return F.when(
SF.lit(right != 0) | SF.lit(right).isNull(), left.__div__(right)
).otherwise(SF.lit(np.inf).__div__(left))
return numpy_column_op(truediv)(left, right)
def floordiv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("division can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
def floordiv(left: Column, right: Any) -> Column:
return F.when(SF.lit(right is np.nan), np.nan).otherwise(
F.when(
SF.lit(right != 0) | SF.lit(right).isNull(), F.floor(left.__div__(right))
).otherwise(SF.lit(np.inf).__div__(left))
)
return numpy_column_op(floordiv)(left, right)
def rtruediv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("division can not be applied to given types.")
def rtruediv(left: Column, right: Any) -> Column:
return F.when(left == 0, SF.lit(np.inf).__div__(right)).otherwise(
SF.lit(right).__truediv__(left)
)
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return numpy_column_op(rtruediv)(left, right)
def rfloordiv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("division can not be applied to given types.")
def rfloordiv(left: Column, right: Any) -> Column:
return F.when(SF.lit(left == 0), SF.lit(np.inf).__div__(right)).otherwise(
F.floor(SF.lit(right).__div__(left))
)
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return numpy_column_op(rfloordiv)(left, right)
def astype(self, index_ops: IndexOpsLike, dtype: Union[str, type, Dtype]) -> IndexOpsLike:
dtype, spark_type = pandas_on_spark_type(dtype)
if isinstance(dtype, CategoricalDtype):
return _as_categorical_type(index_ops, dtype, spark_type)
elif isinstance(spark_type, BooleanType):
return _as_bool_type(index_ops, dtype)
elif isinstance(spark_type, StringType):
return _as_string_type(index_ops, dtype, null_str=str(np.nan))
else:
return _as_other_type(index_ops, dtype, spark_type)
class FractionalOps(NumericOps):
"""
The class for binary operations of pandas-on-Spark objects with spark types:
FloatType, DoubleType.
"""
@property
def pretty_name(self) -> str:
return "fractions"
def mul(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("multiplication can not be applied to a string literal.")
if isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, TimestampType):
raise TypeError("multiplication can not be applied to date times.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("multiplication can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return column_op(Column.__mul__)(left, right)
def truediv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("division can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
def truediv(left: Column, right: Any) -> Column:
return F.when(
SF.lit(right != 0) | SF.lit(right).isNull(), left.__div__(right)
).otherwise(
F.when(SF.lit(left == np.inf) | SF.lit(left == -np.inf), left).otherwise(
SF.lit(np.inf).__div__(left)
)
)
return numpy_column_op(truediv)(left, right)
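These `when`/`otherwise` branches mirror NumPy's float-division semantics (finite / 0 gives a signed infinity, infinities survive division by zero, NaN propagates), which the pandas-on-Spark operators aim to reproduce. A minimal NumPy-only sketch of that target behaviour (no Spark involved):

```python
import numpy as np

# The semantics truediv above emulates for float columns:
# finite / 0 -> signed inf, inf / 0 stays inf, nan propagates.
with np.errstate(divide="ignore", invalid="ignore"):
    out = np.array([1.0, -1.0, np.inf, np.nan]) / 0.0
```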
def floordiv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if (
isinstance(right, IndexOpsMixin) and isinstance(right.spark.data_type, StringType)
) or isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not is_valid_operand_for_numeric_arithmetic(right):
raise TypeError("division can not be applied to given types.")
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
def floordiv(left: Column, right: Any) -> Column:
return F.when(SF.lit(right is np.nan), np.nan).otherwise(
F.when(
SF.lit(right != 0) | SF.lit(right).isNull(), F.floor(left.__div__(right))
).otherwise(
F.when(SF.lit(left == np.inf) | SF.lit(left == -np.inf), left).otherwise(
SF.lit(np.inf).__div__(left)
)
)
)
return numpy_column_op(floordiv)(left, right)
def rtruediv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("division can not be applied to given types.")
def rtruediv(left: Column, right: Any) -> Column:
return F.when(left == 0, SF.lit(np.inf).__div__(right)).otherwise(
SF.lit(right).__truediv__(left)
)
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return numpy_column_op(rtruediv)(left, right)
def rfloordiv(self, left: IndexOpsLike, right: Any) -> SeriesOrIndex:
if isinstance(right, str):
raise TypeError("division can not be applied on string series or literals.")
if not isinstance(right, numbers.Number):
raise TypeError("division can not be applied to given types.")
def rfloordiv(left: Column, right: Any) -> Column:
return F.when(SF.lit(left == 0), SF.lit(np.inf).__div__(right)).otherwise(
F.when(SF.lit(left) == np.nan, np.nan).otherwise(
F.floor(SF.lit(right).__div__(left))
)
)
right = transform_boolean_operand_to_numeric(right, left.spark.data_type)
return numpy_column_op(rfloordiv)(left, right)
def isnull(self, index_ops: IndexOpsLike) -> IndexOpsLike:
return index_ops._with_new_scol(
index_ops.spark.column.isNull() | F.isnan(index_ops.spark.column),
field=index_ops._internal.data_fields[0].copy(
dtype=np.dtype("bool"), spark_type=BooleanType(), nullable=False
),
)
def astype(self, index_ops: IndexOpsLike, dtype: Union[str, type, Dtype]) -> IndexOpsLike:
dtype, spark_type = pandas_on_spark_type(dtype)
if isinstance(dtype, CategoricalDtype):
return _as_categorical_type(index_ops, dtype, spark_type)
elif isinstance(spark_type, BooleanType):
if isinstance(dtype, extension_dtypes):
scol = index_ops.spark.column.cast(spark_type)
else:
scol = F.when(
index_ops.spark.column.isNull() | F.isnan(index_ops.spark.column),
SF.lit(True),
).otherwise(index_ops.spark.column.cast(spark_type))
return index_ops._with_new_scol(
scol.alias(index_ops._internal.data_spark_column_names[0]),
field=InternalField(dtype=dtype),
)
elif isinstance(spark_type, StringType):
return _as_string_type(index_ops, dtype, null_str=str(np.nan))
else:
return _as_other_type(index_ops, dtype, spark_type)
class DecimalOps(FractionalOps):
"""
The class for decimal operations of pandas-on-Spark objects with spark type:
DecimalType.
"""
@property
def pretty_name(self) -> str:
return "decimal"
def isnull(self, index_ops: IndexOpsLike) -> IndexOpsLike:
return index_ops._with_new_scol(
index_ops.spark.column.isNull(),
field=index_ops._internal.data_fields[0].copy(
dtype=np.dtype("bool"), spark_type=BooleanType(), nullable=False
),
)
def astype(self, index_ops: IndexOpsLike, dtype: Union[str, type, Dtype]) -> IndexOpsLike:
dtype, spark_type = pandas_on_spark_type(dtype)
if isinstance(dtype, CategoricalDtype):
return _as_categorical_type(index_ops, dtype, spark_type)
elif isinstance(spark_type, BooleanType):
return _as_bool_type(index_ops, dtype)
elif isinstance(spark_type, StringType):
return _as_string_type(index_ops, dtype, null_str=str(np.nan))
else:
return _as_other_type(index_ops, dtype, spark_type)
class IntegralExtensionOps(IntegralOps):
"""
The class for binary operations of pandas-on-Spark objects with one of the
- spark types:
LongType, IntegerType, ByteType and ShortType
- dtypes:
Int8Dtype, Int16Dtype, Int32Dtype, Int64Dtype
"""
def restore(self, col: pd.Series) -> pd.Series:
"""Restore column when to_pandas."""
return col.astype(self.dtype)
class FractionalExtensionOps(FractionalOps):
"""
The class for binary operations of pandas-on-Spark objects with one of the
- spark types:
FloatType, DoubleType and DecimalType
- dtypes:
Float32Dtype, Float64Dtype
"""
def restore(self, col: pd.Series) -> pd.Series:
"""Restore column when to_pandas."""
return col.astype(self.dtype)
# license: apache-2.0

# rsivapr/scikit-learn | sklearn/datasets/species_distributions.py
"""
=============================
Species distribution dataset
=============================
This dataset represents the geographic distribution of species.
The dataset is provided by Phillips et. al. (2006).
The two species are:
- `"Bradypus variegatus"
<http://www.iucnredlist.org/apps/redlist/details/3038/0>`_ ,
the Brown-throated Sloth.
- `"Microryzomys minutus"
<http://www.iucnredlist.org/apps/redlist/details/13408/0>`_ ,
   also known as the Forest Small Rice Rat, a rodent that lives in Peru,
   Colombia, Ecuador, and Venezuela.
References:
* `"Maximum entropy modeling of species geographic distributions"
<http://www.cs.princeton.edu/~schapire/papers/ecolmod.pdf>`_
S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,
190:231-259, 2006.
Notes:
* See examples/applications/plot_species_distribution_modeling.py
for an example of using this dataset
"""
# Authors: Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Jake Vanderplas <vanderplas@astro.washington.edu>
#
# License: BSD 3 clause
from io import BytesIO
from os import makedirs
from os.path import join
from os.path import exists
try:
# Python 2
from urllib2 import urlopen
except ImportError:
# Python 3
from urllib.request import urlopen
import numpy as np
from sklearn.datasets.base import get_data_home, Bunch
from sklearn.externals import joblib
DIRECTORY_URL = "http://www.cs.princeton.edu/~schapire/maxent/datasets/"
SAMPLES_URL = DIRECTORY_URL + "samples.zip"
COVERAGES_URL = DIRECTORY_URL + "coverages.zip"
DATA_ARCHIVE_NAME = "species_coverage.pkz"
def _load_coverage(F, header_length=6,
dtype=np.int16):
"""
load a coverage file.
This will return a numpy array of the given dtype
"""
    try:
        header = [F.readline() for i in range(header_length)]
    except AttributeError:
        # F was passed as a filename rather than an open file object
        F = open(F)
        header = [F.readline() for i in range(header_length)]
make_tuple = lambda t: (t.split()[0], float(t.split()[1]))
header = dict([make_tuple(line) for line in header])
M = np.loadtxt(F, dtype=dtype)
    nodata = header['NODATA_value']
    if nodata != -9999:
        M[M == nodata] = -9999
return M
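The nodata replacement is easiest to see on a toy grid; this sketch (with a made-up NODATA sentinel of -32768) applies the same boolean mask to map missing cells to -9999:

```python
import numpy as np

# Replace cells flagged with the grid's NODATA sentinel by -9999,
# as the coverage loader intends.
M = np.array([[1, 5, -32768],
              [5, -32768, 2]], dtype=np.int16)
nodata = -32768
M[M == nodata] = -9999
```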
def _load_csv(F):
"""Load csv file.
Parameters
----------
F : string or file object
file object or name of file
Returns
-------
rec : np.ndarray
record array representing the data
"""
    try:
        names = F.readline().strip().split(',')
    except AttributeError:
        # F was passed as a filename rather than an open file object
        F = open(F)
        names = F.readline().strip().split(',')
rec = np.loadtxt(F, skiprows=1, delimiter=',',
dtype='a22,f4,f4')
rec.dtype.names = names
return rec
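A minimal sketch of the round-trip `_load_csv` performs, using a made-up two-row sample instead of the real species file (the Unicode dtype here is an illustration; the loader itself uses a byte-string field):

```python
import numpy as np
from io import StringIO

# Toy CSV with the layout _load_csv expects: a header row of column
# names, then species / longitude / latitude rows.
F = StringIO(
    "species,dd long,dd lat\n"
    "bradypus_variegatus,-65.5,-10.4\n"
    "microryzomys_minutus,-77.1,-5.2\n"
)
names = F.readline().strip().split(',')
rec = np.loadtxt(F, delimiter=',', dtype='U22,f4,f4')
rec.dtype.names = names  # fields become 'species', 'dd long', 'dd lat'
```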
def construct_grids(batch):
"""Construct the map grid from the batch object
Parameters
----------
batch : Batch object
The object returned by :func:`fetch_species_distributions`
Returns
-------
(xgrid, ygrid) : 1-D arrays
The grid corresponding to the values in batch.coverages
"""
# x,y coordinates for corner cells
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
ymin = batch.y_left_lower_corner + batch.grid_size
ymax = ymin + (batch.Ny * batch.grid_size)
# x coordinates of the grid cells
xgrid = np.arange(xmin, xmax, batch.grid_size)
# y coordinates of the grid cells
ygrid = np.arange(ymin, ymax, batch.grid_size)
return (xgrid, ygrid)
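The grid arithmetic can be checked with a toy batch; the attribute values below are invented for illustration and `SimpleNamespace` stands in for the real Bunch:

```python
import numpy as np
from types import SimpleNamespace

# Made-up grid metadata with the same attributes the real batch carries.
batch = SimpleNamespace(x_left_lower_corner=-94.8,
                        y_left_lower_corner=-56.05,
                        Nx=4, Ny=3, grid_size=0.05)

# Same arithmetic as construct_grids, for the x axis only.
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
xgrid = np.arange(xmin, xmax, batch.grid_size)
```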
def fetch_species_distributions(data_home=None,
download_if_missing=True):
"""Loader for species distribution dataset from Phillips et. al. (2006)
Parameters
----------
data_home : optional, default: None
Specify another download and cache folder for the datasets. By default
all scikit learn data is stored in '~/scikit_learn_data' subfolders.
    download_if_missing : optional, True by default
        If False, raise an IOError if the data is not locally available
        instead of trying to download the data from the source site.

    Notes
    -----
This dataset represents the geographic distribution of species.
The dataset is provided by Phillips et. al. (2006).
The two species are:
- `"Bradypus variegatus"
<http://www.iucnredlist.org/apps/redlist/details/3038/0>`_ ,
the Brown-throated Sloth.
- `"Microryzomys minutus"
<http://www.iucnredlist.org/apps/redlist/details/13408/0>`_ ,
      also known as the Forest Small Rice Rat, a rodent that lives in Peru,
      Colombia, Ecuador, and Venezuela.
The data is returned as a Bunch object with the following attributes:
coverages : array, shape = [14, 1592, 1212]
These represent the 14 features measured at each point of the map grid.
The latitude/longitude values for the grid are discussed below.
Missing data is represented by the value -9999.
train : record array, shape = (1623,)
The training points for the data. Each point has three fields:
- train['species'] is the species name
- train['dd long'] is the longitude, in degrees
- train['dd lat'] is the latitude, in degrees
test : record array, shape = (619,)
The test points for the data. Same format as the training data.
Nx, Ny : integers
The number of longitudes (x) and latitudes (y) in the grid
x_left_lower_corner, y_left_lower_corner : floats
The (x,y) position of the lower-left corner, in degrees
grid_size : float
The spacing between points of the grid, in degrees
References
----------
* `"Maximum entropy modeling of species geographic distributions"
<http://www.cs.princeton.edu/~schapire/papers/ecolmod.pdf>`_
S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,
190:231-259, 2006.
Notes
-----
* See examples/applications/plot_species_distribution_modeling.py
for an example of using this dataset with scikit-learn
"""
data_home = get_data_home(data_home)
if not exists(data_home):
makedirs(data_home)
# Define parameters for the data files. These should not be changed
# unless the data model changes. They will be saved in the npz file
# with the downloaded data.
extra_params = dict(x_left_lower_corner=-94.8,
Nx=1212,
y_left_lower_corner=-56.05,
Ny=1592,
grid_size=0.05)
dtype = np.int16
if not exists(join(data_home, DATA_ARCHIVE_NAME)):
print('Downloading species data from %s to %s' % (SAMPLES_URL,
data_home))
X = np.load(BytesIO(urlopen(SAMPLES_URL).read()))
for f in X.files:
fhandle = BytesIO(X[f])
if 'train' in f:
train = _load_csv(fhandle)
if 'test' in f:
test = _load_csv(fhandle)
print('Downloading coverage data from %s to %s' % (COVERAGES_URL,
data_home))
X = np.load(BytesIO(urlopen(COVERAGES_URL).read()))
coverages = []
for f in X.files:
fhandle = BytesIO(X[f])
print(' - converting', f)
coverages.append(_load_coverage(fhandle))
coverages = np.asarray(coverages,
dtype=dtype)
bunch = Bunch(coverages=coverages,
test=test,
train=train,
**extra_params)
joblib.dump(bunch, join(data_home, DATA_ARCHIVE_NAME), compress=9)
else:
bunch = joblib.load(join(data_home, DATA_ARCHIVE_NAME))
return bunch
# license: bsd-3-clause

# costypetrisor/scikit-learn | sklearn/preprocessing/data.py
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Mathieu Blondel <mathieu@mblondel.org>
# Olivier Grisel <olivier.grisel@ensta.org>
# Andreas Mueller <amueller@ais.uni-bonn.de>
# Eric Martin <eric@ericmart.in>
# License: BSD 3 clause
from itertools import chain, combinations
import numbers
import warnings
import numpy as np
from scipy import sparse
from ..base import BaseEstimator, TransformerMixin
from ..externals import six
from ..utils import check_array
from ..utils.extmath import row_norms
from ..utils.fixes import combinations_with_replacement as combinations_w_r
from ..utils.sparsefuncs_fast import (inplace_csr_row_normalize_l1,
inplace_csr_row_normalize_l2)
from ..utils.sparsefuncs import (inplace_column_scale, inplace_row_scale,
                                 mean_variance_axis, min_max_axis)
from ..utils.validation import check_is_fitted, FLOAT_DTYPES
zip = six.moves.zip
map = six.moves.map
range = six.moves.range
__all__ = [
'Binarizer',
'KernelCenterer',
'MinMaxScaler',
'Normalizer',
'OneHotEncoder',
'RobustScaler',
'StandardScaler',
'add_dummy_feature',
'binarize',
'normalize',
'scale',
'robust_scale',
]
def _mean_and_std(X, axis=0, with_mean=True, with_std=True):
"""Compute mean and std deviation for centering, scaling.
Zero valued std components are reset to 1.0 to avoid NaNs when scaling.
"""
X = np.asarray(X)
Xr = np.rollaxis(X, axis)
if with_mean:
mean_ = Xr.mean(axis=0)
else:
mean_ = None
if with_std:
std_ = Xr.std(axis=0)
if isinstance(std_, np.ndarray):
std_[std_ == 0.] = 1.0
elif std_ == 0.:
std_ = 1.
else:
std_ = None
return mean_, std_
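The zero-std reset is the whole point of `_mean_and_std`: a constant feature has zero standard deviation, and resetting those entries to 1.0 keeps the later `X / std_` from producing NaNs. On a toy matrix with one constant column:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [1.0, 4.0],
              [1.0, 6.0]])
mean_ = X.mean(axis=0)
std_ = X.std(axis=0)
# Constant column 0 has std 0; reset it to 1.0 so scaling is a no-op there.
std_[std_ == 0.0] = 1.0
```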
def scale(X, axis=0, with_mean=True, with_std=True, copy=True):
"""Standardize a dataset along any axis
Center to the mean and component wise scale to unit variance.
Parameters
----------
X : array-like or CSR matrix.
The data to center and scale.
axis : int (0 by default)
axis used to compute the means and standard deviations along. If 0,
independently standardize each feature, otherwise (if 1) standardize
each sample.
with_mean : boolean, True by default
If True, center the data before scaling.
with_std : boolean, True by default
If True, scale the data to unit variance (or equivalently,
unit standard deviation).
copy : boolean, optional, default True
set to False to perform inplace row normalization and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix and if axis is 1).
Notes
-----
This implementation will refuse to center scipy.sparse matrices
since it would make them non-sparse and would potentially crash the
program with memory exhaustion problems.
Instead the caller is expected to either set explicitly
`with_mean=False` (in that case, only variance scaling will be
performed on the features of the CSR matrix) or to call `X.toarray()`
if he/she expects the materialized dense array to fit in memory.
To avoid memory copy the caller should pass a CSR matrix.
See also
--------
:class:`sklearn.preprocessing.StandardScaler` to perform centering and
scaling using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
X = check_array(X, accept_sparse='csr', copy=copy, ensure_2d=False,
warn_on_dtype=True, estimator='the scale function',
dtype=FLOAT_DTYPES)
if sparse.issparse(X):
if with_mean:
raise ValueError(
"Cannot center sparse matrices: pass `with_mean=False` instead"
" See docstring for motivation and alternatives.")
if axis != 0:
raise ValueError("Can only scale sparse matrix on axis=0, "
" got axis=%d" % axis)
if not sparse.isspmatrix_csr(X):
X = X.tocsr()
copy = False
if copy:
X = X.copy()
_, var = mean_variance_axis(X, axis=0)
var[var == 0.0] = 1.0
inplace_column_scale(X, 1 / np.sqrt(var))
else:
X = np.asarray(X)
mean_, std_ = _mean_and_std(
X, axis, with_mean=with_mean, with_std=with_std)
if copy:
X = X.copy()
# Xr is a view on the original array that enables easy use of
# broadcasting on the axis in which we are interested in
Xr = np.rollaxis(X, axis)
if with_mean:
Xr -= mean_
mean_1 = Xr.mean(axis=0)
# Verify that mean_1 is 'close to zero'. If X contains very
# large values, mean_1 can also be very large, due to a lack of
# precision of mean_. In this case, a pre-scaling of the
# concerned feature is efficient, for instance by its mean or
# maximum.
if not np.allclose(mean_1, 0):
warnings.warn("Numerical issues were encountered "
"when centering the data "
"and might not be solved. Dataset may "
"contain too large values. You may need "
"to prescale your features.")
Xr -= mean_1
if with_std:
Xr /= std_
if with_mean:
mean_2 = Xr.mean(axis=0)
# If mean_2 is not 'close to zero', it comes from the fact that
# std_ is very small so that mean_2 = mean_1/std_ > 0, even if
# mean_1 was close to zero. The problem is thus essentially due
# to the lack of precision of mean_. A solution is then to
                # subtract the mean again:
if not np.allclose(mean_2, 0):
warnings.warn("Numerical issues were encountered "
"when scaling the data "
"and might not be solved. The standard "
"deviation of the data is probably "
"very close to 0. ")
Xr -= mean_2
return X
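For a dense array with the defaults, `scale` reduces to the familiar standardization formula; a NumPy-only sketch of the core computation (omitting the sparse path and the numerical-precision re-centering above):

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])
# Center each column to zero mean, then divide by the column std.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
```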
class MinMaxScaler(BaseEstimator, TransformerMixin):
"""Standardizes features by scaling each feature to a given range.
This estimator scales and translates each feature individually such
that it is in the given range on the training set, i.e. between
zero and one.
The standardization is given by::
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
where min, max = feature_range.
This standardization is often used as an alternative to zero mean,
unit variance scaling.
Parameters
----------
    feature_range : tuple (min, max), default=(0, 1)
Desired range of transformed data.
copy : boolean, optional, default True
Set to False to perform inplace row normalization and avoid a
copy (if the input is already a numpy array).
Attributes
----------
min_ : ndarray, shape (n_features,)
Per feature adjustment for minimum.
scale_ : ndarray, shape (n_features,)
Per feature relative scaling of the data.
"""
def __init__(self, feature_range=(0, 1), copy=True):
self.feature_range = feature_range
self.copy = copy
def fit(self, X, y=None):
"""Compute the minimum and maximum to be used for later scaling.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data used to compute the per-feature minimum and maximum
used for later scaling along the features axis.
"""
X = check_array(X, copy=self.copy, ensure_2d=False, warn_on_dtype=True,
estimator=self, dtype=FLOAT_DTYPES)
feature_range = self.feature_range
if feature_range[0] >= feature_range[1]:
raise ValueError("Minimum of desired feature range must be smaller"
" than maximum. Got %s." % str(feature_range))
data_min = np.min(X, axis=0)
data_range = np.max(X, axis=0) - data_min
# Do not scale constant features
if isinstance(data_range, np.ndarray):
data_range[data_range == 0.0] = 1.0
elif data_range == 0.:
data_range = 1.
self.scale_ = (feature_range[1] - feature_range[0]) / data_range
self.min_ = feature_range[0] - data_min * self.scale_
self.data_range = data_range
self.data_min = data_min
return self
def transform(self, X):
"""Scaling features of X according to feature_range.
Parameters
----------
X : array-like with shape [n_samples, n_features]
Input data that will be transformed.
"""
check_is_fitted(self, 'scale_')
X = check_array(X, copy=self.copy, ensure_2d=False)
X *= self.scale_
X += self.min_
return X
def inverse_transform(self, X):
"""Undo the scaling of X according to feature_range.
Parameters
----------
X : array-like with shape [n_samples, n_features]
Input data that will be transformed.
"""
check_is_fitted(self, 'scale_')
X = check_array(X, copy=self.copy, ensure_2d=False)
X -= self.min_
X /= self.scale_
return X
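The fit/transform arithmetic of `MinMaxScaler` can be written out by hand for the default `feature_range=(0, 1)`; a NumPy-only sketch:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 6.0]])
# fit: per-feature minimum and range.
data_min = X.min(axis=0)
data_range = X.max(axis=0) - data_min
scale_ = (1.0 - 0.0) / data_range   # (max - min) of feature_range over range
min_ = 0.0 - data_min * scale_
# transform: affine map into [0, 1].
X_scaled = X * scale_ + min_
```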
class StandardScaler(BaseEstimator, TransformerMixin):
"""Standardize features by removing the mean and scaling to unit variance
Centering and scaling happen independently on each feature by computing
the relevant statistics on the samples in the training set. Mean and
standard deviation are then stored to be used on later data using the
`transform` method.
Standardization of a dataset is a common requirement for many
machine learning estimators: they might behave badly if the
    individual features do not more or less look like standard normally
distributed data (e.g. Gaussian with 0 mean and unit variance).
For instance many elements used in the objective function of
a learning algorithm (such as the RBF kernel of Support Vector
Machines or the L1 and L2 regularizers of linear models) assume that
all features are centered around 0 and have variance in the same
order. If a feature has a variance that is orders of magnitude larger
    than others, it might dominate the objective function and make the
estimator unable to learn from other features correctly as expected.
Parameters
----------
with_mean : boolean, True by default
If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
with_std : boolean, True by default
If True, scale the data to unit variance (or equivalently,
unit standard deviation).
copy : boolean, optional, default True
If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
Attributes
----------
mean_ : array of floats with shape [n_features]
The mean value for each feature in the training set.
std_ : array of floats with shape [n_features]
The standard deviation for each feature in the training set.
See also
--------
:func:`sklearn.preprocessing.scale` to perform centering and
scaling without using the ``Transformer`` object oriented API
:class:`sklearn.decomposition.RandomizedPCA` with `whiten=True`
to further remove the linear correlation across features.
"""
def __init__(self, copy=True, with_mean=True, with_std=True):
self.with_mean = with_mean
self.with_std = with_std
self.copy = copy
def fit(self, X, y=None):
"""Compute the mean and std to be used for later scaling.
Parameters
----------
X : array-like or CSR matrix with shape [n_samples, n_features]
The data used to compute the mean and standard deviation
used for later scaling along the features axis.
"""
X = check_array(X, accept_sparse='csr', copy=self.copy,
ensure_2d=False, warn_on_dtype=True,
estimator=self, dtype=FLOAT_DTYPES)
if sparse.issparse(X):
if self.with_mean:
raise ValueError(
"Cannot center sparse matrices: pass `with_mean=False` "
"instead. See docstring for motivation and alternatives.")
self.mean_ = None
if self.with_std:
var = mean_variance_axis(X, axis=0)[1]
self.std_ = np.sqrt(var)
self.std_[var == 0.0] = 1.0
else:
self.std_ = None
return self
else:
self.mean_, self.std_ = _mean_and_std(
X, axis=0, with_mean=self.with_mean, with_std=self.with_std)
return self
def transform(self, X, y=None, copy=None):
"""Perform standardization by centering and scaling
Parameters
----------
X : array-like with shape [n_samples, n_features]
The data used to scale along the features axis.
"""
check_is_fitted(self, 'std_')
copy = copy if copy is not None else self.copy
X = check_array(X, accept_sparse='csr', copy=copy,
ensure_2d=False, warn_on_dtype=True,
estimator=self, dtype=FLOAT_DTYPES)
if sparse.issparse(X):
if self.with_mean:
raise ValueError(
"Cannot center sparse matrices: pass `with_mean=False` "
"instead. See docstring for motivation and alternatives.")
if self.std_ is not None:
inplace_column_scale(X, 1 / self.std_)
else:
if self.with_mean:
X -= self.mean_
if self.with_std:
X /= self.std_
return X
def inverse_transform(self, X, copy=None):
"""Scale back the data to the original representation
Parameters
----------
X : array-like with shape [n_samples, n_features]
The data used to scale along the features axis.
"""
check_is_fitted(self, 'std_')
copy = copy if copy is not None else self.copy
if sparse.issparse(X):
if self.with_mean:
raise ValueError(
"Cannot uncenter sparse matrices: pass `with_mean=False` "
"instead See docstring for motivation and alternatives.")
if not sparse.isspmatrix_csr(X):
X = X.tocsr()
copy = False
if copy:
X = X.copy()
if self.std_ is not None:
inplace_column_scale(X, self.std_)
else:
X = np.asarray(X)
if copy:
X = X.copy()
if self.with_std:
X *= self.std_
if self.with_mean:
X += self.mean_
return X
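A quick dense-case check that `transform` and `inverse_transform` are inverses, written with plain NumPy rather than the estimator itself: `Xt = (X - mean_) / std_` followed by `Xt * std_ + mean_` recovers `X`.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 3) * 4.0 + 2.0
# fit: per-feature statistics.
mean_, std_ = X.mean(axis=0), X.std(axis=0)
# transform, then invert.
Xt = (X - mean_) / std_
X_back = Xt * std_ + mean_
```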
class RobustScaler(BaseEstimator, TransformerMixin):
"""Scale features using statistics that are robust to outliers.
This Scaler removes the median and scales the data according to
the Interquartile Range (IQR). The IQR is the range between the 1st
quartile (25th quantile) and the 3rd quartile (75th quantile).
Centering and scaling happen independently on each feature (or each
sample, depending on the `axis` argument) by computing the relevant
statistics on the samples in the training set. Median and interquartile
range are then stored to be used on later data using the `transform`
method.
Standardization of a dataset is a common requirement for many
machine learning estimators. Typically this is done by removing the mean
and scaling to unit variance. However, outliers can often influence the
sample mean / variance in a negative way. In such cases, the median and
the interquartile range often give better results.
Parameters
----------
with_centering : boolean, True by default
If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
with_scaling : boolean, True by default
If True, scale the data to interquartile range.
copy : boolean, optional, default is True
If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
Attributes
----------
`center_` : array of floats
The median value for each feature in the training set.
`scale_` : array of floats
The (scaled) interquartile range for each feature in the training set.
See also
--------
:class:`sklearn.preprocessing.StandardScaler` to perform centering
and scaling using mean and variance.
:class:`sklearn.decomposition.RandomizedPCA` with `whiten=True`
to further remove the linear correlation across features.
Notes
-----
See examples/preprocessing/plot_robust_scaling.py for an example.
http://en.wikipedia.org/wiki/Median_(statistics)
http://en.wikipedia.org/wiki/Interquartile_range
"""
def __init__(self, with_centering=True, with_scaling=True, copy=True):
self.with_centering = with_centering
self.with_scaling = with_scaling
self.copy = copy
def _check_array(self, X, copy):
"""Makes sure centering is not enabled for sparse matrices."""
        X = check_array(X, accept_sparse=('csr', 'csc'), dtype=np.float64,
copy=copy, ensure_2d=False)
if sparse.issparse(X):
if self.with_centering:
raise ValueError(
"Cannot center sparse matrices: use `with_centering=False`"
" instead. See docstring for motivation and alternatives.")
return X
def _handle_zeros_in_scale(self, scale):
''' Makes sure that whenever scale is zero, we handle it correctly.
This happens in most scalers when we have constant features.'''
# if we are fitting on 1D arrays, scale might be a scalar
if np.isscalar(scale):
if scale == 0:
scale = 1.
elif isinstance(scale, np.ndarray):
scale[scale == 0.0] = 1.0
scale[~np.isfinite(scale)] = 1.0
return scale
def fit(self, X, y=None):
"""Compute the median and quantiles to be used for scaling.
Parameters
----------
X : array-like with shape [n_samples, n_features]
The data used to compute the median and quantiles
used for later scaling along the features axis.
"""
if sparse.issparse(X):
raise TypeError("RobustScaler cannot be fitted on sparse inputs")
X = self._check_array(X, self.copy)
if self.with_centering:
self.center_ = np.median(X, axis=0)
if self.with_scaling:
            q = np.percentile(X, (25, 75), axis=0)
            self.scale_ = self._handle_zeros_in_scale(q[1] - q[0])
return self
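The robust statistics stored by `fit` are insensitive to outliers; a NumPy-only sketch with one extreme value, showing that the median and the interquartile range barely move:

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [100.0]])
# Same statistics RobustScaler.fit stores: median as center_,
# interquartile range (q75 - q25) as scale_.
center_ = np.median(X, axis=0)
q = np.percentile(X, (25, 75), axis=0)
scale_ = q[1] - q[0]
```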
def transform(self, X, y=None):
"""Center and scale the data
Parameters
----------
X : array-like or CSR matrix.
The data used to scale along the specified axis.
"""
if self.with_centering:
check_is_fitted(self, 'center_')
if self.with_scaling:
check_is_fitted(self, 'scale_')
X = self._check_array(X, self.copy)
if sparse.issparse(X):
if self.with_scaling:
if X.shape[0] == 1:
inplace_row_scale(X, 1.0 / self.scale_)
                else:
                    inplace_column_scale(X, 1.0 / self.scale_)
else:
if self.with_centering:
X -= self.center_
if self.with_scaling:
X /= self.scale_
return X
def inverse_transform(self, X):
"""Scale back the data to the original representation
Parameters
----------
X : array-like or CSR matrix.
The data used to scale along the specified axis.
"""
if self.with_centering:
check_is_fitted(self, 'center_')
if self.with_scaling:
check_is_fitted(self, 'scale_')
X = self._check_array(X, self.copy)
if sparse.issparse(X):
if self.with_scaling:
if X.shape[0] == 1:
inplace_row_scale(X, self.scale_)
else:
inplace_column_scale(X, self.scale_)
else:
if self.with_scaling:
X *= self.scale_
if self.with_centering:
X += self.center_
return X
def robust_scale(X, axis=0, with_centering=True, with_scaling=True, copy=True):
"""Standardize a dataset along any axis
Center to the median and component wise scale
according to the interquartile range.
Parameters
----------
X : array-like.
The data to center and scale.
axis : int (0 by default)
axis used to compute the medians and IQR along. If 0,
independently scale each feature, otherwise (if 1) scale
each sample.
with_centering : boolean, True by default
If True, center the data before scaling.
with_scaling : boolean, True by default
        If True, scale the data to the interquartile range.
copy : boolean, optional, default is True
set to False to perform inplace row normalization and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix and if axis is 1).
Notes
-----
This implementation will refuse to center scipy.sparse matrices
since it would make them non-sparse and would potentially crash the
program with memory exhaustion problems.
Instead the caller is expected to either set explicitly
`with_centering=False` (in that case, only variance scaling will be
performed on the features of the CSR matrix) or to call `X.toarray()`
if he/she expects the materialized dense array to fit in memory.
To avoid memory copy the caller should pass a CSR matrix.
See also
--------
:class:`sklearn.preprocessing.RobustScaler` to perform centering and
scaling using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
s = RobustScaler(with_centering=with_centering, with_scaling=with_scaling,
copy=copy)
if axis == 0:
return s.fit_transform(X)
else:
return s.fit_transform(X.T).T
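As a rough illustration of what robust_scale does per feature (centering on the median, scaling by the interquartile range), here is a plain NumPy sketch; the data and values are hypothetical:

```python
import numpy as np

# Hypothetical data: note the outlier in the first column, which would
# dominate a mean/std based scaler but barely moves the median and IQR.
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0],
              [100.0, 40.0]])
median = np.median(X, axis=0)
q75, q25 = np.percentile(X, [75, 25], axis=0)
X_scaled = (X - median) / (q75 - q25)   # per-feature median/IQR scaling
```

After scaling, each column has median zero regardless of the outlier.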
class PolynomialFeatures(BaseEstimator, TransformerMixin):
"""Generate polynomial and interaction features.
Generate a new feature matrix consisting of all polynomial combinations
of the features with degree less than or equal to the specified degree.
For example, if an input sample is two dimensional and of the form
[a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2].
Parameters
----------
degree : integer
The degree of the polynomial features. Default = 2.
interaction_only : boolean, default = False
If true, only interaction features are produced: features that are
products of at most ``degree`` *distinct* input features (so not
``x[1] ** 2``, ``x[0] * x[2] ** 3``, etc.).
include_bias : boolean
        If True (default), then include a bias column, the feature in which
        all polynomial powers are zero (i.e. a column of ones, acting as an
        intercept term in a linear model).
Examples
--------
>>> X = np.arange(6).reshape(3, 2)
>>> X
array([[0, 1],
[2, 3],
[4, 5]])
>>> poly = PolynomialFeatures(2)
>>> poly.fit_transform(X)
array([[ 1, 0, 1, 0, 0, 1],
[ 1, 2, 3, 4, 6, 9],
[ 1, 4, 5, 16, 20, 25]])
>>> poly = PolynomialFeatures(interaction_only=True)
>>> poly.fit_transform(X)
array([[ 1, 0, 1, 0],
[ 1, 2, 3, 6],
[ 1, 4, 5, 20]])
Attributes
----------
    powers_ : array, shape (n_output_features, n_input_features)
powers_[i, j] is the exponent of the jth input in the ith output.
n_input_features_ : int
The total number of input features.
n_output_features_ : int
The total number of polynomial output features. The number of output
features is computed by iterating over all suitably sized combinations
of input features.
Notes
-----
Be aware that the number of features in the output array scales
polynomially in the number of features of the input array, and
exponentially in the degree. High degrees can cause overfitting.
See :ref:`examples/linear_model/plot_polynomial_interpolation.py
<example_linear_model_plot_polynomial_interpolation.py>`
"""
def __init__(self, degree=2, interaction_only=False, include_bias=True):
self.degree = degree
self.interaction_only = interaction_only
self.include_bias = include_bias
@staticmethod
def _combinations(n_features, degree, interaction_only, include_bias):
comb = (combinations if interaction_only else combinations_w_r)
start = int(not include_bias)
return chain.from_iterable(comb(range(n_features), i)
for i in range(start, degree + 1))
@property
def powers_(self):
check_is_fitted(self, 'n_input_features_')
combinations = self._combinations(self.n_input_features_, self.degree,
self.interaction_only,
self.include_bias)
        return np.vstack([np.bincount(c, minlength=self.n_input_features_)
                          for c in combinations])
def fit(self, X, y=None):
"""
Compute number of output features.
"""
n_samples, n_features = check_array(X).shape
combinations = self._combinations(n_features, self.degree,
self.interaction_only,
self.include_bias)
self.n_input_features_ = n_features
self.n_output_features_ = sum(1 for _ in combinations)
return self
def transform(self, X, y=None):
"""Transform data to polynomial features
Parameters
----------
X : array with shape [n_samples, n_features]
The data to transform, row by row.
Returns
-------
XP : np.ndarray shape [n_samples, NP]
The matrix of features, where NP is the number of polynomial
features generated from the combination of inputs.
"""
check_is_fitted(self, ['n_input_features_', 'n_output_features_'])
X = check_array(X)
n_samples, n_features = X.shape
if n_features != self.n_input_features_:
raise ValueError("X shape does not match training shape")
# allocate output data
XP = np.empty((n_samples, self.n_output_features_), dtype=X.dtype)
combinations = self._combinations(n_features, self.degree,
self.interaction_only,
self.include_bias)
for i, c in enumerate(combinations):
XP[:, i] = X[:, c].prod(1)
return XP
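The `_combinations` helper above is the heart of the transform: each output column corresponds to one combination (with replacement, unless ``interaction_only``) of input feature indices, where repeated indices encode higher powers. A standalone sketch of the same enumeration:

```python
from itertools import chain, combinations, combinations_with_replacement

def poly_combinations(n_features, degree, interaction_only=False,
                      include_bias=True):
    # Mirrors PolynomialFeatures._combinations: one tuple of input
    # indices per output feature; repeated indices encode higher powers.
    comb = combinations if interaction_only else combinations_with_replacement
    start = int(not include_bias)
    return list(chain.from_iterable(comb(range(n_features), i)
                                    for i in range(start, degree + 1)))

# Two features [a, b], degree 2 -> 1, a, b, a^2, ab, b^2
terms = poly_combinations(2, 2)
```

With ``interaction_only=True`` the squared terms ``(0, 0)`` and ``(1, 1)`` drop out, matching the second docstring example.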
def normalize(X, norm='l2', axis=1, copy=True):
"""Scale input vectors individually to unit norm (vector length).
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to normalize, element by element.
        scipy.sparse matrices should be in CSR format to avoid an
        unnecessary copy.
norm : 'l1', 'l2', or 'max', optional ('l2' by default)
The norm to use to normalize each non zero sample (or each non-zero
feature if axis is 0).
axis : 0 or 1, optional (1 by default)
axis used to normalize the data along. If 1, independently normalize
each sample, otherwise (if 0) normalize each feature.
copy : boolean, optional, default True
set to False to perform inplace row normalization and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix and if axis is 1).
See also
--------
:class:`sklearn.preprocessing.Normalizer` to perform normalization
using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
if norm not in ('l1', 'l2', 'max'):
raise ValueError("'%s' is not a supported norm" % norm)
if axis == 0:
sparse_format = 'csc'
elif axis == 1:
sparse_format = 'csr'
else:
raise ValueError("'%d' is not a supported axis" % axis)
X = check_array(X, sparse_format, copy=copy, warn_on_dtype=True,
estimator='the normalize function', dtype=FLOAT_DTYPES)
if axis == 0:
X = X.T
if sparse.issparse(X):
if norm == 'l1':
inplace_csr_row_normalize_l1(X)
elif norm == 'l2':
inplace_csr_row_normalize_l2(X)
elif norm == 'max':
_, norms = min_max_axis(X, 1)
norms = norms.repeat(np.diff(X.indptr))
mask = norms != 0
X.data[mask] /= norms[mask]
else:
if norm == 'l1':
norms = np.abs(X).sum(axis=1)
elif norm == 'l2':
norms = row_norms(X)
elif norm == 'max':
norms = np.max(X, axis=1)
norms[norms == 0.0] = 1.0
X /= norms[:, np.newaxis]
if axis == 0:
X = X.T
return X
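For the dense l2 branch above, the row-scaling logic reduces to a few NumPy operations; a minimal sketch with made-up data:

```python
import numpy as np

X = np.array([[3.0, 4.0],
              [6.0, 8.0]])
norms = np.sqrt((X ** 2).sum(axis=1))   # equivalent of row_norms(X)
norms[norms == 0.0] = 1.0               # leave all-zero rows untouched
X_normed = X / norms[:, np.newaxis]
```

Each non-zero row now has unit l2 norm (the first row becomes [0.6, 0.8]).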
class Normalizer(BaseEstimator, TransformerMixin):
"""Normalize samples individually to unit norm.
Each sample (i.e. each row of the data matrix) with at least one
non zero component is rescaled independently of other samples so
that its norm (l1 or l2) equals one.
This transformer is able to work both with dense numpy arrays and
scipy.sparse matrix (use CSR format if you want to avoid the burden of
a copy / conversion).
Scaling inputs to unit norms is a common operation for text
classification or clustering for instance. For instance the dot
product of two l2-normalized TF-IDF vectors is the cosine similarity
of the vectors and is the base similarity metric for the Vector
Space Model commonly used by the Information Retrieval community.
Parameters
----------
norm : 'l1', 'l2', or 'max', optional ('l2' by default)
The norm to use to normalize each non zero sample.
copy : boolean, optional, default True
set to False to perform inplace row normalization and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix).
Notes
-----
This estimator is stateless (besides constructor parameters), the
fit method does nothing but is useful when used in a pipeline.
See also
--------
:func:`sklearn.preprocessing.normalize` equivalent function
without the object oriented API
"""
def __init__(self, norm='l2', copy=True):
self.norm = norm
self.copy = copy
def fit(self, X, y=None):
"""Do nothing and return the estimator unchanged
This method is just there to implement the usual API and hence
work in pipelines.
"""
X = check_array(X, accept_sparse='csr')
return self
def transform(self, X, y=None, copy=None):
"""Scale each non zero row of X to unit norm
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to normalize, row by row. scipy.sparse matrices should be
            in CSR format to avoid an unnecessary copy.
"""
copy = copy if copy is not None else self.copy
X = check_array(X, accept_sparse='csr')
return normalize(X, norm=self.norm, axis=1, copy=copy)
def binarize(X, threshold=0.0, copy=True):
"""Boolean thresholding of array-like or scipy.sparse matrix
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to binarize, element by element.
scipy.sparse matrices should be in CSR or CSC format to avoid an
        unnecessary copy.
threshold : float, optional (0.0 by default)
Feature values below or equal to this are replaced by 0, above it by 1.
Threshold may not be less than 0 for operations on sparse matrices.
copy : boolean, optional, default True
set to False to perform inplace binarization and avoid a copy
(if the input is already a numpy array or a scipy.sparse CSR / CSC
matrix and if axis is 1).
See also
--------
:class:`sklearn.preprocessing.Binarizer` to perform binarization
using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
X = check_array(X, accept_sparse=['csr', 'csc'], copy=copy)
if sparse.issparse(X):
if threshold < 0:
raise ValueError('Cannot binarize a sparse matrix with threshold '
'< 0')
cond = X.data > threshold
not_cond = np.logical_not(cond)
X.data[cond] = 1
X.data[not_cond] = 0
X.eliminate_zeros()
else:
cond = X > threshold
not_cond = np.logical_not(cond)
X[cond] = 1
X[not_cond] = 0
return X
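For dense input, the thresholding above amounts to a single boolean comparison; a short NumPy sketch with hypothetical data:

```python
import numpy as np

X = np.array([[0.5, -1.0, 2.0],
              [0.0, 0.1, -0.2]])
# Values strictly above the threshold map to 1, the rest (including the
# threshold itself) to 0 -- here with the default threshold of 0.0.
B = (X > 0.0).astype(np.float64)
```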
class Binarizer(BaseEstimator, TransformerMixin):
"""Binarize data (set feature values to 0 or 1) according to a threshold
Values greater than the threshold map to 1, while values less than
or equal to the threshold map to 0. With the default threshold of 0,
only positive values map to 1.
Binarization is a common operation on text count data where the
analyst can decide to only consider the presence or absence of a
feature rather than a quantified number of occurrences for instance.
It can also be used as a pre-processing step for estimators that
consider boolean random variables (e.g. modelled using the Bernoulli
distribution in a Bayesian setting).
Parameters
----------
threshold : float, optional (0.0 by default)
Feature values below or equal to this are replaced by 0, above it by 1.
Threshold may not be less than 0 for operations on sparse matrices.
copy : boolean, optional, default True
set to False to perform inplace binarization and avoid a copy (if
the input is already a numpy array or a scipy.sparse CSR matrix).
Notes
-----
If the input is a sparse matrix, only the non-zero values are subject
to update by the Binarizer class.
This estimator is stateless (besides constructor parameters), the
fit method does nothing but is useful when used in a pipeline.
"""
def __init__(self, threshold=0.0, copy=True):
self.threshold = threshold
self.copy = copy
def fit(self, X, y=None):
"""Do nothing and return the estimator unchanged
This method is just there to implement the usual API and hence
work in pipelines.
"""
check_array(X, accept_sparse='csr')
return self
def transform(self, X, y=None, copy=None):
"""Binarize each element of X
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to binarize, element by element.
scipy.sparse matrices should be in CSR format to avoid an
            unnecessary copy.
"""
copy = copy if copy is not None else self.copy
return binarize(X, threshold=self.threshold, copy=copy)
class KernelCenterer(BaseEstimator, TransformerMixin):
"""Center a kernel matrix
Let K(x, z) be a kernel defined by phi(x)^T phi(z), where phi is a
    function mapping x to a Hilbert space. KernelCenterer centers (i.e.,
    normalizes to have zero mean) the data without explicitly computing phi(x).
It is equivalent to centering phi(x) with
sklearn.preprocessing.StandardScaler(with_std=False).
"""
def fit(self, K, y=None):
"""Fit KernelCenterer
Parameters
----------
K : numpy array of shape [n_samples, n_samples]
Kernel matrix.
Returns
-------
self : returns an instance of self.
"""
K = check_array(K)
n_samples = K.shape[0]
self.K_fit_rows_ = np.sum(K, axis=0) / n_samples
self.K_fit_all_ = self.K_fit_rows_.sum() / n_samples
return self
def transform(self, K, y=None, copy=True):
"""Center kernel matrix.
Parameters
----------
K : numpy array of shape [n_samples1, n_samples2]
Kernel matrix.
copy : boolean, optional, default True
Set to False to perform inplace computation.
Returns
-------
K_new : numpy array of shape [n_samples1, n_samples2]
"""
check_is_fitted(self, 'K_fit_all_')
K = check_array(K)
if copy:
K = K.copy()
K_pred_cols = (np.sum(K, axis=1) /
self.K_fit_rows_.shape[0])[:, np.newaxis]
K -= self.K_fit_rows_
K -= K_pred_cols
K += self.K_fit_all_
return K
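The transform above implements the standard double-centering formula K_c = K - 1_n K - K 1_n + 1_n K 1_n. For a linear kernel this equals the kernel of column-centered data, which gives a quick sanity check (random hypothetical data, assuming fit and transform use the same K, so the fitted row/column means both collapse to products with one_n):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 3)
K = np.dot(X, X.T)                  # linear kernel
n = K.shape[0]
one_n = np.ones((n, n)) / n
# Double centering, as in KernelCenterer.transform.
K_centered = (K - np.dot(one_n, K) - np.dot(K, one_n)
              + np.dot(np.dot(one_n, K), one_n))
Xc = X - X.mean(axis=0)             # explicit centering in feature space
```

`K_centered` agrees with `np.dot(Xc, Xc.T)`, i.e. centering phi(x) without ever forming phi(x) explicitly.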
def add_dummy_feature(X, value=1.0):
"""Augment dataset with an additional dummy feature.
This is useful for fitting an intercept term with implementations which
cannot otherwise fit it directly.
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
Data.
value : float
Value to use for the dummy feature.
Returns
-------
X : array or scipy.sparse matrix with shape [n_samples, n_features + 1]
Same data with dummy feature added as first column.
Examples
--------
>>> from sklearn.preprocessing import add_dummy_feature
>>> add_dummy_feature([[0, 1], [1, 0]])
array([[ 1., 0., 1.],
[ 1., 1., 0.]])
"""
X = check_array(X, accept_sparse=['csc', 'csr', 'coo'])
n_samples, n_features = X.shape
shape = (n_samples, n_features + 1)
if sparse.issparse(X):
if sparse.isspmatrix_coo(X):
# Shift columns to the right.
col = X.col + 1
# Column indices of dummy feature are 0 everywhere.
col = np.concatenate((np.zeros(n_samples), col))
# Row indices of dummy feature are 0, ..., n_samples-1.
row = np.concatenate((np.arange(n_samples), X.row))
# Prepend the dummy feature n_samples times.
data = np.concatenate((np.ones(n_samples) * value, X.data))
return sparse.coo_matrix((data, (row, col)), shape)
elif sparse.isspmatrix_csc(X):
# Shift index pointers since we need to add n_samples elements.
indptr = X.indptr + n_samples
# indptr[0] must be 0.
indptr = np.concatenate((np.array([0]), indptr))
# Row indices of dummy feature are 0, ..., n_samples-1.
indices = np.concatenate((np.arange(n_samples), X.indices))
# Prepend the dummy feature n_samples times.
data = np.concatenate((np.ones(n_samples) * value, X.data))
return sparse.csc_matrix((data, indices, indptr), shape)
else:
klass = X.__class__
return klass(add_dummy_feature(X.tocoo(), value))
else:
return np.hstack((np.ones((n_samples, 1)) * value, X))
def _transform_selected(X, transform, selected="all", copy=True):
"""Apply a transform function to portion of selected features
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
Dense array or sparse matrix.
transform : callable
A callable transform(X) -> X_transformed
copy : boolean, optional
Copy X even if it could be avoided.
selected: "all" or array of indices or mask
Specify which features to apply the transform to.
Returns
-------
X : array or sparse matrix, shape=(n_samples, n_features_new)
"""
    if isinstance(selected, str) and selected == "all":
return transform(X)
X = check_array(X, accept_sparse='csc', copy=copy)
if len(selected) == 0:
return X
n_features = X.shape[1]
ind = np.arange(n_features)
sel = np.zeros(n_features, dtype=bool)
sel[np.asarray(selected)] = True
not_sel = np.logical_not(sel)
n_selected = np.sum(sel)
if n_selected == 0:
# No features selected.
return X
elif n_selected == n_features:
# All features selected.
return transform(X)
else:
X_sel = transform(X[:, ind[sel]])
X_not_sel = X[:, ind[not_sel]]
if sparse.issparse(X_sel) or sparse.issparse(X_not_sel):
return sparse.hstack((X_sel, X_not_sel))
else:
return np.hstack((X_sel, X_not_sel))
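The mask juggling in `_transform_selected` is easier to see on a toy dense example; here the "transform" is simply multiplication by 10 on the selected columns:

```python
import numpy as np

X = np.arange(12).reshape(3, 4)
selected = [1, 3]
sel = np.zeros(X.shape[1], dtype=bool)
sel[np.asarray(selected)] = True
# Transformed (selected) columns are stacked to the left of the
# untouched ones, matching the np.hstack branch above.
result = np.hstack((X[:, sel] * 10, X[:, ~sel]))
```

Note this reordering is why OneHotEncoder documents that non-categorical features are always stacked to the right of the matrix.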
class OneHotEncoder(BaseEstimator, TransformerMixin):
"""Encode categorical integer features using a one-hot aka one-of-K scheme.
The input to this transformer should be a matrix of integers, denoting
the values taken on by categorical (discrete) features. The output will be
a sparse matrix where each column corresponds to one possible value of one
feature. It is assumed that input features take on values in the range
[0, n_values).
This encoding is needed for feeding categorical data to many scikit-learn
estimators, notably linear models and SVMs with the standard kernels.
Parameters
----------
n_values : 'auto', int or array of ints
Number of values per feature.
- 'auto' : determine value range from training data.
- int : maximum value for all features.
- array : maximum value per feature.
categorical_features: "all" or array of indices or mask
Specify what features are treated as categorical.
- 'all' (default): All features are treated as categorical.
- array of indices: Array of categorical feature indices.
- mask: Array of length n_features and with dtype=bool.
Non-categorical features are always stacked to the right of the matrix.
dtype : number type, default=np.float
Desired dtype of output.
sparse : boolean, default=True
Will return sparse matrix if set True else will return an array.
    handle_unknown : str, 'error' or 'ignore'
        Whether to raise an error or ignore if an unknown categorical feature
        is present during transform.
Attributes
----------
active_features_ : array
Indices for active features, meaning values that actually occur
in the training set. Only available when n_values is ``'auto'``.
feature_indices_ : array of shape (n_features,)
Indices to feature ranges.
Feature ``i`` in the original data is mapped to features
from ``feature_indices_[i]`` to ``feature_indices_[i+1]``
(and then potentially masked by `active_features_` afterwards)
n_values_ : array of shape (n_features,)
Maximum number of values per feature.
Examples
--------
Given a dataset with three features and two samples, we let the encoder
find the maximum value per feature and transform the data to a binary
one-hot encoding.
>>> from sklearn.preprocessing import OneHotEncoder
>>> enc = OneHotEncoder()
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], \
[1, 0, 2]]) # doctest: +ELLIPSIS
OneHotEncoder(categorical_features='all', dtype=<... 'float'>,
handle_unknown='error', n_values='auto', sparse=True)
>>> enc.n_values_
array([2, 3, 4])
>>> enc.feature_indices_
array([0, 2, 5, 9])
>>> enc.transform([[0, 1, 1]]).toarray()
array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
See also
--------
sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of
dictionary items (also handles string-valued features).
sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot
encoding of dictionary items or strings.
"""
def __init__(self, n_values="auto", categorical_features="all",
dtype=np.float, sparse=True, handle_unknown='error'):
self.n_values = n_values
self.categorical_features = categorical_features
self.dtype = dtype
self.sparse = sparse
self.handle_unknown = handle_unknown
def fit(self, X, y=None):
"""Fit OneHotEncoder to X.
Parameters
----------
X : array-like, shape=(n_samples, n_feature)
Input array of type int.
Returns
-------
self
"""
self.fit_transform(X)
return self
def _fit_transform(self, X):
"""Assumes X contains only categorical features."""
X = check_array(X, dtype=np.int)
if np.any(X < 0):
raise ValueError("X needs to contain only non-negative integers.")
n_samples, n_features = X.shape
if self.n_values == 'auto':
n_values = np.max(X, axis=0) + 1
elif isinstance(self.n_values, numbers.Integral):
if (np.max(X, axis=0) >= self.n_values).any():
raise ValueError("Feature out of bounds for n_values=%d"
% self.n_values)
n_values = np.empty(n_features, dtype=np.int)
n_values.fill(self.n_values)
else:
try:
n_values = np.asarray(self.n_values, dtype=int)
except (ValueError, TypeError):
                raise TypeError("Wrong type for parameter `n_values`. Expected"
                                " 'auto', int or array of ints, got %r"
                                % type(self.n_values))
if n_values.ndim < 1 or n_values.shape[0] != X.shape[1]:
raise ValueError("Shape mismatch: if n_values is an array,"
" it has to be of shape (n_features,).")
self.n_values_ = n_values
n_values = np.hstack([[0], n_values])
indices = np.cumsum(n_values)
self.feature_indices_ = indices
column_indices = (X + indices[:-1]).ravel()
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)
data = np.ones(n_samples * n_features)
out = sparse.coo_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if self.n_values == 'auto':
mask = np.array(out.sum(axis=0)).ravel() != 0
active_features = np.where(mask)[0]
out = out[:, active_features]
self.active_features_ = active_features
return out if self.sparse else out.toarray()
def fit_transform(self, X, y=None):
"""Fit OneHotEncoder to X, then transform X.
Equivalent to self.fit(X).transform(X), but more convenient and more
efficient. See fit for the parameters, transform for the return value.
"""
return _transform_selected(X, self._fit_transform,
self.categorical_features, copy=True)
def _transform(self, X):
"""Assumes X contains only categorical features."""
X = check_array(X, dtype=np.int)
if np.any(X < 0):
raise ValueError("X needs to contain only non-negative integers.")
n_samples, n_features = X.shape
indices = self.feature_indices_
if n_features != indices.shape[0] - 1:
raise ValueError("X has different shape than during fitting."
" Expected %d, got %d."
% (indices.shape[0] - 1, n_features))
        # We use only those categorical features of X that are known from fit,
        # i.e. those less than n_values_, selected via mask.
        # This means that if self.handle_unknown is "ignore", the row_indices
        # and col_indices corresponding to unknown categorical features are
        # ignored.
mask = (X < self.n_values_).ravel()
if np.any(~mask):
if self.handle_unknown not in ['error', 'ignore']:
                raise ValueError("handle_unknown should be either 'error' or "
                                 "'ignore', got %s" % self.handle_unknown)
if self.handle_unknown == 'error':
raise ValueError("unknown categorical feature present %s "
"during transform." % X[~mask])
column_indices = (X + indices[:-1]).ravel()[mask]
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)[mask]
data = np.ones(np.sum(mask))
out = sparse.coo_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if self.n_values == 'auto':
out = out[:, self.active_features_]
return out if self.sparse else out.toarray()
def transform(self, X):
"""Transform X using one-hot encoding.
Parameters
----------
X : array-like, shape=(n_samples, n_features)
Input array of type int.
Returns
-------
X_out : sparse matrix if sparse=True else a 2-d array, dtype=int
Transformed input.
"""
return _transform_selected(X, self._transform,
self.categorical_features, copy=True)
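The index arithmetic in `_fit_transform`/`_transform` can be replayed by hand for the docstring example: each categorical column ``i`` is offset by ``feature_indices_[i]`` so that all features share one output column space. A NumPy sketch (dense output instead of a COO matrix, for readability):

```python
import numpy as np

X = np.array([[0, 1, 1]])                # one sample, three features
n_values = np.array([2, 3, 4])           # as in enc.n_values_ above
indices = np.cumsum(np.hstack([[0], n_values]))   # feature_indices_
column_indices = (X + indices[:-1]).ravel()       # offset category ids
row_indices = np.repeat(np.arange(X.shape[0]), X.shape[1])
out = np.zeros((X.shape[0], indices[-1]))
out[row_indices, column_indices] = 1.0
```

This reproduces the docstring result `[[1, 0, 0, 1, 0, 0, 1, 0, 0]]` for `transform([[0, 1, 1]])`.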
| bsd-3-clause |
LaurenLuoYun/losslessh264 | plot_prior_misses.py | 40 | 1124 | # Run h264dec on a single file compiled with PRIOR_STATS and then run this script
# Outputs timeseries plot at /tmp/misses.pdf
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import os
def temporal_misses(key):
values = data[key]
numbins = 100
binsize = len(values) // numbins
bins = [[]]
for v in values:
if len(bins[-1]) >= binsize:
bins.append([])
bins[-1].append(v)
x = range(len(bins))
total_misses = float(sum(values))
y = [100 * float(sum(b)) / total_misses for b in bins]
return plt.plot(x, y, label=key)[0]
paths = filter(lambda s: 'misses.log' in s, os.listdir('/tmp/'))
data = {p.split('_misses.')[0]: map(lambda c: c == '0', open('/tmp/' + p).read()) for p in paths}
handles = []
plt.figure(figsize=(20,10))
keys = data.keys()
for k in keys:
handles.append(temporal_misses(k))
plt.axis((0, 100, 0, 2))
plt.xlabel('temporal %')
plt.ylabel('% total misses')
plt.legend(handles, keys, bbox_to_anchor=(1, 1), bbox_transform=plt.gcf().transFigure)
out = PdfPages('/tmp/misses.pdf')
out.savefig()
out.close()
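The binning inside ``temporal_misses`` can be sketched with slicing instead of the append loop (hypothetical miss log; each share is a bin's percentage of all misses):

```python
# Hypothetical miss log: True marks a prior miss.
values = [True, False, True, True] * 25
numbins = 4
binsize = len(values) // numbins
bins = [values[i:i + binsize] for i in range(0, len(values), binsize)]
total_misses = float(sum(values))
shares = [100 * sum(b) / total_misses for b in bins]
```

The shares sum to 100% by construction, which is what makes the per-context curves in the PDF comparable.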
| bsd-2-clause |
mph-/lcapy | lcapy/circuitgraph.py | 1 | 12275 | """
This module provides a class to represent circuits as graphs.
This is primarily for loop analysis but is also used for nodal analysis.
Copyright 2019--2021 Michael Hayes, UCECE
"""
from matplotlib.pyplot import subplots, savefig
import networkx as nx
# V1 1 0 {u(t)}; down
# R1 1 2; right=2
# L1 2 3; down=2
# W1 0 3; right
# W 1 5; up
# W 2 6; up
# C1 5 6; right=2
class CircuitGraph(object):
def __init__(self, cct, G=None):
self.cct = cct
self.node_map = cct.node_map
if G is not None:
self.G = G
return
self.G = nx.Graph()
dummy = 0
# Dummy nodes are used to avoid parallel edges.
dummy_nodes = {}
self.G.add_nodes_from(cct.node_list)
node_map = cct.node_map
for name in cct.branch_list:
elt = cct.elements[name]
if len(elt.nodenames) < 2:
continue
nodename1 = node_map[elt.nodenames[0]]
nodename2 = node_map[elt.nodenames[1]]
if self.G.has_edge(nodename1, nodename2):
# Add dummy node in graph to avoid parallel edges.
dummynode = '*%d' % dummy
dummycpt = 'W%d' % dummy
self.G.add_edge(nodename1, dummynode, name=name)
self.G.add_edge(dummynode, nodename2, name=dummycpt)
dummy_nodes[dummynode] = nodename2
dummy += 1
else:
self.G.add_edge(nodename1, nodename2, name=name)
def connected_cpts(self, node):
"""Components connected to specified node."""
for node1, edges in self.node_edges(node).items():
if node1.startswith('*'):
for elt in self.connected_cpts(node1):
yield elt
continue
for key, edge in edges.items():
if not edge.startswith('W'):
elt = self.cct.elements[edge]
yield elt
def connected(self, node):
"""Set of component names connected to specified node."""
return set([cpt.name for cpt in self.connected_cpts(node)])
def all_loops(self):
# This adds forward and backward edges.
DG = nx.DiGraph(self.G)
cycles = list(nx.simple_cycles(DG))
loops = []
for cycle in cycles:
if len(cycle) <= 2:
continue
cycle = sorted(cycle)
if cycle not in loops:
loops.append(cycle)
return loops
def chordless_loops(self):
loops = self.all_loops()
sets = [set(loop) for loop in loops]
DG = nx.DiGraph(self.G)
rejects = []
for i in range(len(sets)):
# Reject loops with chords.
loop = loops[i]
if len(loop) == 2:
continue
for j in range(i + 1, len(sets)):
if sets[i].issubset(sets[j]):
rejects.append(j)
elif sets[j].issubset(sets[i]):
rejects.append(i)
cloops = []
for i, loop in enumerate(loops):
if i not in rejects:
cloops.append(loop)
return cloops
def cut_sets(self):
"""Return list of cut sets. Each cut set is a set of nodes describing
a sub-graph G'. Removing all the edges of G' from the graph
disconnects it. This will fail if there are unconnected
components."""
# It may be better to return a list of sets of edges.
if hasattr(self, '_cutsets'):
return self._cutsets
self._cutsets = list(nx.all_node_cuts(self.G))
return self._cutsets
def cut_vertices(self):
"""Return list of cut vertices. Each cut vertex is a node
that if removed, with its edges, disconnects the graph."""
return list(nx.articulation_points(self.G))
def cut_edges(self):
"""Return list of cut edges. Each cut edge is an edge that
disconnects the graph if removed."""
return list(nx.minimum_edge_cut(self.G))
def loops(self):
"""Return list of loops: Each loop is a list of nodes."""
if hasattr(self, '_loops'):
return self._loops
self._loops = self.chordless_loops()
return self._loops
def loops_by_cpt_name(self):
"""Return list of loops specified by cpt name."""
ret = []
for loop in self.loops():
foo = []
for m in range(len(loop) - 1):
cpt = self.component(loop[m + 1], loop[m])
if cpt is None:
continue
foo.append(cpt.name)
foo.append(self.component(loop[-1], loop[0]).name)
ret.append(foo)
return ret
def draw(self, filename=None):
"""Use matplotlib to draw circuit graph."""
fig, ax = subplots(1)
G = self.G
pos = nx.spring_layout(G)
labels = dict(zip(G.nodes(), G.nodes()))
nx.draw_networkx(G, pos, ax, labels=labels)
edge_labels = dict([((u, v), d['name'])
for u, v, d in G.edges(data=True)])
nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)
if filename is not None:
savefig(filename, bbox_inches='tight')
@property
def is_planar(self):
"""Return True for a planar network."""
return nx.check_planarity(self.G)[0]
@property
def nodes(self):
"""Return nodes comprising network."""
return self.G.nodes()
def node_edges(self, node):
"""Return edges connected to specified node."""
return self.G[node]
def component(self, node1, node2):
"""Return component connected between specified nodes."""
name = self.G.get_edge_data(node1, node2)['name']
if name.startswith('W'):
return None
return self.cct.elements[name]
def loops_for_cpt(self, elt):
"""Return list of tuples; one for each loop. The first element of the
tuple is the loop the cpt belongs to or an empty list; the
second element indicates the cpt direction compared to the
loop direction."""
loops = self.loops()
cloops = []
# Map node names to equipotential node names.
nodenames = [self.cct.node_map[nodename] for nodename in elt.nodenames]
for n, loop in enumerate(loops):
loop1 = loop.copy()
loop1.append(loop1[0])
def find(loop1, nodename1, nodename2):
for m in range(len(loop1) - 1):
if (nodename1 == loop1[m] and
nodename2 == loop1[m + 1]):
return True
if find(loop1, nodenames[0], nodenames[1]):
cloops.append((loop, False))
elif find(loop1, nodenames[1], nodenames[0]):
cloops.append((loop, True))
else:
cloops.append(([], None))
return cloops
@property
def components(self):
"""Return list of component names."""
return [d['name'] for n1, n2, d in self.G.edges(data=True)]
def in_series(self, cpt_name):
"""Return set of component names in series with cpt including itself."""
cct = self.cct
elt = cct.elements[cpt_name]
nodenames = [cct.node_map[nodename] for nodename in elt.nodenames]
series = []
series.append(cpt_name)
def follow(node):
neighbours = self.G[node]
if len(neighbours) > 2:
return
for n, e in neighbours.items():
if not e['name'] in series:
series.append(e['name'])
follow(n)
follow(nodenames[0])
follow(nodenames[1])
# If only have two components in parallel, they will be
# detected as a series connection. However, if there is a
# dummy wire, the components are in parallel.
for name in series:
if name.startswith('W'):
return set((cpt_name, ))
return set(series)
def in_parallel(self, cpt_name):
"""Return set of component names in parallel with cpt including itself."""
cct = self.cct
elt = cct.elements[cpt_name]
nodenames = [cct.node_map[nodename] for nodename in elt.nodenames]
n1, n2 = nodenames[0:2]
        # This is trivial for a multigraph, but a multigraph adds additional
        # problems since component() will fail if there are multiple edges
        # between the same nodes.
# edges = self.get_edge_data(n1, n2)
# parallel = [d['name'] for k, d in edges.items()]
parallel = [cpt_name]
neighbours1 = self.G[n1]
neighbours2 = self.G[n2]
# The first created parallel component has no dummy nodes.
        try:
            name = self.G.get_edge_data(n1, n2)['name']
            parallel.append(name)
        except (KeyError, TypeError):
            pass
# If find a dummy node name then there is a parallel component.
for n, e in neighbours1.items():
if n.startswith('*'):
for n3, e3 in self.G[n].items():
if n3 == n2 and not e['name'].startswith('W'):
parallel.append(e['name'])
for n, e in neighbours2.items():
if n.startswith('*'):
for n3, e3 in self.G[n].items():
if n3 == n1 and not e['name'].startswith('W'):
parallel.append(e['name'])
if n1.startswith('*'):
for n3, e3 in self.G[n1].items():
if n3 == n2 and not e['name'].startswith('W'):
parallel.append(e['name'])
if n2.startswith('*'):
for n3, e3 in self.G[n2].items():
if n3 == n1 and not e['name'].startswith('W'):
parallel.append(e['name'])
return set(parallel)
@property
def node_connectivity(self):
"""Return node connectivity for graph. If the connectivity is 0,
then there are disconnected components. If there is a component
with a single connected node, the connectivity is 1."""
return nx.node_connectivity(self.G)
@property
def is_connected(self):
"""Return True if all components are connected."""
return self.node_connectivity != 0
def tree(self):
"""Return minimum spanning tree. A tree has no loops so no current flows."""
T = nx.minimum_spanning_tree(self.G)
return CircuitGraph(self.cct, T)
def links(self):
"""Return links; the graph of the edges that are not in the minimum
spanning tree."""
G = self.G
T = self.tree().G
G_edges = set(G.edges())
T_edges = set(T.edges())
L_edges = G_edges - T_edges
L = nx.Graph()
for edge in L_edges:
data = G.get_edge_data(*edge)
L.add_edge(*edge, name=data['name'])
return CircuitGraph(self.cct, L)
@property
def num_parts(self):
if self.is_connected:
return 1
raise ValueError('TODO, calculate number of separate graphs')
@property
def num_nodes(self):
"""The number of nodes in the graph."""
return len(self.G.nodes)
@property
def num_branches(self):
"""The number of branches (edges) in the graph."""
return len(self.G.edges)
@property
def rank(self):
"""The required number of node voltages for nodal analysis."""
return self.num_nodes - self.num_parts
@property
def nullity(self):
"""For a planar circuit, this is equal to the number of meshes in the graph."""
return self.num_branches - self.num_nodes + self.num_parts
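Rank and nullity as defined above always sum to the number of branches; a quick numeric check (the values here are for a hypothetical square graph with one diagonal):

```python
# rank = nodes - parts, nullity = branches - nodes + parts,
# so rank + nullity = branches for any graph.
num_nodes, num_branches, num_parts = 4, 5, 1  # square with one diagonal
rank = num_nodes - num_parts
nullity = num_branches - num_nodes + num_parts
print(rank, nullity)  # 3 2
assert rank + nullity == num_branches
```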
# License of the preceding file: LGPL-2.1

# File: sklearn/decomposition/tests/test_sparse_pca.py (repo: idlead/scikit-learn)
# Author: Vlad Niculae
# License: BSD 3 clause
import sys
import numpy as np
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import SkipTest
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_false
from sklearn.utils.testing import if_safe_multiprocessing_with_blas
from sklearn.decomposition import SparsePCA, MiniBatchSparsePCA
from sklearn.utils import check_random_state
def generate_toy_data(n_components, n_samples, image_size, random_state=None):
n_features = image_size[0] * image_size[1]
rng = check_random_state(random_state)
U = rng.randn(n_samples, n_components)
V = rng.randn(n_components, n_features)
centers = [(3, 3), (6, 7), (8, 1)]
sz = [1, 2, 1]
for k in range(n_components):
img = np.zeros(image_size)
xmin, xmax = centers[k][0] - sz[k], centers[k][0] + sz[k]
ymin, ymax = centers[k][1] - sz[k], centers[k][1] + sz[k]
img[xmin:xmax][:, ymin:ymax] = 1.0
V[k, :] = img.ravel()
# Y is defined by : Y = UV + noise
Y = np.dot(U, V)
Y += 0.1 * rng.randn(Y.shape[0], Y.shape[1]) # Add noise
return Y, U, V
# SparsePCA can be a bit slow. To avoid having test times go up, we
# test different aspects of the code in the same test
def test_correct_shapes():
rng = np.random.RandomState(0)
X = rng.randn(12, 10)
spca = SparsePCA(n_components=8, random_state=rng)
U = spca.fit_transform(X)
assert_equal(spca.components_.shape, (8, 10))
assert_equal(U.shape, (12, 8))
# test overcomplete decomposition
spca = SparsePCA(n_components=13, random_state=rng)
U = spca.fit_transform(X)
assert_equal(spca.components_.shape, (13, 10))
assert_equal(U.shape, (12, 13))
def test_fit_transform():
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = SparsePCA(n_components=3, method='lars', alpha=alpha,
random_state=0)
spca_lars.fit(Y)
# Test that CD gives similar results
spca_lasso = SparsePCA(n_components=3, method='cd', random_state=0,
alpha=alpha)
spca_lasso.fit(Y)
assert_array_almost_equal(spca_lasso.components_, spca_lars.components_)
@if_safe_multiprocessing_with_blas
def test_fit_transform_parallel():
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = SparsePCA(n_components=3, method='lars', alpha=alpha,
random_state=0)
spca_lars.fit(Y)
U1 = spca_lars.transform(Y)
# Test multiple CPUs
spca = SparsePCA(n_components=3, n_jobs=2, method='lars', alpha=alpha,
random_state=0).fit(Y)
U2 = spca.transform(Y)
assert_true(not np.all(spca_lars.components_ == 0))
assert_array_almost_equal(U1, U2)
def test_transform_nan():
    # Test that SparsePCA won't return NaN when a feature is zero in
    # all samples.
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
Y[:, 0] = 0
estimator = SparsePCA(n_components=8)
assert_false(np.any(np.isnan(estimator.fit_transform(Y))))
def test_fit_transform_tall():
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 65, (8, 8), random_state=rng) # tall array
spca_lars = SparsePCA(n_components=3, method='lars',
random_state=rng)
U1 = spca_lars.fit_transform(Y)
spca_lasso = SparsePCA(n_components=3, method='cd', random_state=rng)
U2 = spca_lasso.fit(Y).transform(Y)
assert_array_almost_equal(U1, U2)
def test_initialization():
rng = np.random.RandomState(0)
U_init = rng.randn(5, 3)
V_init = rng.randn(3, 4)
model = SparsePCA(n_components=3, U_init=U_init, V_init=V_init, max_iter=0,
random_state=rng)
model.fit(rng.randn(5, 4))
assert_array_equal(model.components_, V_init)
def test_mini_batch_correct_shapes():
rng = np.random.RandomState(0)
X = rng.randn(12, 10)
pca = MiniBatchSparsePCA(n_components=8, random_state=rng)
U = pca.fit_transform(X)
assert_equal(pca.components_.shape, (8, 10))
assert_equal(U.shape, (12, 8))
# test overcomplete decomposition
pca = MiniBatchSparsePCA(n_components=13, random_state=rng)
U = pca.fit_transform(X)
assert_equal(pca.components_.shape, (13, 10))
assert_equal(U.shape, (12, 13))
def test_mini_batch_fit_transform():
raise SkipTest("skipping mini_batch_fit_transform.")
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = MiniBatchSparsePCA(n_components=3, random_state=0,
alpha=alpha).fit(Y)
U1 = spca_lars.transform(Y)
# Test multiple CPUs
if sys.platform == 'win32': # fake parallelism for win32
import sklearn.externals.joblib.parallel as joblib_par
_mp = joblib_par.multiprocessing
joblib_par.multiprocessing = None
try:
U2 = MiniBatchSparsePCA(n_components=3, n_jobs=2, alpha=alpha,
random_state=0).fit(Y).transform(Y)
finally:
joblib_par.multiprocessing = _mp
else: # we can efficiently use parallelism
U2 = MiniBatchSparsePCA(n_components=3, n_jobs=2, alpha=alpha,
random_state=0).fit(Y).transform(Y)
assert_true(not np.all(spca_lars.components_ == 0))
assert_array_almost_equal(U1, U2)
# Test that CD gives similar results
spca_lasso = MiniBatchSparsePCA(n_components=3, method='cd', alpha=alpha,
random_state=0).fit(Y)
assert_array_almost_equal(spca_lasso.components_, spca_lars.components_)
# License of the preceding file: BSD-3-Clause

# File: doom/lib/python3.5/site-packages/dask/array/tests/test_slicing.py (repo: jeffery-do/Vizdoombot)
import pytest
pytest.importorskip('numpy')
import itertools
from operator import getitem
from dask.compatibility import skip
import dask.array as da
from dask.array.slicing import (slice_array, _slice_1d, take, new_blockdim,
sanitize_index)
from dask.array.utils import assert_eq
import numpy as np
from toolz import merge
def same_keys(a, b):
def key(k):
if isinstance(k, str):
return (k, -1, -1, -1)
else:
return k
return sorted(a.dask, key=key) == sorted(b.dask, key=key)
def test_slice_1d():
expected = {0: slice(10, 25, 1), 1: slice(None, None, None), 2: slice(0, 1, 1)}
result = _slice_1d(100, [25] * 4, slice(10, 51, None))
assert expected == result
# x[100:12:-3]
expected = {0: slice(-2, -8, -3),
1: slice(-1, -21, -3),
2: slice(-3, -21, -3),
3: slice(-2, -21, -3),
4: slice(-1, -21, -3)}
result = _slice_1d(100, [20] * 5, slice(100, 12, -3))
assert expected == result
# x[102::-3]
expected = {0: slice(-2, -21, -3),
1: slice(-1, -21, -3),
2: slice(-3, -21, -3),
3: slice(-2, -21, -3),
4: slice(-1, -21, -3)}
result = _slice_1d(100, [20] * 5, slice(102, None, -3))
assert expected == result
# x[::-4]
expected = {0: slice(-1, -21, -4),
1: slice(-1, -21, -4),
2: slice(-1, -21, -4),
3: slice(-1, -21, -4),
4: slice(-1, -21, -4)}
result = _slice_1d(100, [20] * 5, slice(None, None, -4))
assert expected == result
# x[::-7]
expected = {0: slice(-5, -21, -7),
1: slice(-4, -21, -7),
2: slice(-3, -21, -7),
3: slice(-2, -21, -7),
4: slice(-1, -21, -7)}
result = _slice_1d(100, [20] * 5, slice(None, None, -7))
assert expected == result
# x=range(115)
# x[::-7]
expected = {0: slice(-7, -24, -7),
1: slice(-2, -24, -7),
2: slice(-4, -24, -7),
3: slice(-6, -24, -7),
4: slice(-1, -24, -7)}
result = _slice_1d(115, [23] * 5, slice(None, None, -7))
assert expected == result
# x[79::-3]
expected = {0: slice(-1, -21, -3),
1: slice(-3, -21, -3),
2: slice(-2, -21, -3),
3: slice(-1, -21, -3)}
result = _slice_1d(100, [20] * 5, slice(79, None, -3))
assert expected == result
# x[-1:-8:-1]
expected = {4: slice(-1, -8, -1)}
result = _slice_1d(100, [20, 20, 20, 20, 20], slice(-1, 92, -1))
assert expected == result
# x[20:0:-1]
expected = {0: slice(-1, -20, -1),
1: slice(-20, -21, -1)}
result = _slice_1d(100, [20, 20, 20, 20, 20], slice(20, 0, -1))
assert expected == result
# x[:0]
expected = {}
result = _slice_1d(100, [20, 20, 20, 20, 20], slice(0))
    assert expected == result
# x=range(99)
expected = {0: slice(-3, -21, -3),
1: slice(-2, -21, -3),
2: slice(-1, -21, -3),
3: slice(-2, -20, -3),
4: slice(-1, -21, -3)}
# This array has non-uniformly sized blocks
result = _slice_1d(99, [20, 20, 20, 19, 20], slice(100, None, -3))
assert expected == result
# x=range(104)
# x[::-3]
expected = {0: slice(-1, -21, -3),
1: slice(-3, -24, -3),
2: slice(-3, -28, -3),
3: slice(-1, -14, -3),
4: slice(-1, -22, -3)}
# This array has non-uniformly sized blocks
result = _slice_1d(104, [20, 23, 27, 13, 21], slice(None, None, -3))
assert expected == result
# x=range(104)
# x[:27:-3]
expected = {1: slice(-3, -16, -3),
2: slice(-3, -28, -3),
3: slice(-1, -14, -3),
4: slice(-1, -22, -3)}
# This array has non-uniformly sized blocks
result = _slice_1d(104, [20, 23, 27, 13, 21], slice(None, 27, -3))
assert expected == result
# x=range(104)
# x[100:27:-3]
expected = {1: slice(-3, -16, -3),
2: slice(-3, -28, -3),
3: slice(-1, -14, -3),
4: slice(-4, -22, -3)}
# This array has non-uniformly sized blocks
result = _slice_1d(104, [20, 23, 27, 13, 21], slice(100, 27, -3))
assert expected == result
def test_slice_singleton_value_on_boundary():
assert _slice_1d(15, [5, 5, 5], 10) == {2: 0}
assert _slice_1d(30, (5, 5, 5, 5, 5, 5), 10) == {2: 0}
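The singleton cases above reduce to mapping a flat index to a (block, offset) pair.  A minimal sketch of that mapping (the `locate` helper is hypothetical, not dask's API):

```python
from itertools import accumulate

def locate(chunks, index):
    """Map a flat index into (block number, index within block) for a
    1-D array split into blocks of the given sizes."""
    for block, stop in enumerate(accumulate(chunks)):
        if index < stop:
            return block, index - (stop - chunks[block])
    raise IndexError(index)

# Matches _slice_1d(15, [5, 5, 5], 10) == {2: 0}:
print(locate([5, 5, 5], 10))     # (2, 0)
print(locate([20, 23, 27], 41))  # (1, 21)
```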
def test_slice_array_1d():
#x[24::2]
expected = {('y', 0): (getitem, ('x', 0), (slice(24, 25, 2),)),
('y', 1): (getitem, ('x', 1), (slice(1, 25, 2),)),
('y', 2): (getitem, ('x', 2), (slice(0, 25, 2),)),
('y', 3): (getitem, ('x', 3), (slice(1, 25, 2),))}
result, chunks = slice_array('y', 'x', [[25] * 4], [slice(24, None, 2)])
assert expected == result
#x[26::2]
expected = {('y', 0): (getitem, ('x', 1), (slice(1, 25, 2),)),
('y', 1): (getitem, ('x', 2), (slice(0, 25, 2),)),
('y', 2): (getitem, ('x', 3), (slice(1, 25, 2),))}
result, chunks = slice_array('y', 'x', [[25] * 4], [slice(26, None, 2)])
assert expected == result
#x[24::2]
expected = {('y', 0): (getitem, ('x', 0), (slice(24, 25, 2),)),
('y', 1): (getitem, ('x', 1), (slice(1, 25, 2),)),
('y', 2): (getitem, ('x', 2), (slice(0, 25, 2),)),
('y', 3): (getitem, ('x', 3), (slice(1, 25, 2),))}
result, chunks = slice_array('y', 'x', [(25, ) * 4], (slice(24, None, 2), ))
assert expected == result
#x[26::2]
expected = {('y', 0): (getitem, ('x', 1), (slice(1, 25, 2),)),
('y', 1): (getitem, ('x', 2), (slice(0, 25, 2),)),
('y', 2): (getitem, ('x', 3), (slice(1, 25, 2),))}
result, chunks = slice_array('y', 'x', [(25, ) * 4], (slice(26, None, 2), ))
assert expected == result
def test_slice_array_2d():
#2d slices: x[13::2,10::1]
expected = {('y', 0, 0): (getitem, ('x', 0, 0),
(slice(13, 20, 2), slice(10, 20, 1))),
('y', 0, 1): (getitem, ('x', 0, 1),
(slice(13, 20, 2), slice(None, None, None))),
('y', 0, 2): (getitem, ('x', 0, 2),
(slice(13, 20, 2), slice(None, None, None)))}
result, chunks = slice_array('y', 'x', [[20], [20, 20, 5]],
[slice(13, None, 2), slice(10, None, 1)])
assert expected == result
#2d slices with one dimension: x[5,10::1]
expected = {('y', 0): (getitem, ('x', 0, 0),
(5, slice(10, 20, 1))),
('y', 1): (getitem, ('x', 0, 1),
(5, slice(None, None, None))),
('y', 2): (getitem, ('x', 0, 2),
(5, slice(None, None, None)))}
result, chunks = slice_array('y', 'x', ([20], [20, 20, 5]),
[5, slice(10, None, 1)])
assert expected == result
def test_slice_optimizations():
#bar[:]
expected = {('foo', 0): ('bar', 0)}
result, chunks = slice_array('foo', 'bar', [[100]], (slice(None, None, None),))
assert expected == result
#bar[:,:,:]
expected = {('foo', 0): ('bar', 0),
('foo', 1): ('bar', 1),
('foo', 2): ('bar', 2)}
result, chunks = slice_array('foo', 'bar', [(100, 1000, 10000)],
(slice(None, None, None),
slice(None, None, None),
slice(None, None, None)))
assert expected == result
def test_slicing_with_singleton_indices():
result, chunks = slice_array('y', 'x', ([5, 5], [5, 5]), (slice(0, 5), 8))
expected = {('y', 0): (getitem, ('x', 0, 1), (slice(None, None, None), 3))}
assert expected == result
def test_slicing_with_newaxis():
result, chunks = slice_array('y', 'x', ([5, 5], [5, 5]),
(slice(0, 3), None, slice(None, None, None)))
expected = {
('y', 0, 0, 0): (getitem, ('x', 0, 0),
(slice(0, 3, 1), None, slice(None, None, None))),
('y', 0, 0, 1): (getitem, ('x', 0, 1),
(slice(0, 3, 1), None, slice(None, None, None)))}
assert expected == result
assert chunks == ((3,), (1,), (5, 5))
def test_take():
chunks, dsk = take('y', 'x', [(20, 20, 20, 20)], [5, 1, 47, 3], axis=0)
expected = {('y', 0): (getitem, (np.concatenate,
[(getitem, ('x', 0), ([1, 3, 5],)),
(getitem, ('x', 2), ([7],))], 0),
([2, 0, 3, 1], ))}
assert dsk == expected
assert chunks == ((4,),)
chunks, dsk = take('y', 'x', [(20, 20, 20, 20), (20, 20)], [5, 1, 47, 3], axis=0)
expected = {('y', 0, j): (getitem, (np.concatenate,
[(getitem, ('x', 0, j),
([1, 3, 5], slice(None, None, None))),
(getitem, ('x', 2, j),
([7], slice(None, None, None)))], 0),
([2, 0, 3, 1], slice(None, None, None)))
for j in range(2)}
assert dsk == expected
assert chunks == ((4,), (20, 20))
chunks, dsk = take('y', 'x', [(20, 20, 20, 20), (20, 20)], [5, 1, 37, 3], axis=1)
expected = {('y', i, 0): (getitem, (np.concatenate,
[(getitem, ('x', i, 0),
(slice(None, None, None), [1, 3, 5])),
(getitem, ('x', i, 1),
(slice(None, None, None), [17]))], 1),
(slice(None, None, None), [2, 0, 3, 1]))
for i in range(4)}
assert dsk == expected
assert chunks == ((20, 20, 20, 20), (4,))
def test_take_sorted():
chunks, dsk = take('y', 'x', [(20, 20, 20, 20)], [1, 3, 5, 47], axis=0)
expected = {('y', 0): (getitem, ('x', 0), ([1, 3, 5],)),
('y', 1): (getitem, ('x', 2), ([7],))}
assert dsk == expected
assert chunks == ((3, 1),)
chunks, dsk = take('y', 'x', [(20, 20, 20, 20), (20, 20)], [1, 3, 5, 37], axis=1)
expected = merge(dict((('y', i, 0), (getitem, ('x', i, 0),
(slice(None, None, None), [1, 3, 5])))
for i in range(4)),
dict((('y', i, 1), (getitem, ('x', i, 1),
(slice(None, None, None), [17])))
for i in range(4)))
assert dsk == expected
assert chunks == ((20, 20, 20, 20), (3, 1))
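When the requested indices are already sorted, take() never needs to reorder across chunks.  A minimal sketch of grouping sorted global indices by chunk (the `take_per_chunk` helper is hypothetical):

```python
def take_per_chunk(chunks, indices):
    """Group sorted global indices by the chunk they fall in and
    convert them to chunk-local offsets."""
    out, start = {}, 0
    for block, size in enumerate(chunks):
        local = [i - start for i in indices if start <= i < start + size]
        if local:
            out[block] = local
        start += size
    return out

# Matches the sorted take above: 1, 3, 5 fall in chunk 0; 47 -> 7 in chunk 2.
print(take_per_chunk([20, 20, 20, 20], [1, 3, 5, 47]))  # {0: [1, 3, 5], 2: [7]}
```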
def test_slice_lists():
y, chunks = slice_array('y', 'x', ((3, 3, 3, 1), (3, 3, 3, 1)),
([2, 1, 9], slice(None, None, None)))
exp = {('y', 0, i): (getitem, (np.concatenate,
[(getitem, ('x', 0, i),
([1, 2], slice(None, None, None))),
(getitem, ('x', 3, i),
([0], slice(None, None, None)))], 0),
([1, 0, 2], slice(None, None, None)))
for i in range(4)}
assert y == exp
assert chunks == ((3,), (3, 3, 3, 1))
def test_slicing_chunks():
result, chunks = slice_array('y', 'x', ([5, 5], [5, 5]),
(1, [2, 0, 3]))
assert chunks == ((3,), )
result, chunks = slice_array('y', 'x', ([5, 5], [5, 5]),
(slice(0, 7), [2, 0, 3]))
assert chunks == ((5, 2), (3, ))
result, chunks = slice_array('y', 'x', ([5, 5], [5, 5]),
(slice(0, 7), 1))
assert chunks == ((5, 2), )
def test_slicing_with_numpy_arrays():
a, bd1 = slice_array('y', 'x', ((3, 3, 3, 1), (3, 3, 3, 1)),
([1, 2, 9], slice(None, None, None)))
b, bd2 = slice_array('y', 'x', ((3, 3, 3, 1), (3, 3, 3, 1)),
(np.array([1, 2, 9]), slice(None, None, None)))
assert bd1 == bd2
assert a == b
i = [False, True, True, False, False,
False, False, False, False, True, False]
c, bd3 = slice_array('y', 'x', ((3, 3, 3, 1), (3, 3, 3, 1)),
(i, slice(None, None, None)))
assert bd1 == bd3
assert a == c
def test_slicing_and_chunks():
o = da.ones((24, 16), chunks=((4, 8, 8, 4), (2, 6, 6, 2)))
t = o[4:-4, 2:-2]
assert t.chunks == ((8, 8), (6, 6))
def test_slice_stop_0():
# from gh-125
a = da.ones(10, chunks=(10,))[:0].compute()
b = np.ones(10)[:0]
assert_eq(a, b)
def test_slice_list_then_None():
x = da.zeros(shape=(5, 5), chunks=(3, 3))
y = x[[2, 1]][None]
assert_eq(y, np.zeros((1, 2, 5)))
class ReturnItem(object):
def __getitem__(self, key):
return key
@skip
def test_slicing_exhaustively():
x = np.random.rand(6, 7, 8)
a = da.from_array(x, chunks=(3, 3, 3))
I = ReturnItem()
# independent indexing along different axes
indexers = [0, -2, I[:], I[:5], [0, 1], [0, 1, 2], [4, 2], I[::-1], None, I[:0], []]
for i in indexers:
assert_eq(x[i], a[i]), i
for j in indexers:
assert_eq(x[i][:, j], a[i][:, j]), (i, j)
assert_eq(x[:, i][j], a[:, i][j]), (i, j)
for k in indexers:
assert_eq(x[..., i][:, j][k], a[..., i][:, j][k]), (i, j, k)
# repeated indexing along the first axis
first_indexers = [I[:], I[:5], np.arange(5), [3, 1, 4, 5, 0], np.arange(6) < 6]
second_indexers = [0, -1, 3, I[:], I[:3], I[2:-1], [2, 4], [], I[:0]]
for i in first_indexers:
for j in second_indexers:
assert_eq(x[i][j], a[i][j]), (i, j)
def test_slicing_with_negative_step_flops_keys():
x = da.arange(10, chunks=5)
y = x[:1:-1]
assert (x.name, 1) in y.dask[(y.name, 0)]
assert (x.name, 0) in y.dask[(y.name, 1)]
assert_eq(y, np.arange(10)[:1:-1])
assert y.chunks == ((5, 3),)
assert y.dask[(y.name, 0)] == (getitem, (x.name, 1),
(slice(-1, -6, -1),))
assert y.dask[(y.name, 1)] == (getitem, (x.name, 0),
(slice(-1, -4, -1),))
def test_empty_slice():
x = da.ones((5, 5), chunks=(2, 2), dtype='i4')
y = x[:0]
assert_eq(y, np.ones((5, 5), dtype='i4')[:0])
def test_multiple_list_slicing():
x = np.random.rand(6, 7, 8)
a = da.from_array(x, chunks=(3, 3, 3))
assert_eq(x[:, [0, 1, 2]][[0, 1]], a[:, [0, 1, 2]][[0, 1]])
def test_uneven_chunks():
assert da.ones(20, chunks=5)[::2].chunks == ((3, 2, 3, 2),)
def test_new_blockdim():
assert new_blockdim(20, [5, 5, 5, 5], slice(0, None, 2)) == [3, 2, 3, 2]
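new_blockdim computes how many elements of a strided slice land in each chunk.  A minimal pure-Python sketch of that count for a non-negative start and positive step (the `strided_chunk_sizes` helper is hypothetical):

```python
def strided_chunk_sizes(chunks, start, step):
    """How many selected elements of x[start::step] land in each chunk
    of a 1-D array with the given chunk sizes (step > 0 assumed)."""
    sizes, offset, nxt = [], 0, start
    for size in chunks:
        count = 0
        while nxt < offset + size:
            count += 1
            nxt += step
        sizes.append(count)
        offset += size
    return sizes

# Matches new_blockdim(20, [5, 5, 5, 5], slice(0, None, 2)) above.
print(strided_chunk_sizes([5, 5, 5, 5], 0, 2))  # [3, 2, 3, 2]
```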
def test_slicing_consistent_names():
x = np.arange(100).reshape((10, 10))
a = da.from_array(x, chunks=(5, 5))
assert same_keys(a[0], a[0])
assert same_keys(a[:, [1, 2, 3]], a[:, [1, 2, 3]])
assert same_keys(a[:, 5:2:-1], a[:, 5:2:-1])
def test_sanitize_index():
pd = pytest.importorskip('pandas')
with pytest.raises(TypeError):
sanitize_index('Hello!')
assert sanitize_index(pd.Series([1, 2, 3])) == [1, 2, 3]
assert sanitize_index((1, 2, 3)) == [1, 2, 3]
def test_uneven_blockdims():
blockdims = ((31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30), (100,))
index = (slice(240, 270), slice(None))
dsk_out, bd_out = slice_array('in', 'out', blockdims, index)
sol = {('in', 0, 0): (getitem, ('out', 7, 0), (slice(28, 31, 1), slice(None))),
('in', 1, 0): (getitem, ('out', 8, 0), (slice(0, 27, 1), slice(None)))}
assert dsk_out == sol
assert bd_out == ((3, 27), (100,))
blockdims = ((31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30),) * 2
index = (slice(240, 270), slice(180, 230))
dsk_out, bd_out = slice_array('in', 'out', blockdims, index)
sol = {('in', 0, 0): (getitem, ('out', 7, 5), (slice(28, 31, 1), slice(29, 30, 1))),
('in', 0, 1): (getitem, ('out', 7, 6), (slice(28, 31, 1), slice(None))),
('in', 0, 2): (getitem, ('out', 7, 7), (slice(28, 31, 1), slice(0, 18, 1))),
('in', 1, 0): (getitem, ('out', 8, 5), (slice(0, 27, 1), slice(29, 30, 1))),
('in', 1, 1): (getitem, ('out', 8, 6), (slice(0, 27, 1), slice(None))),
('in', 1, 2): (getitem, ('out', 8, 7), (slice(0, 27, 1), slice(0, 18, 1)))}
assert dsk_out == sol
assert bd_out == ((3, 27), (1, 31, 18))
def test_oob_check():
x = da.ones(5, chunks=(2,))
with pytest.raises(IndexError):
x[6]
with pytest.raises(IndexError):
x[[6]]
with pytest.raises(IndexError):
x[0, 0]
def test_index_with_dask_array_errors():
x = da.ones((5, 5), chunks=2)
with pytest.raises(NotImplementedError):
x[x > 10]
with pytest.raises(NotImplementedError):
x[0, x > 10]
def test_cull():
x = da.ones(1000, chunks=(10,))
for slc in [1, slice(0, 30), slice(0, None, 100)]:
y = x[slc]
assert len(y.dask) < len(x.dask)
@pytest.mark.parametrize('shape', [(2,), (2, 3), (2, 3, 5)])
@pytest.mark.parametrize('slice', [(Ellipsis,),
(None, Ellipsis),
(Ellipsis, None),
(None, Ellipsis, None)])
def test_slicing_with_Nones(shape, slice):
x = np.random.random(shape)
d = da.from_array(x, chunks=shape)
assert_eq(x[slice], d[slice])
indexers = [Ellipsis, slice(2), 0, 1, -2, -1, slice(-2, None), None]
"""
@pytest.mark.parametrize('a', indexers)
@pytest.mark.parametrize('b', indexers)
@pytest.mark.parametrize('c', indexers)
@pytest.mark.parametrize('d', indexers)
def test_slicing_none_int_ellipses(a, b, c, d):
if (a, b, c, d).count(Ellipsis) > 1:
return
shape = (2,3,5,7,11)
x = np.arange(np.prod(shape)).reshape(shape)
y = da.core.asarray(x)
xx = x[a, b, c, d]
yy = y[a, b, c, d]
assert_eq(xx, yy)
"""
def test_slicing_none_int_ellipses():
shape = (2,3,5,7,11)
x = np.arange(np.prod(shape)).reshape(shape)
y = da.core.asarray(x)
for ind in itertools.product(indexers, indexers, indexers, indexers):
if ind.count(Ellipsis) > 1:
continue
assert_eq(x[ind], y[ind])
def test_None_overlap_int():
a, b, c, d = (0, slice(None, 2, None), None, Ellipsis)
shape = (2,3,5,7,11)
x = np.arange(np.prod(shape)).reshape(shape)
y = da.core.asarray(x)
xx = x[a, b, c, d]
yy = y[a, b, c, d]
assert_eq(xx, yy)
def test_negative_n_slicing():
assert_eq(da.ones(2, chunks=2)[-2], np.ones(2)[-2])
# License of the preceding file: MIT

# File: sklearn/model_selection/tests/test_split.py (repo: JPFrancoia/scikit-learn)
"""Test the split module"""
from __future__ import division
import warnings
import numpy as np
from scipy.sparse import coo_matrix, csc_matrix, csr_matrix
from scipy import stats
from scipy.misc import comb
from itertools import combinations
from sklearn.utils.fixes import combinations_with_replacement
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_false
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raises_regexp
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_greater_equal
from sklearn.utils.testing import assert_not_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.validation import _num_samples
from sklearn.utils.mocking import MockDataFrame
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GroupKFold
from sklearn.model_selection import TimeSeriesSplit
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.model_selection import LeavePOut
from sklearn.model_selection import LeavePGroupsOut
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import GroupShuffleSplit
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection import PredefinedSplit
from sklearn.model_selection import check_cv
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
from sklearn.model_selection._split import _validate_shuffle_split
from sklearn.model_selection._split import _CVIterableWrapper
from sklearn.model_selection._split import _build_repr
from sklearn.datasets import load_digits
from sklearn.datasets import make_classification
from sklearn.externals import six
from sklearn.externals.six.moves import zip
from sklearn.svm import SVC
X = np.ones(10)
y = np.arange(10) // 2
P_sparse = coo_matrix(np.eye(5))
digits = load_digits()
class MockClassifier(object):
"""Dummy classifier to test the cross-validation"""
def __init__(self, a=0, allow_nd=False):
self.a = a
self.allow_nd = allow_nd
def fit(self, X, Y=None, sample_weight=None, class_prior=None,
sparse_sample_weight=None, sparse_param=None, dummy_int=None,
dummy_str=None, dummy_obj=None, callback=None):
"""The dummy arguments are to test that this fit function can
accept non-array arguments through cross-validation, such as:
- int
- str (this is actually array-like)
- object
- function
"""
self.dummy_int = dummy_int
self.dummy_str = dummy_str
self.dummy_obj = dummy_obj
if callback is not None:
callback(self)
if self.allow_nd:
X = X.reshape(len(X), -1)
if X.ndim >= 3 and not self.allow_nd:
            raise ValueError('X cannot have more than 2 dimensions')
if sample_weight is not None:
assert_true(sample_weight.shape[0] == X.shape[0],
'MockClassifier extra fit_param sample_weight.shape[0]'
' is {0}, should be {1}'.format(sample_weight.shape[0],
X.shape[0]))
if class_prior is not None:
assert_true(class_prior.shape[0] == len(np.unique(y)),
'MockClassifier extra fit_param class_prior.shape[0]'
' is {0}, should be {1}'.format(class_prior.shape[0],
len(np.unique(y))))
if sparse_sample_weight is not None:
fmt = ('MockClassifier extra fit_param sparse_sample_weight'
'.shape[0] is {0}, should be {1}')
assert_true(sparse_sample_weight.shape[0] == X.shape[0],
fmt.format(sparse_sample_weight.shape[0], X.shape[0]))
if sparse_param is not None:
fmt = ('MockClassifier extra fit_param sparse_param.shape '
'is ({0}, {1}), should be ({2}, {3})')
assert_true(sparse_param.shape == P_sparse.shape,
fmt.format(sparse_param.shape[0],
sparse_param.shape[1],
P_sparse.shape[0], P_sparse.shape[1]))
return self
def predict(self, T):
if self.allow_nd:
T = T.reshape(len(T), -1)
return T[:, 0]
def score(self, X=None, Y=None):
return 1. / (1 + np.abs(self.a))
def get_params(self, deep=False):
return {'a': self.a, 'allow_nd': self.allow_nd}
@ignore_warnings
def test_cross_validator_with_default_params():
n_samples = 4
n_unique_groups = 4
n_splits = 2
p = 2
n_shuffle_splits = 10 # (the default value)
X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
X_1d = np.array([1, 2, 3, 4])
y = np.array([1, 1, 2, 2])
groups = np.array([1, 2, 3, 4])
loo = LeaveOneOut()
lpo = LeavePOut(p)
kf = KFold(n_splits)
skf = StratifiedKFold(n_splits)
lolo = LeaveOneGroupOut()
lopo = LeavePGroupsOut(p)
ss = ShuffleSplit(random_state=0)
    ps = PredefinedSplit([1, 1, 2, 2])  # n_splits = number of unique folds = 2
loo_repr = "LeaveOneOut()"
lpo_repr = "LeavePOut(p=2)"
kf_repr = "KFold(n_splits=2, random_state=None, shuffle=False)"
skf_repr = "StratifiedKFold(n_splits=2, random_state=None, shuffle=False)"
lolo_repr = "LeaveOneGroupOut()"
lopo_repr = "LeavePGroupsOut(n_groups=2)"
ss_repr = ("ShuffleSplit(n_splits=10, random_state=0, test_size=0.1, "
"train_size=None)")
ps_repr = "PredefinedSplit(test_fold=array([1, 1, 2, 2]))"
n_splits_expected = [n_samples, comb(n_samples, p), n_splits, n_splits,
n_unique_groups, comb(n_unique_groups, p),
n_shuffle_splits, 2]
for i, (cv, cv_repr) in enumerate(zip(
[loo, lpo, kf, skf, lolo, lopo, ss, ps],
[loo_repr, lpo_repr, kf_repr, skf_repr, lolo_repr, lopo_repr,
ss_repr, ps_repr])):
# Test if get_n_splits works correctly
assert_equal(n_splits_expected[i], cv.get_n_splits(X, y, groups))
# Test if the cross-validator works as expected even if
# the data is 1d
np.testing.assert_equal(list(cv.split(X, y, groups)),
list(cv.split(X_1d, y, groups)))
# Test that train, test indices returned are integers
for train, test in cv.split(X, y, groups):
assert_equal(np.asarray(train).dtype.kind, 'i')
            assert_equal(np.asarray(test).dtype.kind, 'i')
# Test if the repr works without any errors
assert_equal(cv_repr, repr(cv))
def check_valid_split(train, test, n_samples=None):
# Use python sets to get more informative assertion failure messages
train, test = set(train), set(test)
# Train and test split should not overlap
assert_equal(train.intersection(test), set())
if n_samples is not None:
        # Check that the union of the train and test splits covers all indices
assert_equal(train.union(test), set(range(n_samples)))
def check_cv_coverage(cv, X, y, groups, expected_n_splits=None):
n_samples = _num_samples(X)
    # Check that all the samples appear at least once in a test fold
if expected_n_splits is not None:
assert_equal(cv.get_n_splits(X, y, groups), expected_n_splits)
else:
expected_n_splits = cv.get_n_splits(X, y, groups)
collected_test_samples = set()
iterations = 0
for train, test in cv.split(X, y, groups):
check_valid_split(train, test, n_samples=n_samples)
iterations += 1
collected_test_samples.update(test)
# Check that the accumulated test samples cover the whole dataset
assert_equal(iterations, expected_n_splits)
if n_samples is not None:
assert_equal(collected_test_samples, set(range(n_samples)))
def test_kfold_valueerrors():
X1 = np.array([[1, 2], [3, 4], [5, 6]])
X2 = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
# Check that errors are raised if there is not enough samples
assert_raises(ValueError, next, KFold(4).split(X1))
# Check that a warning is raised if the least populated class has too few
# members.
y = np.array([3, 3, -1, -1, 3])
skf_3 = StratifiedKFold(3)
assert_warns_message(Warning, "The least populated class",
next, skf_3.split(X2, y))
    # Check that despite the warning the folds are still computed even
    # though all the classes are not necessarily represented on each
    # side of the split at each iteration
with warnings.catch_warnings():
warnings.simplefilter("ignore")
check_cv_coverage(skf_3, X2, y, groups=None, expected_n_splits=3)
    # Check that an error is raised if any individual class has fewer
    # members than n_splits.
y = np.array([3, 3, -1, -1, 2])
assert_raises(ValueError, next, skf_3.split(X2, y))
# Error when number of folds is <= 1
assert_raises(ValueError, KFold, 0)
assert_raises(ValueError, KFold, 1)
error_string = ("k-fold cross-validation requires at least one"
" train/test split")
assert_raise_message(ValueError, error_string,
StratifiedKFold, 0)
assert_raise_message(ValueError, error_string,
StratifiedKFold, 1)
# When n_splits is not integer:
assert_raises(ValueError, KFold, 1.5)
assert_raises(ValueError, KFold, 2.0)
assert_raises(ValueError, StratifiedKFold, 1.5)
assert_raises(ValueError, StratifiedKFold, 2.0)
# When shuffle is not a bool:
assert_raises(TypeError, KFold, n_splits=4, shuffle=None)
def test_kfold_indices():
# Check all indices are returned in the test folds
X1 = np.ones(18)
kf = KFold(3)
check_cv_coverage(kf, X1, y=None, groups=None, expected_n_splits=3)
# Check all indices are returned in the test folds even when equal-sized
# folds are not possible
X2 = np.ones(17)
kf = KFold(3)
check_cv_coverage(kf, X2, y=None, groups=None, expected_n_splits=3)
# Check if get_n_splits returns the number of folds
assert_equal(5, KFold(5).get_n_splits(X2))
def test_kfold_no_shuffle():
# Manually check that KFold preserves the data ordering on toy datasets
X2 = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
splits = KFold(2).split(X2[:-1])
train, test = next(splits)
assert_array_equal(test, [0, 1])
assert_array_equal(train, [2, 3])
train, test = next(splits)
assert_array_equal(test, [2, 3])
assert_array_equal(train, [0, 1])
splits = KFold(2).split(X2)
train, test = next(splits)
assert_array_equal(test, [0, 1, 2])
assert_array_equal(train, [3, 4])
train, test = next(splits)
assert_array_equal(test, [3, 4])
assert_array_equal(train, [0, 1, 2])
def test_stratified_kfold_no_shuffle():
# Manually check that StratifiedKFold preserves the data ordering as much
# as possible on toy datasets in order to avoid hiding sample dependencies
# when possible
X, y = np.ones(4), [1, 1, 0, 0]
splits = StratifiedKFold(2).split(X, y)
train, test = next(splits)
assert_array_equal(test, [0, 2])
assert_array_equal(train, [1, 3])
train, test = next(splits)
assert_array_equal(test, [1, 3])
assert_array_equal(train, [0, 2])
X, y = np.ones(7), [1, 1, 1, 0, 0, 0, 0]
splits = StratifiedKFold(2).split(X, y)
train, test = next(splits)
assert_array_equal(test, [0, 1, 3, 4])
assert_array_equal(train, [2, 5, 6])
train, test = next(splits)
assert_array_equal(test, [2, 5, 6])
assert_array_equal(train, [0, 1, 3, 4])
# Check if get_n_splits returns the number of folds
assert_equal(5, StratifiedKFold(5).get_n_splits(X, y))
def test_stratified_kfold_ratios():
# Check that stratified kfold preserves class ratios in individual splits
# Repeat with shuffling turned off and on
n_samples = 1000
X = np.ones(n_samples)
y = np.array([4] * int(0.10 * n_samples) +
[0] * int(0.89 * n_samples) +
[1] * int(0.01 * n_samples))
for shuffle in (False, True):
for train, test in StratifiedKFold(5, shuffle=shuffle).split(X, y):
assert_almost_equal(np.sum(y[train] == 4) / len(train), 0.10, 2)
assert_almost_equal(np.sum(y[train] == 0) / len(train), 0.89, 2)
assert_almost_equal(np.sum(y[train] == 1) / len(train), 0.01, 2)
assert_almost_equal(np.sum(y[test] == 4) / len(test), 0.10, 2)
assert_almost_equal(np.sum(y[test] == 0) / len(test), 0.89, 2)
assert_almost_equal(np.sum(y[test] == 1) / len(test), 0.01, 2)
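Why stratification preserves class ratios can be illustrated with a simplified round-robin assignment. This is a hypothetical stand-in for StratifiedKFold's allocation, using the same 0.10 / 0.89 / 0.01 ratios as the test above:

```python
from collections import defaultdict

def stratified_fold_ids(y, n_splits):
    """Round-robin fold assignment within each class -- a simplified
    stand-in for StratifiedKFold's allocation (hypothetical helper)."""
    seen = defaultdict(int)
    fold_of = []
    for label in y:
        fold_of.append(seen[label] % n_splits)
        seen[label] += 1
    return fold_of

y = [4] * 100 + [0] * 890 + [1] * 10   # same 0.10 / 0.89 / 0.01 ratios
fold_of = stratified_fold_ids(y, 5)
for f in range(5):
    test = [label for label, fid in zip(y, fold_of) if fid == f]
    assert abs(test.count(0) / len(test) - 0.89) < 0.01
```

Because each class is dealt out evenly across folds, every test fold reproduces the global class proportions up to integer rounding.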
def test_kfold_balance():
# Check that KFold returns folds with balanced sizes
for i in range(11, 17):
kf = KFold(5).split(X=np.ones(i))
sizes = []
for _, test in kf:
sizes.append(len(test))
assert_true((np.max(sizes) - np.min(sizes)) <= 1)
assert_equal(np.sum(sizes), i)
def test_stratifiedkfold_balance():
# Check that KFold returns folds with balanced sizes (only when
# stratification is possible)
# Repeat with shuffling turned off and on
X = np.ones(17)
y = [0] * 3 + [1] * 14
for shuffle in (True, False):
cv = StratifiedKFold(3, shuffle=shuffle)
for i in range(11, 17):
skf = cv.split(X[:i], y[:i])
sizes = []
for _, test in skf:
sizes.append(len(test))
assert_true((np.max(sizes) - np.min(sizes)) <= 1)
assert_equal(np.sum(sizes), i)
def test_shuffle_kfold():
# Check the indices are shuffled properly
kf = KFold(3)
kf2 = KFold(3, shuffle=True, random_state=0)
kf3 = KFold(3, shuffle=True, random_state=1)
X = np.ones(300)
all_folds = np.zeros(300)
for (tr1, te1), (tr2, te2), (tr3, te3) in zip(
kf.split(X), kf2.split(X), kf3.split(X)):
for tr_a, tr_b in combinations((tr1, tr2, tr3), 2):
# Assert that there is no complete overlap
assert_not_equal(len(np.intersect1d(tr_a, tr_b)), len(tr1))
# Set all test indices in successive iterations of kf2 to 1
all_folds[te2] = 1
# Check that all indices are returned in the different test folds
assert_equal(sum(all_folds), 300)
def test_shuffle_kfold_stratifiedkfold_reproducibility():
# Check that when the shuffle is True multiple split calls produce the
# same split when random_state is set
X = np.ones(15) # Divisible by 3
y = [0] * 7 + [1] * 8
X2 = np.ones(16) # Not divisible by 3
y2 = [0] * 8 + [1] * 8
kf = KFold(3, shuffle=True, random_state=0)
skf = StratifiedKFold(3, shuffle=True, random_state=0)
for cv in (kf, skf):
np.testing.assert_equal(list(cv.split(X, y)), list(cv.split(X, y)))
np.testing.assert_equal(list(cv.split(X2, y2)), list(cv.split(X2, y2)))
kf = KFold(3, shuffle=True)
skf = StratifiedKFold(3, shuffle=True)
for cv in (kf, skf):
for data in zip((X, X2), (y, y2)):
try:
np.testing.assert_equal(list(cv.split(*data)),
list(cv.split(*data)))
except AssertionError:
pass
else:
raise AssertionError("The splits for data, %s, are same even "
"when random state is not set" % data)
def test_shuffle_stratifiedkfold():
# Check that shuffling is happening when requested, and for proper
# sample coverage
X_40 = np.ones(40)
y = [0] * 20 + [1] * 20
kf0 = StratifiedKFold(5, shuffle=True, random_state=0)
kf1 = StratifiedKFold(5, shuffle=True, random_state=1)
for (_, test0), (_, test1) in zip(kf0.split(X_40, y),
kf1.split(X_40, y)):
assert_not_equal(set(test0), set(test1))
check_cv_coverage(kf0, X_40, y, groups=None, expected_n_splits=5)
def test_kfold_can_detect_dependent_samples_on_digits(): # see #2372
# The digits samples are dependent: they are apparently grouped by authors
# although we don't have any information on the groups segment locations
# for this data. We can highlight this fact by computing k-fold cross-
# validation with and without shuffling: we observe that the shuffling case
# wrongly makes the IID assumption and is therefore too optimistic: it
# estimates a much higher accuracy (around 0.93) than the non-shuffling
# variant (around 0.81).
X, y = digits.data[:600], digits.target[:600]
model = SVC(C=10, gamma=0.005)
n_splits = 3
cv = KFold(n_splits=n_splits, shuffle=False)
mean_score = cross_val_score(model, X, y, cv=cv).mean()
assert_greater(0.92, mean_score)
assert_greater(mean_score, 0.80)
# Shuffling the data artificially breaks the dependency and hides the
# overfitting of the model with regard to the writing style of the authors
# by yielding a seriously overestimated score:
cv = KFold(n_splits, shuffle=True, random_state=0)
mean_score = cross_val_score(model, X, y, cv=cv).mean()
assert_greater(mean_score, 0.92)
cv = KFold(n_splits, shuffle=True, random_state=1)
mean_score = cross_val_score(model, X, y, cv=cv).mean()
assert_greater(mean_score, 0.92)
# Similarly, StratifiedKFold should try to shuffle the data as little
# as possible (while respecting the balanced class constraints)
# and thus be able to detect the dependency by not overestimating
# the CV score either. As the digits dataset is approximately balanced
# the estimated mean score is close to the score measured with
# non-shuffled KFold
cv = StratifiedKFold(n_splits)
mean_score = cross_val_score(model, X, y, cv=cv).mean()
assert_greater(0.93, mean_score)
assert_greater(mean_score, 0.80)
def test_shuffle_split():
ss1 = ShuffleSplit(test_size=0.2, random_state=0).split(X)
ss2 = ShuffleSplit(test_size=2, random_state=0).split(X)
ss3 = ShuffleSplit(test_size=np.int32(2), random_state=0).split(X)
for typ in six.integer_types:
ss4 = ShuffleSplit(test_size=typ(2), random_state=0).split(X)
for t1, t2, t3, t4 in zip(ss1, ss2, ss3, ss4):
assert_array_equal(t1[0], t2[0])
assert_array_equal(t2[0], t3[0])
assert_array_equal(t3[0], t4[0])
assert_array_equal(t1[1], t2[1])
assert_array_equal(t2[1], t3[1])
assert_array_equal(t3[1], t4[1])
def test_stratified_shuffle_split_init():
X = np.arange(7)
y = np.asarray([0, 1, 1, 1, 2, 2, 2])
# Check that error is raised if there is a class with only one sample
assert_raises(ValueError, next,
StratifiedShuffleSplit(3, 0.2).split(X, y))
# Check that error is raised if the test set size is smaller than n_classes
assert_raises(ValueError, next, StratifiedShuffleSplit(3, 2).split(X, y))
# Check that error is raised if the train set size is smaller than
# n_classes
assert_raises(ValueError, next,
StratifiedShuffleSplit(3, 3, 2).split(X, y))
X = np.arange(9)
y = np.asarray([0, 0, 0, 1, 1, 1, 2, 2, 2])
# Check that errors are raised if there are not enough samples
assert_raises(ValueError, StratifiedShuffleSplit, 3, 0.5, 0.6)
assert_raises(ValueError, next,
StratifiedShuffleSplit(3, 8, 0.6).split(X, y))
assert_raises(ValueError, next,
StratifiedShuffleSplit(3, 0.6, 8).split(X, y))
# Train size or test size too small
assert_raises(ValueError, next,
StratifiedShuffleSplit(train_size=2).split(X, y))
assert_raises(ValueError, next,
StratifiedShuffleSplit(test_size=2).split(X, y))
def test_stratified_shuffle_split_respects_test_size():
y = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2])
test_size = 5
train_size = 10
sss = StratifiedShuffleSplit(6, test_size=test_size, train_size=train_size,
random_state=0).split(np.ones(len(y)), y)
for train, test in sss:
assert_equal(len(train), train_size)
assert_equal(len(test), test_size)
def test_stratified_shuffle_split_iter():
ys = [np.array([1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3]),
np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]),
np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2] * 2),
np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4]),
np.array([-1] * 800 + [1] * 50),
np.concatenate([[i] * (100 + i) for i in range(11)])
]
for y in ys:
sss = StratifiedShuffleSplit(6, test_size=0.33,
random_state=0).split(np.ones(len(y)), y)
# this is how test-size is computed internally
# in _validate_shuffle_split
test_size = np.ceil(0.33 * len(y))
train_size = len(y) - test_size
for train, test in sss:
assert_array_equal(np.unique(y[train]), np.unique(y[test]))
# Check that folds keep class proportions
p_train = (np.bincount(np.unique(y[train],
return_inverse=True)[1]) /
float(len(y[train])))
p_test = (np.bincount(np.unique(y[test],
return_inverse=True)[1]) /
float(len(y[test])))
assert_array_almost_equal(p_train, p_test, 1)
assert_equal(len(train) + len(test), y.size)
assert_equal(len(train), train_size)
assert_equal(len(test), test_size)
assert_array_equal(np.lib.arraysetops.intersect1d(train, test), [])
def test_stratified_shuffle_split_even():
# Test that StratifiedShuffleSplit draws indices with equal chance
n_folds = 5
n_splits = 1000
def assert_counts_are_ok(idx_counts, p):
# Here we test that the distribution of the counts
# per index is close enough to a binomial
threshold = 0.05 / n_splits
bf = stats.binom(n_splits, p)
for count in idx_counts:
prob = bf.pmf(count)
assert_true(prob > threshold,
"An index is not drawn with chance corresponding "
"to even draws")
for n_samples in (6, 22):
groups = np.array((n_samples // 2) * [0, 1])
splits = StratifiedShuffleSplit(n_splits=n_splits,
test_size=1. / n_folds,
random_state=0)
train_counts = [0] * n_samples
test_counts = [0] * n_samples
n_splits_actual = 0
for train, test in splits.split(X=np.ones(n_samples), y=groups):
n_splits_actual += 1
for counter, ids in [(train_counts, train), (test_counts, test)]:
for id in ids:
counter[id] += 1
assert_equal(n_splits_actual, n_splits)
n_train, n_test = _validate_shuffle_split(
n_samples, test_size=1. / n_folds, train_size=1. - (1. / n_folds))
assert_equal(len(train), n_train)
assert_equal(len(test), n_test)
assert_equal(len(set(train).intersection(test)), 0)
group_counts = np.unique(groups)
assert_equal(splits.test_size, 1.0 / n_folds)
assert_equal(n_train + n_test, len(groups))
assert_equal(len(group_counts), 2)
ex_test_p = float(n_test) / n_samples
ex_train_p = float(n_train) / n_samples
assert_counts_are_ok(train_counts, ex_train_p)
assert_counts_are_ok(test_counts, ex_test_p)
def test_stratified_shuffle_split_overlap_train_test_bug():
# See https://github.com/scikit-learn/scikit-learn/issues/6121 for
# the original bug report
y = [0, 1, 2, 3] * 3 + [4, 5] * 5
X = np.ones_like(y)
sss = StratifiedShuffleSplit(n_splits=1,
test_size=0.5, random_state=0)
train, test = next(iter(sss.split(X=X, y=y)))
assert_array_equal(np.intersect1d(train, test), [])
def test_predefinedsplit_with_kfold_split():
# Check that PredefinedSplit can reproduce a split generated by KFold.
folds = -1 * np.ones(10)
kf_train = []
kf_test = []
for i, (train_ind, test_ind) in enumerate(KFold(5, shuffle=True).split(X)):
kf_train.append(train_ind)
kf_test.append(test_ind)
folds[test_ind] = i
ps_train = []
ps_test = []
ps = PredefinedSplit(folds)
# n_splits is simply the number of unique folds
assert_equal(len(np.unique(folds)), ps.get_n_splits())
for train_ind, test_ind in ps.split():
ps_train.append(train_ind)
ps_test.append(test_ind)
assert_array_equal(ps_train, kf_train)
assert_array_equal(ps_test, kf_test)
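The mechanism being tested — rebuilding (train, test) pairs from a per-sample fold-id array, the way PredefinedSplit consumes its `test_fold` argument — can be sketched as a small hypothetical helper:

```python
def predefined_splits(folds):
    """Rebuild (train, test) index pairs from a per-sample fold-id array,
    in the spirit of PredefinedSplit (hypothetical sketch)."""
    splits = []
    for f in sorted(set(folds)):
        test = [i for i, fid in enumerate(folds) if fid == f]
        train = [i for i, fid in enumerate(folds) if fid != f]
        splits.append((train, test))
    return splits

assert predefined_splits([0, 0, 1, 1]) == [([2, 3], [0, 1]), ([0, 1], [2, 3])]
```

Each unique fold id yields one split where that fold is the test set and everything else is the train set, which is why recording KFold's test indices into `folds` reproduces the original splits exactly.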
def test_group_shuffle_split():
groups = [np.array([1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3]),
np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]),
np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]),
np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4])]
for l in groups:
X = y = np.ones(len(l))
n_splits = 6
test_size = 1./3
slo = GroupShuffleSplit(n_splits, test_size=test_size, random_state=0)
# Make sure the repr works
repr(slo)
# Test that the length is correct
assert_equal(slo.get_n_splits(X, y, groups=l), n_splits)
l_unique = np.unique(l)
for train, test in slo.split(X, y, groups=l):
# First test: no train group is in the test set and vice versa
l_train_unique = np.unique(l[train])
l_test_unique = np.unique(l[test])
assert_false(np.any(np.in1d(l[train], l_test_unique)))
assert_false(np.any(np.in1d(l[test], l_train_unique)))
# Second test: train and test add up to all the data
assert_equal(l[train].size + l[test].size, l.size)
# Third test: train and test are disjoint
assert_array_equal(np.intersect1d(train, test), [])
# Fourth test:
# unique train and test groups are correct, +- 1 for rounding error
assert_true(abs(len(l_test_unique) -
round(test_size * len(l_unique))) <= 1)
assert_true(abs(len(l_train_unique) -
round((1.0 - test_size) * len(l_unique))) <= 1)
def test_leave_group_out_changing_groups():
# Check that LeaveOneGroupOut and LeavePGroupsOut work normally if
# the groups variable is changed before calling split
groups = np.array([0, 1, 2, 1, 1, 2, 0, 0])
X = np.ones(len(groups))
groups_changing = np.array(groups, copy=True)
lolo = LeaveOneGroupOut().split(X, groups=groups)
lolo_changing = LeaveOneGroupOut().split(X, groups=groups)
lplo = LeavePGroupsOut(n_groups=2).split(X, groups=groups)
lplo_changing = LeavePGroupsOut(n_groups=2).split(X, groups=groups)
groups_changing[:] = 0
for llo, llo_changing in [(lolo, lolo_changing), (lplo, lplo_changing)]:
for (train, test), (train_chan, test_chan) in zip(llo, llo_changing):
assert_array_equal(train, train_chan)
assert_array_equal(test, test_chan)
# n_splits = number of 2-group combinations of the unique groups = C(3, 2) = 3
assert_equal(3, LeavePGroupsOut(n_groups=2).get_n_splits(X, y, groups))
# n_splits = number of unique groups (C(n_unique_groups, 1) = n_unique_groups)
assert_equal(3, LeaveOneGroupOut().get_n_splits(X, y, groups))
def test_train_test_split_errors():
assert_raises(ValueError, train_test_split)
assert_raises(ValueError, train_test_split, range(3), train_size=1.1)
assert_raises(ValueError, train_test_split, range(3), test_size=0.6,
train_size=0.6)
assert_raises(ValueError, train_test_split, range(3),
test_size=np.float32(0.6), train_size=np.float32(0.6))
assert_raises(ValueError, train_test_split, range(3),
test_size="wrong_type")
assert_raises(ValueError, train_test_split, range(3), test_size=2,
train_size=4)
assert_raises(TypeError, train_test_split, range(3),
some_argument=1.1)
assert_raises(ValueError, train_test_split, range(3), range(42))
def test_train_test_split():
X = np.arange(100).reshape((10, 10))
X_s = coo_matrix(X)
y = np.arange(10)
# simple test
split = train_test_split(X, y, test_size=None, train_size=.5)
X_train, X_test, y_train, y_test = split
assert_equal(len(y_test), len(y_train))
# test correspondence of X and y
assert_array_equal(X_train[:, 0], y_train * 10)
assert_array_equal(X_test[:, 0], y_test * 10)
# don't convert lists to anything else by default
split = train_test_split(X, X_s, y.tolist())
X_train, X_test, X_s_train, X_s_test, y_train, y_test = split
assert_true(isinstance(y_train, list))
assert_true(isinstance(y_test, list))
# allow nd-arrays
X_4d = np.arange(10 * 5 * 3 * 2).reshape(10, 5, 3, 2)
y_3d = np.arange(10 * 7 * 11).reshape(10, 7, 11)
split = train_test_split(X_4d, y_3d)
assert_equal(split[0].shape, (7, 5, 3, 2))
assert_equal(split[1].shape, (3, 5, 3, 2))
assert_equal(split[2].shape, (7, 7, 11))
assert_equal(split[3].shape, (3, 7, 11))
# test stratification option
y = np.array([1, 1, 1, 1, 2, 2, 2, 2])
for test_size, exp_test_size in zip([2, 4, 0.25, 0.5, 0.75],
[2, 4, 2, 4, 6]):
train, test = train_test_split(y, test_size=test_size,
stratify=y,
random_state=0)
assert_equal(len(test), exp_test_size)
assert_equal(len(test) + len(train), len(y))
# check the 1:1 ratio of ones and twos in the data is preserved
assert_equal(np.sum(train == 1), np.sum(train == 2))
@ignore_warnings
def train_test_split_pandas():
# check train_test_split doesn't destroy pandas dataframe
types = [MockDataFrame]
try:
from pandas import DataFrame
types.append(DataFrame)
except ImportError:
pass
for InputFeatureType in types:
# X dataframe
X_df = InputFeatureType(X)
X_train, X_test = train_test_split(X_df)
assert_true(isinstance(X_train, InputFeatureType))
assert_true(isinstance(X_test, InputFeatureType))
def train_test_split_sparse():
# check that train_test_split converts scipy sparse matrices
# to csr, as stated in the documentation
X = np.arange(100).reshape((10, 10))
sparse_types = [csr_matrix, csc_matrix, coo_matrix]
for InputFeatureType in sparse_types:
X_s = InputFeatureType(X)
X_train, X_test = train_test_split(X_s)
assert_true(isinstance(X_train, csr_matrix))
assert_true(isinstance(X_test, csr_matrix))
def train_test_split_mock_pandas():
# X mock dataframe
X_df = MockDataFrame(X)
X_train, X_test = train_test_split(X_df)
assert_true(isinstance(X_train, MockDataFrame))
assert_true(isinstance(X_test, MockDataFrame))
X_train_arr, X_test_arr = train_test_split(X_df)
def test_shufflesplit_errors():
# When the {test|train}_size is a float/invalid, error is raised at init
assert_raises(ValueError, ShuffleSplit, test_size=None, train_size=None)
assert_raises(ValueError, ShuffleSplit, test_size=2.0)
assert_raises(ValueError, ShuffleSplit, test_size=1.0)
assert_raises(ValueError, ShuffleSplit, test_size=0.1, train_size=0.95)
assert_raises(ValueError, ShuffleSplit, train_size=1j)
# When the {test|train}_size is an int, validation is based on the input X
# and happens at split(...)
assert_raises(ValueError, next, ShuffleSplit(test_size=11).split(X))
assert_raises(ValueError, next, ShuffleSplit(test_size=10).split(X))
assert_raises(ValueError, next, ShuffleSplit(test_size=8,
train_size=3).split(X))
def test_shufflesplit_reproducible():
# Check that iterating twice on the ShuffleSplit gives the same
# sequence of train-test when the random_state is given
ss = ShuffleSplit(random_state=21)
assert_array_equal(list(a for a, b in ss.split(X)),
list(a for a, b in ss.split(X)))
def test_train_test_split_allow_nans():
# Check that train_test_split allows input data with NaNs
X = np.arange(200, dtype=np.float64).reshape(10, -1)
X[2, :] = np.nan
y = np.repeat([0, 1], X.shape[0] / 2)
train_test_split(X, y, test_size=0.2, random_state=42)
def test_check_cv():
X = np.ones(9)
cv = check_cv(3, classifier=False)
# Use numpy.testing.assert_equal which recursively compares
# lists of lists
np.testing.assert_equal(list(KFold(3).split(X)), list(cv.split(X)))
y_binary = np.array([0, 1, 0, 1, 0, 0, 1, 1, 1])
cv = check_cv(3, y_binary, classifier=True)
np.testing.assert_equal(list(StratifiedKFold(3).split(X, y_binary)),
list(cv.split(X, y_binary)))
y_multiclass = np.array([0, 1, 0, 1, 2, 1, 2, 0, 2])
cv = check_cv(3, y_multiclass, classifier=True)
np.testing.assert_equal(list(StratifiedKFold(3).split(X, y_multiclass)),
list(cv.split(X, y_multiclass)))
X = np.ones(5)
y_multilabel = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1],
[1, 1, 0, 1], [0, 0, 1, 0]])
cv = check_cv(3, y_multilabel, classifier=True)
np.testing.assert_equal(list(KFold(3).split(X)), list(cv.split(X)))
y_multioutput = np.array([[1, 2], [0, 3], [0, 0], [3, 1], [2, 0]])
cv = check_cv(3, y_multioutput, classifier=True)
np.testing.assert_equal(list(KFold(3).split(X)), list(cv.split(X)))
# Check if the old style classes are wrapped to have a split method
X = np.ones(9)
y_multiclass = np.array([0, 1, 0, 1, 2, 1, 2, 0, 2])
cv1 = check_cv(3, y_multiclass, classifier=True)
with warnings.catch_warnings(record=True):
from sklearn.cross_validation import StratifiedKFold as OldSKF
cv2 = check_cv(OldSKF(y_multiclass, n_folds=3))
np.testing.assert_equal(list(cv1.split(X, y_multiclass)),
list(cv2.split()))
assert_raises(ValueError, check_cv, cv="lolo")
def test_cv_iterable_wrapper():
y_multiclass = np.array([0, 1, 0, 1, 2, 1, 2, 0, 2])
with warnings.catch_warnings(record=True):
from sklearn.cross_validation import StratifiedKFold as OldSKF
cv = OldSKF(y_multiclass, n_folds=3)
wrapped_old_skf = _CVIterableWrapper(cv)
# Check if split works correctly
np.testing.assert_equal(list(cv), list(wrapped_old_skf.split()))
# Check if get_n_splits works correctly
assert_equal(len(cv), wrapped_old_skf.get_n_splits())
def test_group_kfold():
rng = np.random.RandomState(0)
# Parameters of the test
n_groups = 15
n_samples = 1000
n_splits = 5
X = y = np.ones(n_samples)
# Construct the test data
tolerance = 0.05 * n_samples # 5 percent error allowed
groups = rng.randint(0, n_groups, n_samples)
ideal_n_groups_per_fold = n_samples // n_splits
len(np.unique(groups))
# Get the test fold indices from the test set indices of each fold
folds = np.zeros(n_samples)
lkf = GroupKFold(n_splits=n_splits)
for i, (_, test) in enumerate(lkf.split(X, y, groups)):
folds[test] = i
# Check that folds have approximately the same size
assert_equal(len(folds), len(groups))
for i in np.unique(folds):
assert_greater_equal(tolerance,
abs(sum(folds == i) - ideal_n_groups_per_fold))
# Check that each group appears only in 1 fold
for group in np.unique(groups):
assert_equal(len(np.unique(folds[groups == group])), 1)
# Check that no group is on both sides of the split
groups = np.asarray(groups, dtype=object)
for train, test in lkf.split(X, y, groups):
assert_equal(len(np.intersect1d(groups[train], groups[test])), 0)
# Construct the test data
groups = np.array(['Albert', 'Jean', 'Bertrand', 'Michel', 'Jean',
'Francis', 'Robert', 'Michel', 'Rachel', 'Lois',
'Michelle', 'Bernard', 'Marion', 'Laura', 'Jean',
'Rachel', 'Franck', 'John', 'Gael', 'Anna', 'Alix',
'Robert', 'Marion', 'David', 'Tony', 'Abel', 'Becky',
'Madmood', 'Cary', 'Mary', 'Alexandre', 'David',
'Francis', 'Barack', 'Abdoul', 'Rasha', 'Xi', 'Silvia'])
n_groups = len(np.unique(groups))
n_samples = len(groups)
n_splits = 5
tolerance = 0.05 * n_samples # 5 percent error allowed
ideal_n_groups_per_fold = n_samples // n_splits
X = y = np.ones(n_samples)
# Get the test fold indices from the test set indices of each fold
folds = np.zeros(n_samples)
for i, (_, test) in enumerate(lkf.split(X, y, groups)):
folds[test] = i
# Check that folds have approximately the same size
assert_equal(len(folds), len(groups))
for i in np.unique(folds):
assert_greater_equal(tolerance,
abs(sum(folds == i) - ideal_n_groups_per_fold))
# Check that each group appears only in 1 fold
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
for group in np.unique(groups):
assert_equal(len(np.unique(folds[groups == group])), 1)
# Check that no group is on both sides of the split
groups = np.asarray(groups, dtype=object)
for train, test in lkf.split(X, y, groups):
assert_equal(len(np.intersect1d(groups[train], groups[test])), 0)
# Should fail if there are more folds than groups
groups = np.array([1, 1, 1, 2, 2])
X = y = np.ones(len(groups))
assert_raises_regexp(ValueError, "Cannot have number of splits.*greater",
next, GroupKFold(n_splits=3).split(X, y, groups))
def test_time_series_cv():
X = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]
# Should fail if there are more folds than samples
assert_raises_regexp(ValueError, "Cannot have number of folds.*greater",
next,
TimeSeriesSplit(n_splits=7).split(X))
tscv = TimeSeriesSplit(2)
# Manually check that Time Series CV preserves the data
# ordering on toy datasets
splits = tscv.split(X[:-1])
train, test = next(splits)
assert_array_equal(train, [0, 1])
assert_array_equal(test, [2, 3])
train, test = next(splits)
assert_array_equal(train, [0, 1, 2, 3])
assert_array_equal(test, [4, 5])
splits = TimeSeriesSplit(2).split(X)
train, test = next(splits)
assert_array_equal(train, [0, 1, 2])
assert_array_equal(test, [3, 4])
train, test = next(splits)
assert_array_equal(train, [0, 1, 2, 3, 4])
assert_array_equal(test, [5, 6])
# Check get_n_splits returns the correct number of splits
splits = TimeSeriesSplit(2).split(X)
n_splits_actual = len(list(splits))
assert_equal(n_splits_actual, tscv.get_n_splits())
assert_equal(n_splits_actual, 2)
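The expanding-window behaviour checked above can be sketched in plain Python. This is a hypothetical helper reproducing the split arithmetic, not TimeSeriesSplit itself:

```python
def time_series_splits(n_samples, n_splits):
    """Expanding-window splits: every test fold comes strictly after its
    training data, with test_size = n_samples // (n_splits + 1)."""
    test_size = n_samples // (n_splits + 1)
    splits = []
    for i in range(1, n_splits + 1):
        train_end = n_samples - (n_splits - i + 1) * test_size
        splits.append((list(range(train_end)),
                       list(range(train_end, train_end + test_size))))
    return splits

# Matches the toy checks above (7 samples, 2 splits):
assert time_series_splits(7, 2) == [([0, 1, 2], [3, 4]),
                                    ([0, 1, 2, 3, 4], [5, 6])]
```

Keeping each test fold strictly after its training window is what makes this splitter safe for temporally ordered data.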
def test_nested_cv():
# Test if nested cross validation works with different combinations of cv
rng = np.random.RandomState(0)
X, y = make_classification(n_samples=15, n_classes=2, random_state=0)
groups = rng.randint(0, 5, 15)
cvs = [LeaveOneGroupOut(), LeaveOneOut(), GroupKFold(), StratifiedKFold(),
StratifiedShuffleSplit(n_splits=3, random_state=0)]
for inner_cv, outer_cv in combinations_with_replacement(cvs, 2):
gs = GridSearchCV(Ridge(), param_grid={'alpha': [1, .1]},
cv=inner_cv)
cross_val_score(gs, X=X, y=y, groups=groups, cv=outer_cv,
fit_params={'groups': groups})
def test_build_repr():
class MockSplitter:
def __init__(self, a, b=0, c=None):
self.a = a
self.b = b
self.c = c
def __repr__(self):
return _build_repr(self)
assert_equal(repr(MockSplitter(5, 6)), "MockSplitter(a=5, b=6, c=None)")
| bsd-3-clause |
chenyyx/scikit-learn-doc-zh | examples/en/applications/plot_prediction_latency.py | 13 | 11475 | """
==================
Prediction Latency
==================
This is an example showing the prediction latency of various scikit-learn
estimators.
The goal is to measure the latency one can expect when doing predictions
either in bulk or atomic (i.e. one by one) mode.
The plots represent the distribution of the prediction latency as a boxplot.
"""
# Authors: Eustache Diemert <eustache@diemert.fr>
# License: BSD 3 clause
from __future__ import print_function
from collections import defaultdict
import time
import gc
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from scipy.stats import scoreatpercentile
from sklearn.datasets.samples_generator import make_regression
from sklearn.ensemble.forest import RandomForestRegressor
from sklearn.linear_model.ridge import Ridge
from sklearn.linear_model.stochastic_gradient import SGDRegressor
from sklearn.svm.classes import SVR
from sklearn.utils import shuffle
def _not_in_sphinx():
# Hack to detect whether we are running under the sphinx builder
return '__file__' in globals()
def atomic_benchmark_estimator(estimator, X_test, verbose=False):
"""Measure runtime prediction of each instance."""
n_instances = X_test.shape[0]
runtimes = np.zeros(n_instances, dtype=float)
for i in range(n_instances):
instance = X_test[[i], :]
start = time.time()
estimator.predict(instance)
runtimes[i] = time.time() - start
if verbose:
print("atomic_benchmark runtimes:", min(runtimes), scoreatpercentile(
runtimes, 50), max(runtimes))
return runtimes
def bulk_benchmark_estimator(estimator, X_test, n_bulk_repeats, verbose):
"""Measure runtime prediction of the whole input."""
n_instances = X_test.shape[0]
runtimes = np.zeros(n_bulk_repeats, dtype=float)
for i in range(n_bulk_repeats):
start = time.time()
estimator.predict(X_test)
runtimes[i] = time.time() - start
runtimes = np.array(list(map(lambda x: x / float(n_instances), runtimes)))
if verbose:
print("bulk_benchmark runtimes:", min(runtimes), scoreatpercentile(
runtimes, 50), max(runtimes))
return runtimes
def benchmark_estimator(estimator, X_test, n_bulk_repeats=30, verbose=False):
"""
Measure runtimes of prediction in both atomic and bulk mode.
Parameters
----------
estimator : already trained estimator supporting `predict()`
X_test : test input
n_bulk_repeats : how many times to repeat when evaluating bulk mode
Returns
-------
atomic_runtimes, bulk_runtimes : a pair of `np.array` which contain the
runtimes in seconds.
"""
atomic_runtimes = atomic_benchmark_estimator(estimator, X_test, verbose)
bulk_runtimes = bulk_benchmark_estimator(estimator, X_test, n_bulk_repeats,
verbose)
return atomic_runtimes, bulk_runtimes
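The atomic-vs-bulk measurement pattern above can be demonstrated without scikit-learn. This hedged sketch times any callable standing in for `estimator.predict`; the helper name and dummy predictor are hypothetical:

```python
import time

def benchmark_predictor(predict, inputs, n_bulk_repeats=3):
    """Per-instance (atomic) vs whole-batch (bulk) latency, in seconds;
    `predict` is any callable standing in for estimator.predict."""
    atomic = []
    for x in inputs:
        start = time.time()
        predict([x])                 # one sample at a time
        atomic.append(time.time() - start)
    bulk = []
    for _ in range(n_bulk_repeats):
        start = time.time()
        predict(inputs)              # whole batch at once
        bulk.append((time.time() - start) / len(inputs))  # per instance
    return atomic, bulk

atomic, bulk = benchmark_predictor(lambda batch: [v * 2 for v in batch],
                                   list(range(100)))
```

Dividing the bulk runtime by the batch size makes the two modes directly comparable per instance, which is how the plots later in the script contrast them.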
def generate_dataset(n_train, n_test, n_features, noise=0.1, verbose=False):
"""Generate a regression dataset with the given parameters."""
if verbose:
print("generating dataset...")
X, y, coef = make_regression(n_samples=n_train + n_test,
n_features=n_features, noise=noise, coef=True)
random_seed = 13
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=n_train, random_state=random_seed)
X_train, y_train = shuffle(X_train, y_train, random_state=random_seed)
X_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
X_test = X_scaler.transform(X_test)
y_scaler = StandardScaler()
y_train = y_scaler.fit_transform(y_train[:, None])[:, 0]
y_test = y_scaler.transform(y_test[:, None])[:, 0]
gc.collect()
if verbose:
print("ok")
return X_train, y_train, X_test, y_test
def boxplot_runtimes(runtimes, pred_type, configuration):
"""
Plot a new `Figure` with boxplots of prediction runtimes.
Parameters
----------
runtimes : list of `np.array` of latencies in micro-seconds
pred_type : 'bulk' or 'atomic'
configuration : dict describing the benchmark (provides the estimator
names and complexities used as x-tick labels)
"""
fig, ax1 = plt.subplots(figsize=(10, 6))
bp = plt.boxplot(runtimes)
cls_infos = ['%s\n(%d %s)' % (estimator_conf['name'],
estimator_conf['complexity_computer'](
estimator_conf['instance']),
estimator_conf['complexity_label']) for
estimator_conf in configuration['estimators']]
plt.setp(ax1, xticklabels=cls_infos)
plt.setp(bp['boxes'], color='black')
plt.setp(bp['whiskers'], color='black')
plt.setp(bp['fliers'], color='red', marker='+')
ax1.yaxis.grid(True, linestyle='-', which='major', color='lightgrey',
alpha=0.5)
ax1.set_axisbelow(True)
ax1.set_title('Prediction Time per Instance - %s, %d feats.' % (
pred_type.capitalize(),
configuration['n_features']))
ax1.set_ylabel('Prediction Time (us)')
plt.show()
def benchmark(configuration):
"""Run the whole benchmark."""
X_train, y_train, X_test, y_test = generate_dataset(
configuration['n_train'], configuration['n_test'],
configuration['n_features'])
stats = {}
for estimator_conf in configuration['estimators']:
print("Benchmarking", estimator_conf['instance'])
estimator_conf['instance'].fit(X_train, y_train)
gc.collect()
a, b = benchmark_estimator(estimator_conf['instance'], X_test)
stats[estimator_conf['name']] = {'atomic': a, 'bulk': b}
cls_names = [estimator_conf['name'] for estimator_conf in configuration[
'estimators']]
runtimes = [1e6 * stats[clf_name]['atomic'] for clf_name in cls_names]
boxplot_runtimes(runtimes, 'atomic', configuration)
runtimes = [1e6 * stats[clf_name]['bulk'] for clf_name in cls_names]
boxplot_runtimes(runtimes, 'bulk (%d)' % configuration['n_test'],
configuration)
def n_feature_influence(estimators, n_train, n_test, n_features, percentile):
"""
Estimate influence of the number of features on prediction time.
Parameters
----------
estimators : dict of (name (str), estimator) to benchmark
n_train : number of training instances (int)
n_test : number of testing instances (int)
n_features : list of feature-space dimensionality to test (int)
percentile : percentile at which to measure the speed (int [0-100])
Returns
-------
percentiles : dict(estimator_name,
dict(n_features, percentile_perf_in_us))
"""
percentiles = defaultdict(defaultdict)
for n in n_features:
print("benchmarking with %d features" % n)
X_train, y_train, X_test, y_test = generate_dataset(n_train, n_test, n)
for cls_name, estimator in estimators.items():
estimator.fit(X_train, y_train)
gc.collect()
runtimes = bulk_benchmark_estimator(estimator, X_test, 30, False)
percentiles[cls_name][n] = 1e6 * scoreatpercentile(runtimes,
percentile)
return percentiles
def plot_n_features_influence(percentiles, percentile):
fig, ax1 = plt.subplots(figsize=(10, 6))
colors = ['r', 'g', 'b']
for i, cls_name in enumerate(percentiles.keys()):
x = np.array(sorted([n for n in percentiles[cls_name].keys()]))
y = np.array([percentiles[cls_name][n] for n in x])
plt.plot(x, y, color=colors[i])
ax1.yaxis.grid(True, linestyle='-', which='major', color='lightgrey',
alpha=0.5)
ax1.set_axisbelow(True)
ax1.set_title('Evolution of Prediction Time with #Features')
ax1.set_xlabel('#Features')
ax1.set_ylabel('Prediction Time at %d%%-ile (us)' % percentile)
plt.show()
def benchmark_throughputs(configuration, duration_secs=0.1):
"""benchmark throughput for different estimators."""
X_train, y_train, X_test, y_test = generate_dataset(
configuration['n_train'], configuration['n_test'],
configuration['n_features'])
throughputs = dict()
for estimator_config in configuration['estimators']:
estimator_config['instance'].fit(X_train, y_train)
start_time = time.time()
n_predictions = 0
while (time.time() - start_time) < duration_secs:
estimator_config['instance'].predict(X_test[[0]])
n_predictions += 1
throughputs[estimator_config['name']] = n_predictions / duration_secs
return throughputs
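The fixed-duration counting loop above can be isolated into a small helper. This is a hypothetical sketch using a dummy predictor, not the script's actual estimators:

```python
import time

def measure_throughput(predict, sample, duration_secs=0.05):
    """Count single-sample predictions completed in a fixed time window
    (same pattern as the loop above, with a dummy predictor)."""
    n, start = 0, time.time()
    while time.time() - start < duration_secs:
        predict([sample])
        n += 1
    return n / duration_secs

throughput = measure_throughput(lambda batch: [v + 1 for v in batch], 0)
```

Counting completed calls within a fixed wall-clock window gives predictions per second directly, without having to decide in advance how many repetitions to run.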
def plot_benchmark_throughput(throughputs, configuration):
fig, ax = plt.subplots(figsize=(10, 6))
colors = ['r', 'g', 'b']
cls_infos = ['%s\n(%d %s)' % (estimator_conf['name'],
estimator_conf['complexity_computer'](
estimator_conf['instance']),
estimator_conf['complexity_label']) for
estimator_conf in configuration['estimators']]
cls_values = [throughputs[estimator_conf['name']] for estimator_conf in
configuration['estimators']]
plt.bar(range(len(throughputs)), cls_values, width=0.5, color=colors)
ax.set_xticks(np.linspace(0.25, len(throughputs) - 0.75, len(throughputs)))
ax.set_xticklabels(cls_infos, fontsize=10)
ymax = max(cls_values) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel('Throughput (predictions/sec)')
ax.set_title('Prediction Throughput for different estimators (%d '
'features)' % configuration['n_features'])
plt.show()
# #############################################################################
# Main code
start_time = time.time()
# #############################################################################
# Benchmark bulk/atomic prediction speed for various regressors
configuration = {
'n_train': int(1e3),
'n_test': int(1e2),
'n_features': int(1e2),
'estimators': [
{'name': 'Linear Model',
'instance': SGDRegressor(penalty='elasticnet', alpha=0.01,
l1_ratio=0.25, fit_intercept=True),
'complexity_label': 'non-zero coefficients',
'complexity_computer': lambda clf: np.count_nonzero(clf.coef_)},
{'name': 'RandomForest',
'instance': RandomForestRegressor(),
'complexity_label': 'estimators',
'complexity_computer': lambda clf: clf.n_estimators},
{'name': 'SVR',
'instance': SVR(kernel='rbf'),
'complexity_label': 'support vectors',
'complexity_computer': lambda clf: len(clf.support_vectors_)},
]
}
benchmark(configuration)
# benchmark n_features influence on prediction speed
percentile = 90
percentiles = n_feature_influence({'ridge': Ridge()},
configuration['n_train'],
configuration['n_test'],
[100, 250, 500], percentile)
plot_n_features_influence(percentiles, percentile)
# benchmark throughput
throughputs = benchmark_throughputs(configuration)
plot_benchmark_throughput(throughputs, configuration)
stop_time = time.time()
print("example run in %.2fs" % (stop_time - start_time))
| gpl-3.0 |
poojavade/Genomics_Docker | Dockerfiles/gedlab-khmer-filter-abund/pymodules/python2.7/lib/python/statsmodels-0.5.0-py2.7-linux-x86_64.egg/statsmodels/sandbox/km_class.py | 5 | 11704 | #a class for the Kaplan-Meier estimator
import numpy as np
from math import sqrt
import matplotlib.pyplot as plt
class KAPLAN_MEIER(object):
def __init__(self, data, timesIn, groupIn, censoringIn):
raise RuntimeError('Newer version of Kaplan-Meier class available in survival2.py')
#store the inputs
self.data = data
self.timesIn = timesIn
self.groupIn = groupIn
self.censoringIn = censoringIn
def fit(self):
#split the data into groups based on the predicting variable
#get a set of all the groups
groups = list(set(self.data[:,self.groupIn]))
#create an empty list to store the data for different groups
groupList = []
#create an empty list for each group and add it to groups
for i in range(len(groups)):
groupList.append([])
#iterate through all the groups in groups
for i in range(len(groups)):
#iterate through the rows of dataArray
for j in range(len(self.data)):
#test if this row has the correct group
if self.data[j,self.groupIn] == groups[i]:
#add the row to groupList
groupList[i].append(self.data[j])
#create an empty list to store the times for each group
timeList = []
#iterate through all the groups
for i in range(len(groupList)):
#create an empty list
times = []
#iterate through all the rows of the group
for j in range(len(groupList[i])):
#get a list of all the times in the group
times.append(groupList[i][j][self.timesIn])
#get a sorted set of the times and store it in timeList
times = list(sorted(set(times)))
timeList.append(times)
#get a list of the number at risk and events at each time
#create an empty list to store the results in
timeCounts = []
#create an empty list to hold points for plotting
points = []
#create a list for points where censoring occurs
censoredPoints = []
#iterate through each group
for i in range(len(groupList)):
#initialize a variable to estimate the survival function
survival = 1
#initialize a variable to estimate the variance of
#the survival function
varSum = 0
#initialize a counter for the number at risk
riskCounter = len(groupList[i])
#create a list for the counts for this group
counts = []
#create a list for points to plot
x = []
y = []
#iterate through the list of times
for j in range(len(timeList[i])):
if j != 0:
if j == 1:
#add an indicator to tell if the time
#starts a new group
groupInd = 1
#add (0,1) to the list of points
x.append(0)
y.append(1)
#add the point time to the right of that
x.append(timeList[i][j-1])
y.append(1)
#add the point below that at survival
x.append(timeList[i][j-1])
y.append(survival)
#add the survival to y
y.append(survival)
else:
groupInd = 0
#add survival twice to y
y.append(survival)
y.append(survival)
#add the time twice to x
x.append(timeList[i][j-1])
x.append(timeList[i][j-1])
#add each censored time, number of censorings and
#its survival to censoredPoints
censoredPoints.append([timeList[i][j-1],
censoringNum,survival,groupInd])
#add the count to the list
counts.append([timeList[i][j-1],riskCounter,
eventCounter,survival,
sqrt(((survival)**2)*varSum)])
#increment the number at risk
riskCounter += -1*(riskChange)
#initialize a counter for the change in the number at risk
riskChange = 0
#initialize a counter to zero
eventCounter = 0
#initialize a counter to tell when censoring occurs
censoringCounter = 0
censoringNum = 0
#iterate through the observations in each group
for k in range(len(groupList[i])):
#check if the observation has the given time
if (groupList[i][k][self.timesIn]) == (timeList[i][j]):
#increment the number at risk counter
riskChange += 1
#check if this is an event or censoring
if groupList[i][k][self.censoringIn] == 1:
#add 1 to the counter
eventCounter += 1
else:
censoringNum += 1
#check if there are any events at this time
if eventCounter != censoringCounter:
censoringCounter = eventCounter
#calculate the estimate of the survival function
survival *= ((float(riskCounter) -
eventCounter)/(riskCounter))
try:
#calculate the estimate of the variance
varSum += (eventCounter)/((riskCounter)
*(float(riskCounter)-
eventCounter))
except ZeroDivisionError:
varSum = 0
#append the last row to counts
counts.append([timeList[i][len(timeList[i])-1],
riskCounter,eventCounter,survival,
sqrt(((survival)**2)*varSum)])
#add the last time once to x
x.append(timeList[i][len(timeList[i])-1])
x.append(timeList[i][len(timeList[i])-1])
#add the last survival twice to y
y.append(survival)
#y.append(survival)
censoredPoints.append([timeList[i][len(timeList[i])-1],
censoringNum,survival,1])
#add the list for the group to a list for all the groups
timeCounts.append(np.array(counts))
points.append([x,y])
#returns a list of arrays, where each array has as it columns: the time,
#the number at risk, the number of events, the estimated value of the
#survival function at that time, and the estimated standard error at
#that time, in that order
self.results = timeCounts
self.points = points
self.censoredPoints = censoredPoints
def plot(self):
x = []
#iterate through the groups
for i in range(len(self.points)):
#plot x and y
plt.plot(np.array(self.points[i][0]),np.array(self.points[i][1]))
#create lists of all the x and y values
x += self.points[i][0]
for j in range(len(self.censoredPoints)):
#check if censoring is occurring
if (self.censoredPoints[j][1] != 0):
#if this is the first censored point
if (self.censoredPoints[j][3] == 1) and (j == 0):
#calculate a distance beyond 1 to place it
#so all the points will fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j][0])))
#iterate through all the censored points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((1+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#if this censored point starts a new group
elif ((self.censoredPoints[j][3] == 1) and
(self.censoredPoints[j-1][3] == 1)):
#calculate a distance beyond 1 to place it
#so all the points will fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j][0])))
#iterate through all the censored points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((1+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#if this is the last censored point
elif j == (len(self.censoredPoints) - 1):
#calculate a distance beyond the previous time
#so that all the points will fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j][0])))
#iterate through all the points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((self.censoredPoints[j-1][0]+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#if this is a point in the middle of the group
else:
#calculate a distance beyond the current time
#to place the point, so they all fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j+1][0])
- self.censoredPoints[j][0]))
#iterate through all the points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((self.censoredPoints[j][0]+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#set the size of the plot so it extends to the max x and above 1 for y
plt.xlim((0,np.max(x)))
plt.ylim((0,1.05))
#label the axes
plt.xlabel('time')
plt.ylabel('survival')
plt.show()
def show_results(self):
#start a string that will be a table of the results
resultsString = ''
#iterate through all the groups
for i in range(len(self.results)):
#label the group and header
resultsString += ('Group {0}\n\n'.format(i) +
'Time At Risk Events Survival Std. Err\n')
for j in self.results[i]:
#add the results to the string
resultsString += (
'{0:<9d}{1:<12d}{2:<11d}{3:<13.4f}{4:<6.4f}\n'.format(
int(j[0]),int(j[1]),int(j[2]),j[3],j[4]))
print(resultsString)
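As a hedged, self-contained illustration of the product-limit update used in fit() above — survival is multiplied by (at_risk - events)/at_risk at each event time — the data below are made up (five subjects, one event at each of three times, no censoring):

```python
# One event at each of three distinct times, starting with 5 at risk.
event_counts = [1, 1, 1]
n_at_risk = 5
survival = 1.0
for d in event_counts:
    survival *= float(n_at_risk - d) / n_at_risk  # Kaplan-Meier step
    n_at_risk -= d
print(round(survival, 6))  # 0.4, i.e. 4/5 * 3/4 * 2/3
```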
| apache-2.0 |
AlexanderFabisch/scikit-learn | benchmarks/bench_isotonic.py | 268 | 3046 | """
Benchmarks of isotonic regression performance.
We generate a synthetic dataset of size 10^n, for n in [min, max], and
examine the time taken to run isotonic regression over the dataset.
The timings are then output to stdout, or visualized on a log-log scale
with matplotlib.
This allows the scaling of the algorithm with the problem size to be
visualized and understood.
"""
from __future__ import print_function
import numpy as np
import gc
from datetime import datetime
from sklearn.isotonic import isotonic_regression
from sklearn.utils.bench import total_seconds
import matplotlib.pyplot as plt
import argparse
def generate_perturbed_logarithm_dataset(size):
return np.random.randint(-50, 50, size=size) \
+ 50. * np.log(1 + np.arange(size))
def generate_logistic_dataset(size):
X = np.sort(np.random.normal(size=size))
return np.random.random(size=size) < 1.0 / (1.0 + np.exp(-X))
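A quick smoke test of the logistic generator above (a copy is included so the snippet runs on its own; only NumPy is assumed) — it returns a boolean array of the requested size:

```python
import numpy as np

def generate_logistic_dataset(size):
    X = np.sort(np.random.normal(size=size))
    return np.random.random(size=size) < 1.0 / (1.0 + np.exp(-X))

y = generate_logistic_dataset(100)
print(y.shape, y.dtype)  # (100,) bool
```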
DATASET_GENERATORS = {
'perturbed_logarithm': generate_perturbed_logarithm_dataset,
'logistic': generate_logistic_dataset
}
def bench_isotonic_regression(Y):
"""
Runs a single iteration of isotonic regression on the input data,
and reports the total time taken (in seconds).
"""
gc.collect()
tstart = datetime.now()
isotonic_regression(Y)
delta = datetime.now() - tstart
return total_seconds(delta)
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description="Isotonic Regression benchmark tool")
parser.add_argument('--iterations', type=int, required=True,
help="Number of iterations to average timings over "
"for each problem size")
parser.add_argument('--log_min_problem_size', type=int, required=True,
help="Base 10 logarithm of the minimum problem size")
parser.add_argument('--log_max_problem_size', type=int, required=True,
help="Base 10 logarithm of the maximum problem size")
parser.add_argument('--show_plot', action='store_true',
help="Plot timing output with matplotlib")
parser.add_argument('--dataset', choices=DATASET_GENERATORS.keys(),
required=True)
args = parser.parse_args()
timings = []
for exponent in range(args.log_min_problem_size,
args.log_max_problem_size):
n = 10 ** exponent
Y = DATASET_GENERATORS[args.dataset](n)
time_per_iteration = \
[bench_isotonic_regression(Y) for i in range(args.iterations)]
timing = (n, np.mean(time_per_iteration))
timings.append(timing)
# If we're not plotting, dump the timing to stdout
if not args.show_plot:
print(n, np.mean(time_per_iteration))
if args.show_plot:
plt.plot(*zip(*timings))
plt.title("Average time taken running isotonic regression")
plt.xlabel('Number of observations')
plt.ylabel('Time (s)')
plt.axis('tight')
plt.loglog()
plt.show()
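For context on what is being timed, here is a hedged, pure-Python sketch of the pool-adjacent-violators algorithm behind isotonic regression (illustrative only — not scikit-learn's implementation):

```python
def pava(y):
    # Merge adjacent blocks whenever they violate monotonicity, replacing
    # them with their weighted mean; the result is the least-squares
    # non-decreasing fit to y.
    blocks = []  # each entry is [mean, weight]
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for mean, w in blocks:
        fit.extend([mean] * w)
    return fit

print(pava([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```

The merging loop is what makes the algorithm linear-time in practice, which is why the benchmark above can afford problem sizes of 10^n.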
| bsd-3-clause |
CforED/Machine-Learning | examples/linear_model/plot_ols_3d.py | 350 | 2040 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Sparsity Example: Fitting only features 1 and 2
=========================================================
Features 1 and 2 of the diabetes-dataset are fitted and
plotted below. It illustrates that although feature 2
has a strong coefficient on the full model, it does not
give us much regarding `y` when compared to just feature 1
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn import datasets, linear_model
diabetes = datasets.load_diabetes()
indices = (0, 1)
X_train = diabetes.data[:-20, indices]
X_test = diabetes.data[-20:, indices]
y_train = diabetes.target[:-20]
y_test = diabetes.target[-20:]
ols = linear_model.LinearRegression()
ols.fit(X_train, y_train)
###############################################################################
# Plot the figure
def plot_figs(fig_num, elev, azim, X_train, clf):
fig = plt.figure(fig_num, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, elev=elev, azim=azim)
ax.scatter(X_train[:, 0], X_train[:, 1], y_train, c='k', marker='+')
ax.plot_surface(np.array([[-.1, -.1], [.15, .15]]),
np.array([[-.1, .15], [-.1, .15]]),
clf.predict(np.array([[-.1, -.1, .15, .15],
[-.1, .15, -.1, .15]]).T
).reshape((2, 2)),
alpha=.5)
ax.set_xlabel('X_1')
ax.set_ylabel('X_2')
ax.set_zlabel('Y')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
#Generate the three different figures from different views
elev = 43.5
azim = -110
plot_figs(1, elev, azim, X_train, ols)
elev = -.5
azim = 0
plot_figs(2, elev, azim, X_train, ols)
elev = -.5
azim = 90
plot_figs(3, elev, azim, X_train, ols)
plt.show()
| bsd-3-clause |
karvenka/sp17-i524 | project/S17-IO-3012/code/bin/benchmark_replicas_mapreduce.py | 19 | 5506 | import matplotlib.pyplot as plt
import sys
import pandas as pd
def get_parm():
"""retrieves mandatory parameter to program
@param: none
@type: n/a
"""
try:
return sys.argv[1]
except:
print ('Must enter file name as parameter')
exit()
def read_file(filename):
"""reads a file into a pandas dataframe
@param: filename The name of the file to read
@type: string
"""
try:
return pd.read_csv(filename)
except:
print ('Error retrieving file')
exit()
def select_data(benchmark_df, cloud, config_replicas, mongos_instances, shard_replicas, shards_per_replica):
benchmark_df = benchmark_df[benchmark_df.mongo_version == 34]
benchmark_df = benchmark_df[benchmark_df.test_size == "large"]
if cloud != 'X':
benchmark_df = benchmark_df[benchmark_df.cloud == cloud]
if config_replicas != 'X':
benchmark_df = benchmark_df[benchmark_df.config_replicas == config_replicas]
if mongos_instances != 'X':
benchmark_df = benchmark_df[benchmark_df.mongos_instances == mongos_instances]
if shard_replicas != 'X':
benchmark_df = benchmark_df[benchmark_df.shard_replicas == shard_replicas]
if shards_per_replica != 'X':
benchmark_df = benchmark_df[benchmark_df.shards_per_replica == shards_per_replica]
# benchmark_df1 = benchmark_df.groupby(['cloud', 'config_replicas', 'mongos_instances', 'shard_replicas', 'shards_per_replica']).mean()
# http://stackoverflow.com/questions/10373660/converting-a-pandas-groupby-object-to-dataframe
benchmark_df = benchmark_df.groupby(
['cloud', 'config_replicas', 'mongos_instances', 'shard_replicas', 'shards_per_replica'], as_index=False).mean()
# http://stackoverflow.com/questions/10373660/converting-a-pandas-groupby-object-to-dataframe
# print benchmark_df1['shard_replicas']
# print benchmark_df1
# print benchmark_df
benchmark_df = benchmark_df.sort_values(by='shard_replicas', ascending=1)
return benchmark_df
def make_figure(mapreduce_seconds_kilo, replicas_kilo, mapreduce_seconds_chameleon, replicas_chameleon, mapreduce_seconds_jetstream, replicas_jetstream):
"""formats and creates a line chart
@param1: mapreduce_seconds_kilo Array with mapreduce_seconds from kilo
@type: numpy array
@param2: replicas_kilo Array with replicas from kilo
@type: numpy array
@param3: mapreduce_seconds_chameleon Array with mapreduce_seconds from chameleon
@type: numpy array
@param4: replicas_chameleon Array with replicas from chameleon
@type: numpy array
@param5: mapreduce_seconds_jetstream Array with mapreduce_seconds from jetstream
@type: numpy array
@param6: replicas_jetstream Array with replicas from jetstream
@type: numpy array
"""
fig = plt.figure()
#plt.title('Average Find Command Runtime by Shard Replication Factor')
plt.ylabel('Runtime in Seconds')
plt.xlabel('Degree of Replication Per Set')
# Make the chart
plt.plot(replicas_kilo, mapreduce_seconds_kilo, label='Kilo Cloud')
plt.plot(replicas_chameleon, mapreduce_seconds_chameleon, label='Chameleon Cloud')
plt.plot(replicas_jetstream, mapreduce_seconds_jetstream, label='Jetstream Cloud')
# http://stackoverflow.com/questions/11744990/how-to-set-auto-for-upper-limit-but-keep-a-fixed-lower-limit-with-matplotlib
plt.ylim(ymin=0)
plt.legend(loc='best')
# Show the chart (for testing)
# plt.show()
# Save the chart
fig.savefig('../report/replica_mapreduce.png')
# Run the program by calling the functions
if __name__ == "__main__":
filename = get_parm()
benchmark_df = read_file(filename)
cloud = 'kilo'
config_replicas = 1
mongos_instances = 1
shard_replicas = 1
shards_per_replica = 'X'
select_df = select_data(benchmark_df, cloud, config_replicas, mongos_instances, shard_replicas, shards_per_replica)
# http://stackoverflow.com/questions/31791476/pandas-dataframe-to-numpy-array-valueerror
# percentage death=\
mapreduce_seconds_kilo = select_df.as_matrix(columns=[select_df.columns[8]])
replicas_kilo = select_df.as_matrix(columns=[select_df.columns[4]])
# http://stackoverflow.com/questions/31791476/pandas-dataframe-to-numpy-array-valueerror
cloud = 'chameleon'
config_replicas = 1
mongos_instances = 1
shard_replicas = 1
shards_per_replica = 'X'
select_df = select_data(benchmark_df, cloud, config_replicas, mongos_instances, shard_replicas, shards_per_replica)
# http://stackoverflow.com/questions/31791476/pandas-dataframe-to-numpy-array-valueerror
# percentage death=\
mapreduce_seconds_chameleon = select_df.as_matrix(columns=[select_df.columns[8]])
replicas_chameleon = select_df.as_matrix(columns=[select_df.columns[4]])
# http://stackoverflow.com/questions/31791476/pandas-dataframe-to-numpy-array-valueerror
cloud = 'jetstream'
config_replicas = 1
mongos_instances = 1
shard_replicas = 1
shards_per_replica = 'X'
select_df = select_data(benchmark_df, cloud, config_replicas, mongos_instances, shard_replicas, shards_per_replica)
# http://stackoverflow.com/questions/31791476/pandas-dataframe-to-numpy-array-valueerror
# percentage death=\
mapreduce_seconds_jetstream = select_df.as_matrix(columns=[select_df.columns[8]])
replicas_jetstream = select_df.as_matrix(columns=[select_df.columns[4]])
# http://stackoverflow.com/questions/31791476/pandas-dataframe-to-numpy-array-valueerror
make_figure(mapreduce_seconds_kilo, replicas_kilo, mapreduce_seconds_chameleon, replicas_chameleon, mapreduce_seconds_jetstream, replicas_jetstream)
| apache-2.0 |
Averroes/statsmodels | statsmodels/examples/run_all.py | 34 | 1984 | '''run all examples to make sure we don't get an exception
Note:
If an example contains plt.show(), then all plot windows have to be closed
manually, at least in my setup.
uncomment plt.show() to show all plot windows
'''
from __future__ import print_function
from statsmodels.compat.python import lzip, input
import matplotlib.pyplot as plt #matplotlib is required for many examples
stop_on_error = True
filelist = ['example_glsar.py', 'example_wls.py', 'example_gls.py',
'example_glm.py', 'example_ols_tftest.py', #'example_rpy.py',
'example_ols.py', 'example_ols_minimal.py', 'example_rlm.py',
'example_discrete.py', 'example_predict.py',
'example_ols_table.py',
'tut_ols.py', 'tut_ols_rlm.py', 'tut_ols_wls.py']
use_glob = True
if use_glob:
import glob
filelist = glob.glob('*.py')
print(lzip(range(len(filelist)), filelist))
for fname in ['run_all.py', 'example_rpy.py']:
filelist.remove(fname)
#filelist = filelist[15:]
#temporarily disable show
plt_show = plt.show
def noop(*args):
pass
plt.show = noop
cont = input("""Are you sure you want to run all of the examples?
This is done mainly to check that they are up to date.
(y/n) >>> """)
has_errors = []
if 'y' in cont.lower():
for run_all_f in filelist:
try:
print("\n\nExecuting example file", run_all_f)
print("-----------------------" + "-"*len(run_all_f))
exec(open(run_all_f).read())
except:
#f might be overwritten in the executed file
print("**********************" + "*"*len(run_all_f))
print("ERROR in example file", run_all_f)
print("**********************" + "*"*len(run_all_f))
has_errors.append(run_all_f)
if stop_on_error:
raise
print('\nModules that raised exception:')
print(has_errors)
#reenable show after closing windows
plt.close('all')
plt.show = plt_show
plt.show()
| bsd-3-clause |
bikong2/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | 57 | 8062 | import numpy as np
import scipy.sparse as sp
from sklearn.utils.testing import (assert_array_almost_equal, assert_less,
assert_equal, assert_not_equal,
assert_raises)
from sklearn.decomposition import PCA, KernelPCA
from sklearn.datasets import make_circles
from sklearn.linear_model import Perceptron
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.metrics.pairwise import rbf_kernel
def test_kernel_pca():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
def histogram(x, y, **kwargs):
# Histogram kernel implemented as a callable.
assert_equal(kwargs, {}) # no kernel_params that we didn't ask for
return np.minimum(x, y).sum()
for eigen_solver in ("auto", "dense", "arpack"):
for kernel in ("linear", "rbf", "poly", histogram):
# histogram kernel produces singular matrix inside linalg.solve
# XXX use a least-squares approximation?
inv = not callable(kernel)
# transform fit data
kpca = KernelPCA(4, kernel=kernel, eigen_solver=eigen_solver,
fit_inverse_transform=inv)
X_fit_transformed = kpca.fit_transform(X_fit)
X_fit_transformed2 = kpca.fit(X_fit).transform(X_fit)
assert_array_almost_equal(np.abs(X_fit_transformed),
np.abs(X_fit_transformed2))
# non-regression test: previously, gamma would be 0 by default,
# forcing all eigenvalues to 0 under the poly kernel
assert_not_equal(X_fit_transformed.size, 0)
# transform new data
X_pred_transformed = kpca.transform(X_pred)
assert_equal(X_pred_transformed.shape[1],
X_fit_transformed.shape[1])
# inverse transform
if inv:
X_pred2 = kpca.inverse_transform(X_pred_transformed)
assert_equal(X_pred2.shape, X_pred.shape)
def test_invalid_parameters():
assert_raises(ValueError, KernelPCA, 10, fit_inverse_transform=True,
kernel='precomputed')
def test_kernel_pca_sparse():
rng = np.random.RandomState(0)
X_fit = sp.csr_matrix(rng.random_sample((5, 4)))
X_pred = sp.csr_matrix(rng.random_sample((2, 4)))
for eigen_solver in ("auto", "arpack"):
for kernel in ("linear", "rbf", "poly"):
# transform fit data
kpca = KernelPCA(4, kernel=kernel, eigen_solver=eigen_solver,
fit_inverse_transform=False)
X_fit_transformed = kpca.fit_transform(X_fit)
X_fit_transformed2 = kpca.fit(X_fit).transform(X_fit)
assert_array_almost_equal(np.abs(X_fit_transformed),
np.abs(X_fit_transformed2))
# transform new data
X_pred_transformed = kpca.transform(X_pred)
assert_equal(X_pred_transformed.shape[1],
X_fit_transformed.shape[1])
# inverse transform
# X_pred2 = kpca.inverse_transform(X_pred_transformed)
# assert_equal(X_pred2.shape, X_pred.shape)
def test_kernel_pca_linear_kernel():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
# for a linear kernel, kernel PCA should find the same projection as PCA
# modulo the sign (direction)
# fit only the first four components: fifth is near zero eigenvalue, so
# can be trimmed due to roundoff error
assert_array_almost_equal(
np.abs(KernelPCA(4).fit(X_fit).transform(X_pred)),
np.abs(PCA(4).fit(X_fit).transform(X_pred)))
def test_kernel_pca_n_components():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
for eigen_solver in ("dense", "arpack"):
for c in [1, 2, 4]:
kpca = KernelPCA(n_components=c, eigen_solver=eigen_solver)
shape = kpca.fit(X_fit).transform(X_pred).shape
assert_equal(shape, (2, c))
def test_remove_zero_eig():
X = np.array([[1 - 1e-30, 1], [1, 1], [1, 1 - 1e-20]])
# n_components=None (default) => remove_zero_eig is True
kpca = KernelPCA()
Xt = kpca.fit_transform(X)
assert_equal(Xt.shape, (3, 0))
kpca = KernelPCA(n_components=2)
Xt = kpca.fit_transform(X)
assert_equal(Xt.shape, (3, 2))
kpca = KernelPCA(n_components=2, remove_zero_eig=True)
Xt = kpca.fit_transform(X)
assert_equal(Xt.shape, (3, 0))
def test_kernel_pca_precomputed():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
for eigen_solver in ("dense", "arpack"):
X_kpca = KernelPCA(4, eigen_solver=eigen_solver).\
fit(X_fit).transform(X_pred)
X_kpca2 = KernelPCA(
4, eigen_solver=eigen_solver, kernel='precomputed').fit(
np.dot(X_fit, X_fit.T)).transform(np.dot(X_pred, X_fit.T))
X_kpca_train = KernelPCA(
4, eigen_solver=eigen_solver,
kernel='precomputed').fit_transform(np.dot(X_fit, X_fit.T))
X_kpca_train2 = KernelPCA(
4, eigen_solver=eigen_solver, kernel='precomputed').fit(
np.dot(X_fit, X_fit.T)).transform(np.dot(X_fit, X_fit.T))
assert_array_almost_equal(np.abs(X_kpca),
np.abs(X_kpca2))
assert_array_almost_equal(np.abs(X_kpca_train),
np.abs(X_kpca_train2))
def test_kernel_pca_invalid_kernel():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((2, 4))
kpca = KernelPCA(kernel="tototiti")
assert_raises(ValueError, kpca.fit, X_fit)
def test_gridsearch_pipeline():
# Test if we can do a grid-search to find parameters to separate
# circles with a perceptron model.
X, y = make_circles(n_samples=400, factor=.3, noise=.05,
random_state=0)
kpca = KernelPCA(kernel="rbf", n_components=2)
pipeline = Pipeline([("kernel_pca", kpca), ("Perceptron", Perceptron())])
param_grid = dict(kernel_pca__gamma=2. ** np.arange(-2, 2))
grid_search = GridSearchCV(pipeline, cv=3, param_grid=param_grid)
grid_search.fit(X, y)
assert_equal(grid_search.best_score_, 1)
def test_gridsearch_pipeline_precomputed():
# Test if we can do a grid-search to find parameters to separate
# circles with a perceptron model using a precomputed kernel.
X, y = make_circles(n_samples=400, factor=.3, noise=.05,
random_state=0)
kpca = KernelPCA(kernel="precomputed", n_components=2)
pipeline = Pipeline([("kernel_pca", kpca), ("Perceptron", Perceptron())])
param_grid = dict(Perceptron__n_iter=np.arange(1, 5))
grid_search = GridSearchCV(pipeline, cv=3, param_grid=param_grid)
X_kernel = rbf_kernel(X, gamma=2.)
grid_search.fit(X_kernel, y)
assert_equal(grid_search.best_score_, 1)
def test_nested_circles():
# Test the linear separability of the first 2D KPCA transform
X, y = make_circles(n_samples=400, factor=.3, noise=.05,
random_state=0)
# 2D nested circles are not linearly separable
train_score = Perceptron().fit(X, y).score(X, y)
assert_less(train_score, 0.8)
# Project the circles data into the first 2 components of a RBF Kernel
# PCA model.
# Note that the gamma value is data dependent. If this test breaks
# and the gamma value has to be updated, the Kernel PCA example will
# have to be updated too.
kpca = KernelPCA(kernel="rbf", n_components=2,
fit_inverse_transform=True, gamma=2.)
X_kpca = kpca.fit_transform(X)
# The data is perfectly linearly separable in that space
train_score = Perceptron().fit(X_kpca, y).score(X_kpca, y)
assert_equal(train_score, 1.0)
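Several tests above precompute kernel matrices with rbf_kernel; a minimal NumPy sketch of that kernel (illustrative, not sklearn's optimized implementation) may help when reading the precomputed-kernel cases:

```python
import numpy as np

def rbf_kernel_sketch(X, Y, gamma):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel_sketch(X, X, gamma=2.)
print(K[0, 0], K[0, 1])  # 1.0 and exp(-2) ~ 0.135
```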
| bsd-3-clause |
TickSmith/tickvault-python-api | setup.py | 1 | 1907 | # ----------------------------------------------------------------------
# setup.py -- tksapi setup script
#
# Copyright (C) 2017, TickSmith Corp.
# ----------------------------------------------------------------------
from setuptools import find_packages, setup
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
def read(*parts):
with open(path.join(here,*parts), encoding='utf-8') as f:
return f.read()
def main():
setup_options = {
"name": "tickvault-python-api",
"version": "1.2.5",
"description": "TickVault Python Query API",
"long_description": read("README.md"),
"author": "TickSmith Corp.",
"author_email": "support@ticksmith.com",
"url": "https://github.com/ticksmith/tickvault-python-api",
"license": "MIT",
"install_requires": ['requests', 'ujson', 'numpy', 'pandas', 'pyparsing'],
"packages": find_packages(),
"data_files": [('', ['LICENSE.txt'])],
"platforms": ["any"],
"zip_safe": True,
"keywords": "tickvault python api client",
"classifiers": [
'Development Status :: 4 - Beta',
'Environment :: Console',
'Intended Audience :: Developers',
'Intended Audience :: Financial and Insurance Industry',
'License :: OSI Approved :: MIT License',
'Natural Language :: English',
'Operating System :: OS Independent',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Topic :: Software Development',
'Topic :: Software Development :: Libraries'
]
}
# Run the setup
setup(**setup_options)
if __name__ == '__main__':
main()
| mit |
SKIRT/PTS | core/basics/colour.py | 1 | 13317 | #!/usr/bin/env python
# -*- coding: utf8 -*-
# *****************************************************************
# ** PTS -- Python Toolkit for working with SKIRT **
# ** © Astronomical Observatory, Ghent University **
# *****************************************************************
## \package pts.core.basics.colour Contains the Colour class.
# Compatibility between python 2 and 3
from __future__ import print_function
# Import standard modules
from collections import OrderedDict
from matplotlib import colors as mcolors
# Get matplotlib colors
mpl_colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)
# Sort colors by hue, saturation, value and name.
#by_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])), name) for name, color in colors.items())
#mpl_color_names = [name for hsv, name in by_hsv]
# -----------------------------------------------------------------
normal = "\033[38;5;%sm"
bold = "\033[1;38;5;%sm"
reset = "\033[0m"
# -----------------------------------------------------------------
def hex_to_rgb(hex):
"""
Convert a hex colour string to an RGB triplet, e.g.
"#FFFFFF" -> [255,255,255]
"""
# Pass 16 to the integer function for change of base
return [int(hex[i:i+2], 16) for i in range(1,6,2)]
# -----------------------------------------------------------------
def rgb_to_hex(rgb):
"""
[255,255,255] -> "#ffffff" (hex digits come out lower-case via the {0:x} format)
"""
# Components need to be integers for hex to make sense
rgb = [int(x) for x in rgb]
return "#"+"".join(["0{0:x}".format(v) if v < 16 else
"{0:x}".format(v) for v in rgb])
# -----------------------------------------------------------------
def parse_colour(string):
"""
This function ...
:param string:
:return:
"""
from ..tools import types
if types.is_string_type(string):
if string.startswith("#"): return Colour.from_hex(string)
elif list(string).count(",") == 2:
red = float(string.split(",")[0])
green = float(string.split(",")[1])
blue = float(string.split(",")[2])
return Colour.from_rgb(red, green, blue)
else: return Colour.from_name(string)
elif types.is_sequence(string): return Colour.from_rgb(string[0], string[1], string[2])
elif isinstance(string, Colour): return string
else: raise ValueError("Invalid input")
# -----------------------------------------------------------------
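The dispatch in `parse_colour` (hex string vs. `"r,g,b"` string vs. colour name) can be sketched without the package's `types` helpers; `classify_colour_spec` is a hypothetical stand-in that only reports which branch would be taken:

```python
def classify_colour_spec(spec):
    # Mirrors the string branching in parse_colour: returns which
    # constructor would be used for a given string.
    if spec.startswith("#"):
        return "hex"      # -> Colour.from_hex
    elif spec.count(",") == 2:
        return "rgb"      # -> Colour.from_rgb
    else:
        return "name"     # -> Colour.from_name

assert classify_colour_spec("#FF0000") == "hex"
assert classify_colour_spec("255,0,0") == "rgb"
assert classify_colour_spec("red") == "name"
```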
# 16 basic HTML color names (HTML 4.01 specification)
predefined = OrderedDict()
predefined["black"] = ("#000000", (0,0,0))
predefined["white"] = ("#FFFFFF", (255,255,255))
predefined["red"] = ("#FF0000", (255,0,0))
predefined["lime"] = ("#00FF00", (0,255,0))
predefined["blue"] = ("#0000FF", (0,0,255))
predefined["yellow"] = ("#FFFF00", (255,255,0))
predefined["cyan"] = ("#00FFFF", (0,255,255))
predefined["aqua"] = predefined["cyan"]
predefined["magenta"] = ("#FF00FF", (255,0,255))
predefined["fuchsia"] = predefined["magenta"]
predefined["silver"] = ("#C0C0C0", (192,192,192))
predefined["gray"] = ("#808080", (128,128,128))
predefined["maroon"] = ("#800000", (128,0,0))
predefined["olive"] = ("#808000", (128,128,0))
predefined["green"] = ("#008000", (0,128,0))
predefined["purple"] = ("#800080", (128,0,128))
predefined["teal"] = ("#008080", (0,128,128))
predefined["navy"] = ("#000080", (0,0,128))
# X11 colours
predefined["alice_blue"] = ("#F0F8FF", None)
predefined["antique_white"] = ("#FAEBD7", None)
#predefined["aqua"] = ("#00FFFF", None)
predefined["aquamarine"] = ("#7FFFD4", None)
predefined["azure"] = ("#F0FFFF", None)
predefined["beige"] = ("#F5F5DC", None)
predefined["bisque"] = ("#FFE4C4", None)
#predefined["black"] = ("#000000", None)
predefined["blanched_almond"] = ("#FFEBCD", None)
#predefined["blue"] = ("#0000FF", None)
predefined["blue_violet"] = ("#8A2BE2", None)
predefined["brown"] = ("#A52A2A", None)
predefined["burlywood"] = ("#DEB887", None)
predefined["cadet_blue"] = ("#5F9EA0", None)
predefined["chartreuse"] = ("#7FFF00", None)
predefined["chocolate"] = ("#D2691E", None)
predefined["coral"] = ("#FF7F50", None)
predefined["cornflower"] = ("#6495ED", None)
predefined["cornsilk"] = ("#FFF8DC", None)
predefined["crimson"] = ("#DC143C", None)
#predefined["cyan"] = ("#00FFFF", None)
predefined["dark_blue"] = ("#00008B", None)
predefined["dark_cyan"] = ("#008B8B", None)
predefined["dark_goldenrod"] = ("#B8860B", None)
predefined["dark_gray"] = ("#A9A9A9", None)
predefined["dark_green"] = ("#006400", None)
predefined["dark_khaki"] = ("#BDB76B", None)
predefined["dark_magenta"] = ("#8B008B", None)
predefined["dark_olive_green"] = ("#556B2F", None)
predefined["dark_orange"] = ("#FF8C00", None)
predefined["dark_orchid"] = ("#9932CC", None)
predefined["dark_red"] = ("#8B0000", None)
predefined["dark_salmon"] = ("#E9967A", None)
predefined["dark_sea_green"] = ("#8FBC8F", None)
predefined["dark_slate_blue"] = ("#483D8B", None)
predefined["dark_slate_gray"] = ("#2F4F4F", None)
predefined["dark_turquoise"] = ("#00CED1", None)
predefined["dark_violet"] = ("#9400D3", None)
predefined["deep_pink"] = ("#FF1493", None)
predefined["deep_sky_blue"] = ("#00BFFF", None)
predefined["dim_gray"] = ("#696969", None)
predefined["dodger_blue"] = ("#1E90FF", None)
predefined["firebrick"] = ("#B22222", None)
predefined["floral_white"] = ("#FFFAF0", None)
predefined["forest_green"] = ("#228B22", None)
#predefined["fuchsia"] = ("#FF00FF", None)
predefined["gainsboro"] = ("#DCDCDC", None)
predefined["ghost_white"] = ("#F8F8FF", None)
predefined["gold"] = ("#FFD700", None)
predefined["goldenrod"] = ("#DAA520", None)
#predefined["gray"] = ("#BEBEBE", None)
predefined["web_gray"] = ("#808080", None)
#predefined["green"] = ("#00FF00", None)
predefined["web_green"] = ("#008000", None)
predefined["green_yellow"] = ("#ADFF2F", None)
predefined["honeydew"] = ("#F0FFF0", None)
predefined["hot_pink"] = ("#FF69B4", None)
predefined["indian_red"] = ("#CD5C5C", None)
predefined["indigo"] = ("#4B0082", None)
predefined["ivory"] = ("#FFFFF0", None)
predefined["khaki"] = ("#F0E68C", None)
predefined["lavender"] = ("#E6E6FA", None)
predefined["lavender_blush"] = ("#FFF0F5", None)
predefined["lawn_green"] = ("#7CFC00", None)
predefined["lemon_chiffon"] = ("#FFFACD", None)
predefined["light_blue"] = ("#ADD8E6", None)
predefined["light_coral"] = ("#F08080", None)
predefined["light_cyan"] = ("#E0FFFF", None)
predefined["light_goldenrod"] = ("#FAFAD2", None)
predefined["light_gray"] = ("#D3D3D3", None)
predefined["light_green"] = ("#90EE90", None)
predefined["light_pink"] = ("#FFB6C1", None)
predefined["light_salmon"] = ("#FFA07A", None)
predefined["light_sea_green"] = ("#20B2AA", None)
predefined["light_sky_blue"] = ("#87CEFA", None)
predefined["light_slate_gray"] = ("#778899", None)
predefined["light_steel_blue"] = ("#B0C4DE", None)
predefined["light_yellow"] = ("#FFFFE0", None)
#predefined["lime"] = ("#00FF00", None)
predefined["lime_green"] = ("#32CD32", None)
predefined["linen"] = ("#FAF0E6", None)
#predefined["magenta"] = ("#FF00FF", None)
#predefined["maroon"] = ("#B03060", None)
predefined["web_maroon"] = ("#7F0000", None)
predefined["medium_aquamarine"] = ("#66CDAA", None)
predefined["medium_blue"] = ("#0000CD", None)
predefined["medium_orchid"] = ("#BA55D3", None)
predefined["medium_purple"] = ("#9370DB", None)
predefined["medium_sea_green"] = ("#3CB371", None)
predefined["medium_slate_blue"] = ("#7B68EE", None)
predefined["medium_spring_green"] = ("#00FA9A", None)
predefined["medium_turquoise"] = ("#48D1CC", None)
predefined["medium_violet_red"] = ("#C71585", None)
predefined["midnight_blue"] = ("#191970", None)
predefined["mint_cream"] = ("#F5FFFA", None)
predefined["misty_rose"] = ("#FFE4E1", None)
predefined["moccasin"] = ("#FFE4B5", None)
predefined["navajo_white"] = ("#FFDEAD", None)
predefined["navy_blue"] = ("#000080", None)
predefined["old_lace"] = ("#FDF5E6", None)
#predefined["olive"] = ("#808000", None)
predefined["olive_drab"] = ("#6B8E23", None)
predefined["orange"] = ("#FFA500", None)
predefined["orange_red"] = ("#FF4500", None)
predefined["orchid"] = ("#DA70D6", None)
predefined["pale_goldenrod"] = ("#EEE8AA", None)
predefined["pale_green"] = ("#98FB98", None)
predefined["pale_turquoise"] = ("#AFEEEE", None)
predefined["pale_violet_red"] = ("#DB7093", None)
predefined["papaya_whip"] = ("#FFEFD5", None)
predefined["peach_puff"] = ("#FFDAB9", None)
predefined["peru"] = ("#CD853F", None)
predefined["pink"] = ("#FFC0CB", None)
predefined["plum"] = ("#DDA0DD", None)
predefined["powder_blue"] = ("#B0E0E6", None)
#predefined["purple"] = ("#A020F0", None)
predefined["web_purple"] = ("#7F007F", None)
predefined["rebecca_purple"] = ("#663399", None)
#predefined["red"] = ("#FF0000", None)
predefined["rosy_brown"] = ("#BC8F8F", None)
predefined["royal_blue"] = ("#4169E1", None)
predefined["saddle_brown"] = ("#8B4513", None)
predefined["salmon"] = ("#FA8072", None)
predefined["sandy_brown"] = ("#F4A460", None)
predefined["sea_green"] = ("#2E8B57", None)
predefined["seashell"] = ("#FFF5EE", None)
predefined["sienna"] = ("#A0522D", None)
#predefined["silver"] = ("#C0C0C0", None)
predefined["sky_blue"] = ("#87CEEB", None)
predefined["slate_blue"] = ("#6A5ACD", None)
predefined["slate_gray"] = ("#708090", None)
predefined["snow"] = ("#FFFAFA", None)
predefined["spring_green"] = ("#00FF7F", None)
predefined["steel_blue"] = ("#4682B4", None)
predefined["tan"] = ("#D2B48C", None)
#predefined["teal"] = ("#008080", None)
predefined["thistle"] = ("#D8BFD8", None)
predefined["tomato"] = ("#FF6347", None)
predefined["turquoise"] = ("#40E0D0", None)
predefined["violet"] = ("#EE82EE", None)
predefined["wheat"] = ("#F5DEB3", None)
#predefined["white"] = ("#FFFFFF", None)
predefined["white_smoke"] = ("#F5F5F5", None)
#predefined["yellow"] = ("#FFFF00", None)
predefined["yellow_green"] = ("#9ACD3", None)
# -----------------------------------------------------------------
def get_colour_names():
"""
    This function ...
:return:
"""
return predefined.keys()
# -----------------------------------------------------------------
class Colour(object):
"""
    This class represents an RGB colour.
"""
def __init__(self, red, green, blue):
"""
This function ...
:param red:
:param green:
:param blue:
"""
self.red = red
self.green = green
self.blue = blue
# -----------------------------------------------------------------
@classmethod
def from_rgb(cls, red, green, blue):
"""
This function ...
:param red:
:param green:
:param blue:
:return:
"""
return cls(red, green, blue)
# -----------------------------------------------------------------
@classmethod
def from_hex(cls, hex):
"""
This function ...
:param hex:
:return:
"""
red, green, blue = hex_to_rgb(hex)
return cls.from_rgb(red, green, blue)
# -----------------------------------------------------------------
@classmethod
def from_name(cls, name):
"""
:param name:
:return:
"""
if name.lower() in predefined: return cls.from_hex(predefined[name.lower()][0])
elif name in mpl_colors: return cls.from_hex(mpl_colors[name])
else: raise ValueError("Colour '" + name + "' not recognized")
# -----------------------------------------------------------------
@property
def rgb(self):
return self.red, self.green, self.blue
# -----------------------------------------------------------------
@property
def hex(self):
return rgb_to_hex([self.red, self.green, self.blue])
# -----------------------------------------------------------------
@property
def hex1(self):
return self.hex[1:3]
# -----------------------------------------------------------------
@property
def hex2(self):
return self.hex[3:5]
# -----------------------------------------------------------------
@property
def hex3(self):
return self.hex[5:]
# -----------------------------------------------------------------
@property
def hex_slashes(self):
return self.hex1 + "/" + self.hex2 + "/" + self.hex3
# -----------------------------------------------------------------
def __str__(self):
"""
This function ...
:return:
"""
return self.hex
# -----------------------------------------------------------------
# def __repr__(self):
#
# """
# This function ...
# :return:
# """
#
# # ## TODO: doesn't really work: only shows red or?
# # #string = '\e]4;1;rgb:' + self.hex1 + '/' + self.hex2 + '/' + self.hex3 + '\e\\\e[31m██ = ' + self.hex + '\e[m' + '\e]104\a'
# # #return string.replace("\e", "\033")
# #
# # #string = normal + self.hex1 + "/" + self.hex2 + "/" + self.hex3 + "██ = " + self.hex + reset
# #
# # #i = 0
# # #string = (normal + "%s" + reset) % (i, self.hex_slashes)
# # string = (normal + "%s" + reset) % self.hex_slashes
# # return string
# -----------------------------------------------------------------
| agpl-3.0 |
arahuja/scikit-learn | examples/mixture/plot_gmm_sin.py | 248 | 2747 | """
=================================
Gaussian Mixture Model Sine Curve
=================================
This example highlights the advantages of the Dirichlet Process:
complexity control and dealing with sparse data. The dataset is formed
by 100 points loosely spaced following a noisy sine curve. The fit by
the GMM class, using the expectation-maximization algorithm to fit a
mixture of 10 Gaussian components, finds too-small components and very
little structure. The fits by the Dirichlet process, however, show
that the model can either learn a global structure for the data (small
alpha) or easily interpolate to finding relevant local structure
(large alpha), never falling into the problems shown by the GMM class.
"""
import itertools
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
from sklearn.externals.six.moves import xrange
# Number of samples per component
n_samples = 100
# Generate random sample following a sine curve
np.random.seed(0)
X = np.zeros((n_samples, 2))
step = 4 * np.pi / n_samples
for i in xrange(X.shape[0]):
x = i * step - 6
X[i, 0] = x + np.random.normal(0, 0.1)
X[i, 1] = 3 * (np.sin(x) + np.random.normal(0, .2))
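The point-by-point loop above can also be written in vectorized NumPy. A sketch with the same sample layout (note the random draws come out in a different order than in the loop, so the arrays are not elementwise identical to `X`):

```python
import numpy as np

np.random.seed(0)
n_samples = 100
step = 4 * np.pi / n_samples
xs = np.arange(n_samples) * step - 6                # deterministic x grid
X_vec = np.column_stack([
    xs + np.random.normal(0, 0.1, n_samples),                # jittered x
    3 * (np.sin(xs) + np.random.normal(0, 0.2, n_samples)),  # noisy sine
])
assert X_vec.shape == (n_samples, 2)
```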
color_iter = itertools.cycle(['r', 'g', 'b', 'c', 'm'])
for i, (clf, title) in enumerate([
(mixture.GMM(n_components=10, covariance_type='full', n_iter=100),
"Expectation-maximization"),
(mixture.DPGMM(n_components=10, covariance_type='full', alpha=0.01,
n_iter=100),
"Dirichlet Process,alpha=0.01"),
(mixture.DPGMM(n_components=10, covariance_type='diag', alpha=100.,
n_iter=100),
"Dirichlet Process,alpha=100.")]):
clf.fit(X)
splot = plt.subplot(3, 1, 1 + i)
Y_ = clf.predict(X)
for i, (mean, covar, color) in enumerate(zip(
clf.means_, clf._get_covars(), color_iter)):
v, w = linalg.eigh(covar)
u = w[0] / linalg.norm(w[0])
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
if not np.any(Y_ == i):
continue
plt.scatter(X[Y_ == i, 0], X[Y_ == i, 1], .8, color=color)
# Plot an ellipse to show the Gaussian component
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
ell = mpl.patches.Ellipse(mean, v[0], v[1], 180 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
plt.xlim(-6, 4 * np.pi - 6)
plt.ylim(-5, 5)
plt.title(title)
plt.xticks(())
plt.yticks(())
plt.show()
| bsd-3-clause |
phoebe-project/phoebe2 | tests/nosetests/test_dynamics/test_dynamics_grid.py | 1 | 9061 | """
"""
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
def _keplerian_v_nbody(b, ltte, period, plot=False):
"""
test a single bundle for the phoebe backend's kepler vs nbody dynamics methods
"""
# TODO: loop over ltte=True,False (once keplerian dynamics supports the switch)
b.set_value('dynamics_method', 'bs')
times = np.linspace(0, 5*period, 101)
nb_ts, nb_us, nb_vs, nb_ws, nb_vus, nb_vvs, nb_vws = phoebe.dynamics.nbody.dynamics_from_bundle(b, times, ltte=ltte)
k_ts, k_us, k_vs, k_ws, k_vus, k_vvs, k_vws = phoebe.dynamics.keplerian.dynamics_from_bundle(b, times, ltte=ltte)
assert(np.allclose(nb_ts, k_ts, 1e-8))
for ci in range(len(b.hierarchy.get_stars())):
# TODO: make rtol lower if possible
assert(np.allclose(nb_us[ci], k_us[ci], rtol=1e-5, atol=1e-2))
assert(np.allclose(nb_vs[ci], k_vs[ci], rtol=1e-5, atol=1e-2))
assert(np.allclose(nb_ws[ci], k_ws[ci], rtol=1e-5, atol=1e-2))
# nbody ltte velocities are wrong so only check velocities if ltte off
if not ltte:
assert(np.allclose(nb_vus[ci], k_vus[ci], rtol=1e-5, atol=1e-2))
assert(np.allclose(nb_vvs[ci], k_vvs[ci], rtol=1e-5, atol=1e-2))
assert(np.allclose(nb_vws[ci], k_vws[ci], rtol=1e-5, atol=1e-2))
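The tolerance convention used throughout these checks is NumPy's: `np.allclose(a, b, rtol, atol)` passes when `|a - b| <= atol + rtol * |b|` holds elementwise, so `rtol` dominates for large magnitudes and `atol` for values near zero. A quick illustration:

```python
import numpy as np

a = np.array([1.0, 100.0])
b = np.array([1.005, 100.5])
# |a - b| = [0.005, 0.5]; bound = atol + rtol*|b| = [0.011, 1.006]
assert np.allclose(a, b, rtol=1e-2, atol=1e-3)
# Tightening rtol makes the small-magnitude element exceed its bound.
assert not np.allclose(a, b, rtol=1e-3, atol=1e-3)
```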
def _phoebe_v_photodynam(b, period, plot=False):
"""
test a single bundle for phoebe's nbody vs photodynam via the frontend
"""
times = np.linspace(0, 5*period, 21)
b.add_dataset('orb', times=times, dataset='orb01', component=b.hierarchy.get_stars())
# photodynam and phoebe should have the same nbody defaults... if for some reason that changes,
# then this will probably fail
b.add_compute('photodynam', compute='pdcompute')
# photodynam backend ONLY works with ltte=True, so we will run the phoebe backend with that as well
# TODO: remove distortion_method='nbody' once that is supported
b.set_value('dynamics_method', 'bs')
b.set_value('ltte', True)
b.run_compute('pdcompute', model='pdresults')
b.run_compute('phoebe01', model='phoeberesults')
for comp in b.hierarchy.get_stars():
# TODO: check to see how low we can make atol (or change to rtol?)
# TODO: look into justification of flipping x and y for both dynamics (photodynam & phoebe)
# TODO: why the small discrepancy (visible especially in y, still <1e-11) - possibly a difference in time0 or just a precision limit in the photodynam backend since loading from a file??
if plot:
for k in ['us', 'vs', 'ws', 'vus', 'vvs', 'vws']:
plt.cla()
plt.plot(b.get_value('times', model='phoeberesults', component=comp, unit=u.d), b.get_value(k, model='phoeberesults', component=comp), 'r-')
plt.plot(b.get_value('times', model='phoeberesults', component=comp, unit=u.d), b.get_value(k, model='pdresults', component=comp), 'b-')
diff = abs(b.get_value(k, model='phoeberesults', component=comp) - b.get_value(k, model='pdresults', component=comp))
print("*** max abs: {}".format(max(diff)))
plt.xlabel('t')
plt.ylabel(k)
plt.show()
assert(np.allclose(b.get_value('times', model='phoeberesults', component=comp, unit=u.d), b.get_value('times', model='pdresults', component=comp, unit=u.d), rtol=0, atol=1e-05))
assert(np.allclose(b.get_value('us', model='phoeberesults', component=comp, unit=u.AU), b.get_value('us', model='pdresults', component=comp, unit=u.AU), rtol=0, atol=1e-05))
assert(np.allclose(b.get_value('vs', model='phoeberesults', component=comp, unit=u.AU), b.get_value('vs', model='pdresults', component=comp, unit=u.AU), rtol=0, atol=1e-05))
assert(np.allclose(b.get_value('ws', model='phoeberesults', component=comp, unit=u.AU), b.get_value('ws', model='pdresults', component=comp, unit=u.AU), rtol=0, atol=1e-05))
#assert(np.allclose(b.get_value('vxs', model='phoeberesults', component=comp, unit=u.solRad/u.d), b.get_value('vxs', model='pdresults', component=comp, unit=u.solRad/u.d), rtol=0, atol=1e-05))
#assert(np.allclose(b.get_value('vys', model='phoeberesults', component=comp, unit=u.solRad/u.d), b.get_value('vys', model='pdresults', component=comp, unit=u.solRad/u.d), rtol=0, atol=1e-05))
#assert(np.allclose(b.get_value('vzs', model='phoeberesults', component=comp, unit=u.solRad/u.d), b.get_value('vzs', model='pdresults', component=comp, unit=u.solRad/u.d), rtol=0, atol=1e-05))
def _frontend_v_backend(b, ltte, period, plot=False):
"""
test a single bundle for the frontend vs backend access to both kepler and nbody dynamics
"""
# TODO: loop over ltte=True,False
times = np.linspace(0, 5*period, 101)
b.add_dataset('orb', times=times, dataset='orb01', component=b.hierarchy.get_stars())
b.rename_compute('phoebe01', 'nbody')
b.set_value('dynamics_method', 'bs')
b.set_value('ltte', ltte)
b.add_compute('phoebe', dynamics_method='keplerian', compute='keplerian', ltte=ltte)
# NBODY
# do backend Nbody
b_ts, b_us, b_vs, b_ws, b_vus, b_vvs, b_vws = phoebe.dynamics.nbody.dynamics_from_bundle(b, times, compute='nbody', ltte=ltte)
# do frontend Nbody
b.run_compute('nbody', model='nbodyresults')
for ci,comp in enumerate(b.hierarchy.get_stars()):
# TODO: can we lower tolerance?
assert(np.allclose(b.get_value('times', model='nbodyresults', component=comp, unit=u.d), b_ts, rtol=0, atol=1e-6))
assert(np.allclose(b.get_value('us', model='nbodyresults', component=comp, unit=u.solRad), b_us[ci], rtol=1e-7, atol=1e-4))
assert(np.allclose(b.get_value('vs', model='nbodyresults', component=comp, unit=u.solRad), b_vs[ci], rtol=1e-7, atol=1e-4))
assert(np.allclose(b.get_value('ws', model='nbodyresults', component=comp, unit=u.solRad), b_ws[ci], rtol=1e-7, atol=1e-4))
if not ltte:
assert(np.allclose(b.get_value('vus', model='nbodyresults', component=comp, unit=u.solRad/u.d), b_vus[ci], rtol=1e-7, atol=1e-4))
assert(np.allclose(b.get_value('vvs', model='nbodyresults', component=comp, unit=u.solRad/u.d), b_vvs[ci], rtol=1e-7, atol=1e-4))
assert(np.allclose(b.get_value('vws', model='nbodyresults', component=comp, unit=u.solRad/u.d), b_vws[ci], rtol=1e-7, atol=1e-4))
# KEPLERIAN
# do backend keplerian
b_ts, b_us, b_vs, b_ws, b_vus, b_vvs, b_vws = phoebe.dynamics.keplerian.dynamics_from_bundle(b, times, compute='keplerian', ltte=ltte)
# do frontend keplerian
b.run_compute('keplerian', model='keplerianresults')
for ci,comp in enumerate(b.hierarchy.get_stars()):
# TODO: can we lower tolerance?
assert(np.allclose(b.get_value('times', model='keplerianresults', component=comp, unit=u.d), b_ts, rtol=0, atol=1e-08))
assert(np.allclose(b.get_value('us', model='keplerianresults', component=comp, unit=u.solRad), b_us[ci], rtol=0, atol=1e-08))
assert(np.allclose(b.get_value('vs', model='keplerianresults', component=comp, unit=u.solRad), b_vs[ci], rtol=0, atol=1e-08))
assert(np.allclose(b.get_value('ws', model='keplerianresults', component=comp, unit=u.solRad), b_ws[ci], rtol=0, atol=1e-08))
assert(np.allclose(b.get_value('vus', model='keplerianresults', component=comp, unit=u.solRad/u.d), b_vus[ci], rtol=0, atol=1e-08))
assert(np.allclose(b.get_value('vvs', model='keplerianresults', component=comp, unit=u.solRad/u.d), b_vvs[ci], rtol=0, atol=1e-08))
assert(np.allclose(b.get_value('vws', model='keplerianresults', component=comp, unit=u.solRad/u.d), b_vws[ci], rtol=0, atol=1e-08))
def test_binary(plot=False):
"""
"""
phoebe.devel_on() # required for nbody dynamics
# TODO: once ps.copy is implemented, just send b.copy() to each of these
# system = [sma (AU), period (d)]
system1 = [0.05, 2.575]
system2 = [1., 257.5]
system3 = [40., 65000.]
for system in [system1,system2,system3]:
for q in [0.5,1.]:
for ltte in [True, False]:
print("test_dynamics_grid: sma={}, period={}, q={}, ltte={}".format(system[0], system[1], q, ltte))
b = phoebe.default_binary()
b.get_parameter('dynamics_method')._choices = ['keplerian', 'bs']
b.set_default_unit_all('sma', u.AU)
b.set_default_unit_all('period', u.d)
b.set_value('sma@binary',system[0])
b.set_value('period@binary', system[1])
b.set_value('q', q)
_keplerian_v_nbody(b, ltte, system[1], plot=plot)
_frontend_v_backend(b, ltte, system[1], plot=plot)
phoebe.devel_off() # reset for future tests
if __name__ == '__main__':
logger = phoebe.logger(clevel='INFO')
test_binary(plot=True)
# TODO: create tests for both triple configurations (A--B-C, A-B--C) - these should first be default bundles
| gpl-3.0 |
dpaiton/OpenPV | pv-core/analysis/python/plot_time_stability_all_patches.py | 1 | 10838 | """
Plots the time stability
"""
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import matplotlib.cm as cm
import PVReadWeights as rw
import PVConversions as conv
import scipy.cluster.vq as sp
import math
if len(sys.argv) < 5:
print "usage: time_stability filename on, filename off, filename-on post, filename-off post"
print len(sys.argv)
sys.exit()
w = rw.PVReadWeights(sys.argv[1])
wOff = rw.PVReadWeights(sys.argv[2])
space = 1
d = np.zeros((4,4))
nx = w.nx
ny = w.ny
nxp = w.nxp
nyp = w.nyp
numpat = w.numPatches
nf = w.nf
margin = 10
marginstart = margin
marginend = nx - margin
acount = 0
patchposition = []
def format_coord(x, y):
col = int(x+0.5)
row = int(y+0.5)
x2 = (x / 16.0)
y2 = (y / 16.0)
x = (x / 4.0)
y = (y / 4.0)
if col>=0 and col<numcols and row>=0 and row<numrows:
z = P[row,col]
return 'x=%1.4f, y=%1.4f, z=%1.4f'%(x, y, z)
else:
return 'x=%1.4d, y=%1.4d, x2=%1.4d, y2=%1.4d'%(int(x), int(y), int(x2), int(y2))
k = 16
for ko in range(numpat):
kxOn = conv.kxPos(ko, nx, ny, nf)
kyOn = conv.kyPos(ko, nx, ny, nf)
p = w.next_patch()
poff = wOff.next_patch()
if marginstart < kxOn < marginend:
if marginstart < kyOn < marginend:
acount = acount + 1
         # record the file position of this in-margin patch for later seeks
         patchposition.append(w.file.tell())
total = []
logtotal = []
def k_stability_analysis(k, forwardjump):
w = rw.PVReadWeights(sys.argv[1])
feature = k - 1
count = 0
d = np.zeros((nxp,nyp))
w.rewind()
for ko in np.arange(numpat):
kxOn = conv.kxPos(ko, nx, ny, nf)
kyOn = conv.kyPos(ko, nx, ny, nf)
p = w.next_patch()
if marginstart < kxOn < marginend:
if marginstart < kyOn < marginend:
##########
# Find Valuse of K-cluster[x] Patches
##########
w = rw.PVReadWeights(sys.argv[3])
wOff = rw.PVReadWeights(sys.argv[4])
w.rewind()
wOff.rewind()
patpla = patchposition
lenpat = len(patpla)
number = w.numPatches
count = 0
exp = []
expOff = []
exppn = []
exppnOff = []
body = w.recSize + 4
hs = w.headerSize
filesize = os.path.getsize(sys.argv[3])
bint = filesize / body
bint = bint - forwardjump - 1
if forwardjump == 0:
print "43110"
else:
leap = (body * forwardjump)
w.file.seek(leap, os.SEEK_CUR)
for i in range(bint):
if i == 0:
for j in range(lenpat):
if j == 0:
p = w.next_patch()
pOff = wOff.next_patch()
if len(p) == 0:
print"STOPPEP SUPER EARLY"
sys.exit()
don = p
doff = pOff
d = np.append(don, doff)
p = w.normalize(d)
pn = p
pn = np.reshape(np.matrix(pn),(1,32))
p = np.reshape(np.matrix(p),(32,1))
pm = pn * p
exppn = np.append(exppn, pn)
exp = np.append(exp,pm)
else:
p = w.next_patch()
pOff = wOff.next_patch()
if len(pOff) == 0:
print"STOPPED EARLY"
sys.exit()
don = p
doff = pOff
d = np.append(don, doff)
p = w.normalize(d)
pn = p
pn = np.reshape(np.matrix(pn),(1,32))
p = np.reshape(np.matrix(p),(32,1))
pm = pn * p
exppn = np.append(exppn, pn)
exp = np.append(exp,pm)
else:
count = 0
prejump = body - patpla[lenpat-1] + hs
w.file.seek(prejump, os.SEEK_CUR)
wOff.file.seek(prejump, os.SEEK_CUR)
for j in range(lenpat):
if j == 0:
p = w.next_patch()
pOff = wOff.next_patch()
test = p
if len(test) == 0:
print "stop"
input('Press Enter to Continue')
sys.exit()
don = p
doff = pOff
d = np.append(don, doff)
p = w.normalize(d)
p = np.reshape(np.matrix(p),(32,1))
j1 = 0
j2 = 32
pm = np.matrix(exppn[j1:j2]) * p
exp = np.append(exp,pm)
count += 1
else:
p = w.next_patch()
pOff = wOff.next_patch()
test = pOff
if len(test) == 0:
print "stop"
input('Press Enter to Continue')
sys.exit()
don = p
doff = pOff
d = np.append(don, doff)
p = w.normalize(d)
p = np.reshape(np.matrix(p),(32,1))
j1 = 32 * j
j2 = 32 * (j +1)
pm = np.matrix(exppn[j1:j2]) * p
exp = np.append(exp,pm)
count += 1
##########
# Find Average of K-cluster[x] Weights
##########
thenumber = lenpat
thenumberf = float(thenumber)
patpla = exp
lenpat = len(patpla)
howlong = lenpat / thenumber
total = []
logtotal = []
for i in range(thenumber):
subtotal = []
logsubtotal = []
for j in range(howlong):
if i == 0:
value = patpla[i + (thenumber * j)]
total = np.append(total, value)
#logvalue = patpla[i + (thenumber * j)]
#logvalue = math.log10(logvalue)
#logtotal = np.append(logtotal, logvalue)
else:
value = patpla[i + (thenumber * j)]
subtotal = np.append(subtotal, value)
#logvalue = patpla[i + (thenumber * j)]
#logvalue = math.log10(logvalue)
#logsubtotal = np.append(logsubtotal, logvalue)
if i > 0:
total = total + subtotal
#if i > 0:
#logtotal = logtotal + logsubtotal
total = total / thenumberf
#logtotal = logtotal / thenumberf
global total1
global total2
global total3
global total4
global total5
global total6
global total7
global total8
global total9
global total10
global total11
global total12
global total13
global total14
global total15
global total16
#global logtotal1
#global logtotal2
#global logtotal3
#global logtotal4
#global logtotal5
#global logtotal6
#global logtotal7
#global logtotal8
#global logtotal9
#global logtotal10
#global logtotal11
#global logtotal12
#global logtotal13
#global logtotal14
#global logtotal15
#global logtotal16
if feature == 0:
total1 = total
if feature == 1:
total2 = total
if feature == 2:
total3 = total
if feature == 3:
total4 = total
if feature == 4:
total5 = total
if feature == 5:
total6 = total
if feature == 6:
total7 = total
if feature == 7:
total8 = total
if feature == 8:
total9 = total
if feature == 9:
total10 = total
if feature == 10:
total11 = total
if feature == 11:
total12 = total
if feature == 12:
total13 = total
if feature == 13:
total14 = total
if feature == 14:
total15 = total
if feature == 15:
total16 = total
return
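The stability measure accumulated in `exp` above is an inner product of unit-normalized patch vectors, i.e. a cosine similarity: identical patches score 1 and orthogonal ones score 0. A sketch (assuming `normalize` scales to unit length, which may differ from what `PVReadWeights.normalize` actually does):

```python
import numpy as np

def unit(v):
    # scale a patch vector to unit Euclidean length
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

p0 = unit([1.0, 2.0, 2.0])
assert abs(np.dot(p0, p0) - 1.0) < 1e-12   # self-correlation is exactly 1
p1 = unit([2.0, 1.0, 2.0])
c = np.dot(p0, p1)                         # 8/9: similar but not identical
assert 0.0 < c < 1.0
```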
w = rw.PVReadWeights(sys.argv[3])
body = w.recSize + 4
hs = w.headerSize
filesize = os.path.getsize(sys.argv[3])
bint = filesize / body
print
print "Number of steps = ", bint
forwardjump = input('How many steps forward:')
count = 0
for i in range(16):
i = i + 1
k_stability_analysis(i, forwardjump)
count += 1
print count
if len(total1) == 0:
total1 = .5
if len(total2) == 0:
total2 = .5
if len(total3) == 0:
total3 = .5
if len(total4) == 0:
total4 = .5
if len(total5) == 0:
total5 = .5
if len(total6) == 0:
total6 = .5
if len(total7) == 0:
total7 = .5
if len(total8) == 0:
total8 = .5
if len(total9) == 0:
total9 = .5
if len(total10) == 0:
total10 = .5
if len(total11) == 0:
total11 = .5
if len(total12) == 0:
total12 = .5
if len(total13) == 0:
total13 = .5
if len(total14) == 0:
total14 = .5
if len(total15) == 0:
total15 = .5
if len(total16) == 0:
total16 = .5
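The sixteen parallel `totalN` globals, the `if feature == N:` ladder, and the sixteen empty-result default blocks above could be collapsed into a single dict keyed by cluster index. A minimal sketch of that pattern (not a drop-in replacement for this script):

```python
totals = {}

def record_total(feature, total):
    # replaces the "if feature == N: totalN = total" ladder
    totals[feature] = total

for feature in range(16):
    record_total(feature, [])          # simulate an empty cluster result

# replaces the sixteen "if len(totalN) == 0: totalN = .5" blocks
for feature, total in totals.items():
    if len(total) == 0:
        totals[feature] = 0.5

assert all(totals[f] == 0.5 for f in range(16))
```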
##########
# Plot Time Stability Curve
##########
fig = plt.figure()
ax = fig.add_subplot(111)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111, axisbg='darkslategray')
fig3 = plt.figure()
ax3 = fig3.add_subplot(111, axisbg='darkslategray')
textx = (-7/16.0) * k
texty = (10/16.0) * k
ax.set_title('On and Off K-means')
ax.set_axis_off()
ax.text(textx, texty,'ON\n\nOff', fontsize='xx-large', rotation='horizontal')
ax.text( -5, 12, "Percent %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f" %(kcountper1, kcountper2, kcountper3, kcountper4, kcountper5, kcountper6, kcountper7, kcountper8, kcountper9, kcountper10, kcountper11, kcountper12, kcountper13, kcountper14, kcountper15, kcountper16), fontsize='large', rotation='horizontal')
ax.text(-4, 14, "Patch 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16", fontsize='x-large', rotation='horizontal')
ax.imshow(im, cmap=cm.jet, interpolation='nearest', vmin=w.min, vmax=w.max)
ax2.plot(np.arange(len(total1)), total1, '-o', color='y')
ax2.plot(np.arange(len(total2)), total2, '-o', color='r')
ax2.plot(np.arange(len(total3)), total3, '-o', color='b')
ax2.plot(np.arange(len(total4)), total4, '-o', color='c')
ax2.plot(np.arange(len(total5)), total5, '-o', color='m')
ax2.plot(np.arange(len(total6)), total6, '-o', color='k')
ax2.plot(np.arange(len(total7)), total7, '-o', color='w')
ax2.plot(np.arange(len(total8)), total8, '-o', color='g')
print "yellow = 1, 9"
print "red = 2, 10"
print "blue = 3, 11"
print "cyan = 4, 12"
print "magenta = 5, 13"
print "black = 6, 14"
print "white = 7, 15"
print "green = 8, 16"
ax3.plot(np.arange(len(total9)), total9, '-o', color='y')
ax3.plot(np.arange(len(total10)), total10, '-o', color='r')
ax3.plot(np.arange(len(total11)), total11, '-o', color='b')
ax3.plot(np.arange(len(total12)), total12, '-o', color='c')
ax3.plot(np.arange(len(total13)), total13, '-o', color='m')
ax3.plot(np.arange(len(total14)), total14, '-o', color='k')
ax3.plot(np.arange(len(total15)), total15, '-o', color='w')
ax3.plot(np.arange(len(total16)), total16, '-o', color='g')
ax2.set_xlabel('Time')
ax2.set_ylabel('Avg Correlation')
ax2.set_title('Time Stability k 1-8')
ax2.set_xlim(0, len(total1))
ax2.grid(True)
ax3.set_xlabel('Time')
ax3.set_ylabel('Avg Correlation')
ax3.set_title('Time Stability k 9-16')
ax3.set_xlim(0, len(total1))
ax3.grid(True)
plt.show()
| epl-1.0 |
gimli-org/gimli | pygimli/physics/sNMR/mrs.py | 1 | 30988 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Magnetic resonance sounding module."""
# general modules to import according to standards
import time
import numpy as np
import matplotlib.pyplot as plt
import pygimli as pg
from pygimli.utils import iterateBounds
from pygimli.utils.base import gmat2numpy
from pygimli.viewer.mpl import drawModel1D
# local functions in package
from pygimli.physics.sNMR.modelling import MRS1dBlockQTModelling
from pygimli.physics.sNMR.plotting import showErrorBars, showWC, showT2
class MRS():
"""Magnetic resonance sounding (MRS) manager class.
Attributes
----------
t, q : ndarray - time and pulse moment vectors
data, error : 2d ndarray - data and error cubes
K, z : ndarray - (complex) kernel and its vertical discretization
model, modelL, modelU : vectors - model vector and lower/upper bound to it
Methods
-------
loadMRSI - load MRSI (MRSmatlab format) data
showCube - show any data/error/misfit as data cube (over q and t)
showDataAndError - show data and error cubes
showKernel - show Kernel matrix
createFOP - create forward operator
createInv - create pygimli Inversion instance
run - run block-mono (alternatively smooth-mono) inversion (with bootstrap)
calcMCM - compute model covariance matrix and thus uncertainties
splitModel - return thickness, water content and T2* time from vector
showResult/showResultAndFit - show inversion result (with fit)
runEA - run evolutionary algorithm (GA, PSO etc.) using inspyred
plotPopulation - plot final population of an EA run
"""
def __init__(self, name=None, verbose=True, **kwargs):
"""MRS init with optional data load from mrsi file
Parameters
----------
name : string
Filename with load data and kernel (*.mrsi) or just data (*.mrsd)
verbose : bool
be verbose
kwargs - see :func:`MRS.loadMRSI`.
"""
self.verbose = verbose
self.t, self.q, self.z = None, None, None
self.data, self.error = None, None
self.K, self.fop, self.INV = None, None, None
self.dcube, self.ecube = None, None
self.lLB, self.lUB = None, None
self.nlay = 0
self.model, self.modelL, self.modelU = None, None, None
self.lowerBound = [1.0, 0.0, 0.02] # d, theta, T2*
self.upperBound = [30., 0.45, 1.00] # d, theta, T2*
self.startval = [10., 0.30, 0.20] # d, theta, T2*
self.logpar = False
self.basename = 'new'
self.figs = {}
if name is not None: # load data and kernel
# check for mrsi/d/k
if name[-5:-1].lower() == '.mrs': # mrsi or mrsd
self.loadMRSI(name, **kwargs)
                self.basename = name[:-5]  # note: rstrip('.mrsi') would strip a character set, not the suffix
# elif name[-5:].lower() == '.mrsd':
# self.loadMRSD(name, **kwargs)
elif name.lower().endswith('npz'):
self.loadDataNPZ(name, **kwargs)
else:
self.loadDir(name)
    def __repr__(self):  # for print function
        """String representation."""
        out = "<MRSdata: empty"
        if self.t is not None and self.q is not None:
            out = "<MRSdata: %d qs, %d times" % (len(self.q), len(self.t))
        if hasattr(self.z, '__iter__') and len(self.z) > 0:
            out += ", %d layers" % len(self.z)
        return out + ">"
def loadDataNPZ(self, filename, **kwargs):
"""Load data and kernel from numpy gzip packed file.
The npz file contains the fields: q, t, D, (E), z, K
"""
        self.basename = filename[:-4]  # avoid rstrip('.npz'), which strips a character set
DATA = np.load(filename)
self.q = DATA['q']
self.t = DATA['t']
self.z = np.absolute(DATA['z'])
self.K = DATA['K']
self.dcube = DATA['D']
ndcubet = len(self.dcube[0])
if len(self.dcube) == len(self.q) and ndcubet == len(self.t):
if kwargs.pop('usereal', False):
self.data = np.real(self.dcube.flat)
else:
self.data = np.abs(self.dcube.flat)
if 'E' in DATA:
self.ecube = DATA['E']
else:
self.ecube = np.zeros_like(self.dcube)
self.checkData(**kwargs)
def loadKernelNPZ(self, filename, **kwargs):
"""Load data and kernel from numpy gzip packed file.
The npz file contains the fields: q, t, D, (E), z, K
"""
        self.basename = filename[:-4]  # avoid rstrip('.npz'), which strips a character set
DATA = np.load(filename)
self.q = DATA['pulseMoments']
self.z = np.absolute(DATA['zVector'])
self.K = DATA['kernel']
def loadMRSI(self, filename, **kwargs):
"""Load data, error and kernel from mrsi or mrsd file
Parameters
----------
usereal : bool [False]
use real parts (after data rotation) instead of amplitudes
        mint/maxt : float [0.0/1000.0]
            minimum/maximum time to restrict the time series
"""
from scipy.io import loadmat # loading Matlab mat files
if filename[-5:].lower() == '.mrsd':
idata = None
pl = loadmat(filename, struct_as_record=False,
squeeze_me=True)['proclog']
self.q = np.array([q.q for q in pl.Q])
self.t = pl.Q[0].rx.sig[0].t + pl.Q[0].timing.tau_dead1
nq = len(pl.Q)
nt = len(self.t)
self.dcube = np.zeros((nq, nt))
self.ecube = np.zeros((nq, nt))
# self.ecube = np.ones((nq, nt))*20e-9
for i in range(nq):
self.dcube[i, :] = pl.Q[i].rx.sig[1].V
self.ecube[i, :] = np.real(pl.Q[i].rx.sig[1].E)
else:
idata = loadmat(filename, struct_as_record=False,
squeeze_me=True)['idata']
self.t = idata.data.t + idata.data.effDead
self.q = idata.data.q
self.K = idata.kernel.K
self.z = np.hstack((0., idata.kernel.z))
self.dcube = idata.data.dcube
self.ecube = idata.data.ecube
defaultNoise = kwargs.get("defaultNoise", 100e-9)
        if self.ecube[0][0] == 0:  # no errors stored in file
            self.ecube = np.ones_like(self.dcube) * defaultNoise
            if self.verbose:
                print("no errors in file, assuming", defaultNoise*1e9, "nV")
if idata is not None:
self.ecube /= np.sqrt(idata.data.gateL)
self.checkData(**kwargs)
# load model from matlab file (result of MRSQTInversion)
if filename[-5:].lower() == '.mrsi' and hasattr(idata, 'inv1Dqt'):
if hasattr(idata.inv1Dqt, 'blockMono'):
sol = idata.inv1Dqt.blockMono.solution[0]
self.model = np.hstack((sol.thk, sol.w, sol.T2))
self.nlay = len(sol.w)
if self.verbose:
print("loaded file: " + filename)
def checkData(self, **kwargs):
"""Check data and retrieve data and error vector."""
mint = kwargs.pop('mint', 0)
maxt = kwargs.pop('maxt', 1000)
good = (self.t <= maxt) & (self.t >= mint)
self.t = self.t[good]
self.dcube = self.dcube[:, good]
self.ecube = self.ecube[:, good]
ndcubet = len(self.dcube[0])
if len(self.dcube) == len(self.q) and ndcubet == len(self.t):
if kwargs.pop('usereal', False):
self.data = np.real(self.dcube.flat)
else:
self.data = np.abs(self.dcube.flat)
else:
print('Dimensions do not match!')
        necubet = len(self.ecube[0])
if len(self.ecube) == len(self.q) and necubet == len(self.t):
self.error = self.ecube.ravel()
if min(self.error) <= 0.:
print("Warning: negative errors present! Taking absolute value")
self.error = np.absolute(self.error)
defaultNoise = kwargs.pop("defaultNoise", 100e-9)
if min(self.error) == 0.:
if self.verbose:
print("Warning: zero error, assuming", defaultNoise)
self.error[self.error == 0.] = defaultNoise
# clip data if desired (using vmin and vmax keywords)
if "vmax" in kwargs:
vmax = kwargs['vmax']
self.error[self.data > vmax] = max(self.error)*3
self.data[self.data > vmax] = vmax
if "vmin" in kwargs:
vmin = kwargs['vmin']
self.error[self.data < vmin] = max(self.error)*3
self.data[self.data < vmin] = vmin
if self.verbose:
print(self)
def loadMRSD(self, filename, usereal=False, mint=0., maxt=2.0):
"""Load mrsd (MRS data) file: not really used as in MRSD."""
from scipy.io import loadmat # loading Matlab mat files
print("Currently not using mint/maxt & usereal:", mint, maxt, usereal)
pl = loadmat(filename, struct_as_record=False,
squeeze_me=True)['proclog']
self.q = np.array([q.q for q in pl.Q])
self.t = pl.Q[0].rx.sig[0].t + pl.Q[0].timing.tau_dead1
nq = len(pl.Q)
nt = len(self.t)
self.dcube = np.zeros((nq, nt))
for i in range(nq):
self.dcube[i, :] = np.abs(pl.Q[i].rx.sig[1].V)
self.ecube = np.ones((nq, nt))*20e-9
def loadDataCube(self, filename='datacube.dat'):
"""Load data cube from single ascii file (old stuff)"""
A = np.loadtxt(filename).T
self.q = A[1:, 0]
self.t = A[0, 1:]
self.data = A[1:, 1:].ravel()
def loadErrorCube(self, filename='errorcube.dat'):
"""Load error cube from a single ascii file (old stuff)."""
A = np.loadtxt(filename).T
if len(A) == len(self.q) and len(A[0]) == len(self.t):
self.error = A.ravel()
elif len(A) == len(self.q) + 1 and len(A[0]) == len(self.t) + 1:
self.error = A[1:, 1:].ravel()
else:
self.error = np.ones(len(self.q) * len(self.t)) * 100e-9
def loadKernel(self, name=''):
"""Load kernel matrix from mrsk or two bmat files."""
from scipy.io import loadmat # loading Matlab mat files
if name[-5:].lower() == '.mrsk':
kdata = loadmat(name, struct_as_record=False,
squeeze_me=True)['kdata']
self.K = kdata.K
self.z = np.hstack((0., kdata.model.z))
else: # try load real/imag parts (backward compat.)
KR = pg.Matrix(name + 'KR.bmat')
KI = pg.Matrix(name + 'KI.bmat')
self.K = np.zeros((KR.rows(), KR.cols()), dtype='complex')
for i in range(KR.rows()):
self.K[i] = np.array(KR[i]) + np.array(KI[i]) * 1j
def loadZVector(self, filename='zkernel.vec'):
"""Load the kernel vertical discretisation (z) vector."""
self.z = pg.Vector(filename)
def loadDir(self, dirname):
"""Load several standard files from dir (old Borkum stage)."""
if not dirname[-1] == '/':
dirname += '/'
self.loadDataCube(dirname + 'datacube.dat')
self.loadErrorCube(dirname + 'errorcube.dat')
self.loadKernel(dirname)
self.loadZVector(dirname + 'zkernel.vec')
self.dirname = dirname # to save results etc.
def showCube(self, ax=None, vec=None, islog=None, clim=None, clab=None):
"""Plot any data (or response, error, misfit) cube nicely."""
        if vec is None:
            vec = np.asarray(self.data).ravel()
mul = 1.0
if max(vec) < 1e-3: # Volts
mul = 1e9
if ax is None:
_, ax = plt.subplots(1, 1)
        if islog is None:
            islog = (min(vec) > 0.)
negative = (min(vec) < 0)
if islog:
vec = np.log10(np.abs(vec))
if clim is None:
if negative:
cmax = max(max(vec), -min(vec))
clim = (-cmax, cmax)
else:
cmax = max(vec)
if islog:
cmin = cmax - 1.5
else:
cmin = 0.
clim = (cmin, cmax)
xt = range(0, len(self.t), 10)
xtl = [str(ti) for ti in np.round(self.t[xt] * 1000.)]
qt = range(0, len(self.q), 5)
qtl = [str(qi) for qi in np.round(np.asarray(self.q)[qt] * 10.) / 10.]
mat = np.array(vec).reshape((len(self.q), len(self.t)))*mul
im = ax.imshow(mat, interpolation='nearest', aspect='auto')
im.set_clim(clim)
ax.set_xticks(xt)
ax.set_xticklabels(xtl)
ax.set_yticks(qt)
ax.set_yticklabels(qtl)
ax.set_xlabel('$t$ [ms]')
ax.set_ylabel('$q$ [As]')
cb = plt.colorbar(im, ax=ax, orientation='horizontal')
if clab is not None:
cb.ax.set_title(clab)
return clim
def showDataAndError(self, figsize=(10, 8), show=False):
"""Show data cube along with error cube."""
fig, ax = plt.subplots(1, 2, figsize=figsize)
self.showCube(ax[0], self.data * 1e9, islog=False)
self.showCube(ax[1], self.error * 1e9, islog=True)
if show:
plt.show()
self.figs['data+error'] = fig
return fig, ax
def showKernel(self, ax=None):
"""Show the kernel as matrix (Q over z)."""
if ax is None:
fig, ax = plt.subplots()
self.figs['kernel'] = fig
# ax.imshow(self.K.T, interpolation='nearest', aspect='auto')
ax.matshow(self.K.T, aspect='auto')
yt = ax.get_yticks()
maxzi = self.K.shape[1]
yt = yt[(yt >= 0) & (yt < maxzi)]
if yt[-1] < maxzi-2:
yt = np.hstack((yt, maxzi))
        zl = self.z[[int(yti) for yti in yt]]
        ytl = [str(zi) for zi in np.round(zl, 1)]
ax.set_yticks(yt)
ax.set_yticklabels(ytl)
xt = ax.get_xticks()
maxqi = self.K.shape[0]
xt = xt[(xt >= 0) & (xt < maxqi)]
xtl = [np.round(self.q[iq], 2) for iq in xt]
ax.set_xticks(xt)
ax.set_xticklabels(xtl)
return fig, ax
@staticmethod
def createFOP(nlay, K, z, t): # , verbose=True, **kwargs):
"""Create forward operator instance."""
fop = MRS1dBlockQTModelling(nlay, K, z, t)
return fop
def setBoundaries(self):
"""Set parameter boundaries for inversion."""
for i in range(3):
self.fop.region(i).setParameters(self.startval[i],
self.lowerBound[i],
self.upperBound[i], "log")
def createInv(self, nlay=3, lam=100., verbose=True, **kwargs):
"""Create inversion instance (and fop if necessary with nlay)."""
self.fop = MRS.createFOP(nlay, self.K, self.z, self.t)
self.setBoundaries()
self.INV = pg.Inversion(self.data, self.fop, verbose)
self.INV.setLambda(lam)
self.INV.setMarquardtScheme(kwargs.pop('lambdaFactor', 0.8))
self.INV.stopAtChi1(False) # now in MarquardtScheme
self.INV.setDeltaPhiAbortPercent(0.5)
self.INV.setAbsoluteError(np.abs(self.error))
self.INV.setRobustData(kwargs.pop('robust', False))
return self.INV
@staticmethod
def simulate(model, K, z, t):
"""Do synthetic modelling."""
nlay = int(len(model) / 3) + 1
fop = MRS.createFOP(nlay, K, z, t)
return fop.response(model)
def invert(self, nlay=3, lam=100., startvec=None,
verbose=True, uncertainty=False, **kwargs):
"""Easiest variant doing all (create fop and inv) in one call."""
if self.INV is None or self.nlay != nlay:
self.INV = self.createInv(nlay, lam, verbose, **kwargs)
self.INV.setVerbose(verbose)
if startvec is not None:
self.INV.setModel(startvec)
if verbose:
print("Doing inversion...")
self.model = np.array(self.INV.run())
return self.model
def run(self, verbose=True, uncertainty=False, **kwargs):
"""Easiest variant doing all (create fop and inv) in one call."""
self.invert(verbose=verbose, **kwargs)
if uncertainty:
if verbose:
print("Computing uncertainty...")
self.modelL, self.modelU = iterateBounds(
self.INV, dchi2=self.INV.chi2() / 2, change=1.2)
if verbose:
print("ready")
def splitModel(self, model=None):
"""Split model vector into d, theta and T2*."""
if model is None:
model = self.model
        nl = int(len(model) / 3) + 1  # use the passed model, not self.model
thk = model[:nl - 1]
wc = model[nl - 1:2 * nl - 1]
t2 = model[2 * nl - 1:3 * nl - 1]
return thk, wc, t2
def result(self):
"""Return block model results (thk, wc and T2 vectors)."""
return self.splitModel()
def showResult(self, figsize=(10, 8), save='', fig=None, ax=None):
"""Show theta(z) and T2*(z) (+uncertainties if there)."""
if ax is None:
fig, ax = plt.subplots(1, 2, sharey=True, figsize=figsize)
self.figs['result'] = fig
thk, wc, t2 = self.splitModel()
showWC(ax[0], thk, wc)
showT2(ax[1], thk, t2)
if self.modelL is not None and self.modelU is not None:
thkL, wcL, t2L = self.splitModel(self.modelL)
thkU, wcU, t2U = self.splitModel(self.modelU)
showErrorBars(ax[0], thk, wc, thkL, thkU, wcL, wcU)
showErrorBars(ax[1], thk, t2*1e3, thkL, thkU, t2L*1e3, t2U*1e3)
if fig is not None:
if save:
fig.savefig(save, bbox_inches='tight')
return fig, ax
def showResultAndFit(self, figsize=(12, 10), save='', plotmisfit=False,
maxdep=0, clim=None):
"""Show ec(z), T2*(z), data and model response."""
fig, ax = plt.subplots(2, 2 + plotmisfit, figsize=figsize)
self.figs['result+fit'] = fig
thk, wc, t2 = self.splitModel()
showWC(ax[0, 0], thk, wc, maxdep=maxdep)
showT2(ax[0, 1], thk, t2, maxdep=maxdep)
ax[0, 0].set_title(r'MRS water content $\theta$')
ax[0, 1].set_title(r'MRS decay time $T_2^*$')
ax[0, 0].set_ylabel('$z$ [m]')
ax[0, 1].set_ylabel('$z$ [m]')
if self.modelL is not None and self.modelU is not None:
thkL, wcL, t2L = self.splitModel(self.modelL)
thkU, wcU, t2U = self.splitModel(self.modelU)
showErrorBars(ax[0, 0], thk, wc, thkL, thkU, wcL, wcU)
showErrorBars(ax[0, 1], thk, t2*1e3, thkL, thkU, t2L*1e3, t2U*1e3)
if maxdep > 0.:
ax[0, 0].set_ylim([maxdep, 0.])
ax[0, 1].set_ylim([maxdep, 0.])
clim = self.showCube(ax[1, 0], self.data * 1e9, islog=False, clim=clim)
ax[1, 0].set_title('measured data [nV]') # log10
self.showCube(
ax[1, 1], self.INV.response() * 1e9, clim=clim, islog=False)
ax[1, 1].set_title('simulated data [nV]') # log10
if plotmisfit:
self.showCube(ax[0, 2], (self.data - self.INV.response()) * 1e9,
islog=False)
ax[0, 2].set_title('misfit [nV]') # log10
ewmisfit = (self.data - self.INV.response()) / self.error
self.showCube(ax[1, 2], ewmisfit, islog=False)
ax[1, 2].set_title('error-weighted misfit')
if save:
if not isinstance(save, str):
save = self.basename
fig.savefig(save, bbox_inches='tight')
return fig, ax
def saveResult(self, filename):
"""Save inversion result to column text file for later use."""
thk, wc, t2 = self.splitModel()
z = np.hstack((0., np.cumsum(thk)))
ALL = np.column_stack((z, wc, t2))
if self.modelL is not None and self.modelU is not None:
thkL, wcL, t2L = self.splitModel(self.modelL)
thkU, wcU, t2U = self.splitModel(self.modelU)
zL = z.copy()
zL[1:] += (thkL - thk)
zU = z.copy()
zU[1:] += (thkU - thk)
ALL = np.column_stack((z, wc, t2, zL, zU, wcL, wcU, t2L, t2U))
np.savetxt(filename, ALL, fmt='%.3f')
def loadResult(self, filename):
"""Load inversion result from column file."""
A = np.loadtxt(filename)
z, wc, t2 = A[:, 0], A[:, 1], A[:, 2]
thk = np.diff(z)
self.nlay = len(wc)
self.model = np.hstack((thk, wc, t2))
if len(A[0]) > 8:
zL, wcL, t2L = A[:, 3], A[:, 5], A[:, 7]
zU, wcU, t2U = A[:, 4], A[:, 6], A[:, 8]
thkL = thk + zL[1:] - z[1:]
thkU = thk + zU[1:] - z[1:]
t2L[t2L < 0.01] = 0.01
self.modelL = np.hstack((thkL, wcL, t2L))
t2U[t2U > 1.0] = 1.0
self.modelU = np.hstack((thkU, wcU, t2U))
def calcMCM(self):
"""Compute linear model covariance matrix."""
J = gmat2numpy(self.fop.jacobian()) # (linear) jacobian matrix
D = np.diag(1 / self.error)
DJ = D.dot(J)
JTJ = DJ.T.dot(DJ)
MCM = np.linalg.inv(JTJ) # model covariance matrix
var = np.sqrt(np.diag(MCM)) # standard deviations from main diagonal
        di = (1. / var)  # inverse standard deviations
# scaled model covariance (=correlation) matrix
MCMs = di.reshape(len(di), 1) * MCM * di
return var, MCMs
def calcMCMbounds(self):
"""Compute model bounds using covariance matrix diagonals."""
mcm = self.calcMCM()[0]
self.modelL = self.model - mcm
self.modelU = self.model + mcm
def genMod(self, individual):
"""Generate (GA) model from random vector (0-1) using model bounds."""
model = np.asarray(individual) * (self.lUB - self.lLB) + self.lLB
if self.logpar:
return pg.exp(model)
else:
return model
def runEA(self, nlay=None, eatype='GA', pop_size=100, num_gen=100,
runs=1, mp_num_cpus=8, **kwargs):
"""Run evolutionary algorithm using the inspyred library
Parameters
----------
nlay : int [taken from classic fop if not given]
number of layers
pop_size : int [100]
population size
num_gen : int [100]
number of generations
        runs : int [1]
            number of independent runs (each with a fresh random population)
eatype : string ['GA']
algorithm, choose among:
'GA' - Genetic Algorithm [default]
'SA' - Simulated Annealing
'DEA' - Discrete Evolutionary Algorithm
'PSO' - Particle Swarm Optimization
'ACS' - Ant Colony Strategy
'ES' - Evolutionary Strategy
"""
import inspyred
import random
def mygenerate(random, args):
"""generate a random vector of model size"""
return [random.random() for i in range(nlay * 3 - 1)]
def my_observer(population, num_generations, num_evaluations, args):
""" print fitness over generation number """
best = min(population)
print('{0:6} -- {1}'.format(num_generations, best.fitness))
@inspyred.ec.evaluators.evaluator
def datafit(individual, args):
""" error-weighted data misfit as basis for evaluating fitness """
misfit = (self.data -
self.fop.response(self.genMod(individual))) / self.error
return np.mean(misfit**2)
# prepare forward operator
        if self.fop is None or (nlay is not None and nlay != self.nlay):
            self.fop = MRS.createFOP(nlay, self.K, self.z, self.t)
            self.nlay = nlay
lowerBound = pg.cat(pg.cat(pg.Vector(self.nlay - 1,
self.lowerBound[0]),
pg.Vector(self.nlay, self.lowerBound[1])),
pg.Vector(self.nlay, self.lowerBound[2]))
upperBound = pg.cat(pg.cat(pg.Vector(self.nlay - 1,
self.upperBound[0]),
pg.Vector(self.nlay, self.upperBound[1])),
pg.Vector(self.nlay, self.upperBound[2]))
if self.logpar:
self.lLB, self.lUB = pg.log(lowerBound), pg.log(
upperBound) # ready mapping functions
else:
self.lLB, self.lUB = lowerBound, upperBound
# self.f = MRS1dBlockQTModelling(nlay, self.K, self.z, self.t)
# setup random generator
rand = random.Random()
# choose among different evolution algorithms
if eatype == 'GA':
ea = inspyred.ec.GA(rand)
ea.variator = [
inspyred.ec.variators.blend_crossover,
inspyred.ec.variators.gaussian_mutation]
ea.selector = inspyred.ec.selectors.tournament_selection
ea.replacer = inspyred.ec.replacers.generational_replacement
if eatype == 'SA':
ea = inspyred.ec.SA(rand)
if eatype == 'DEA':
ea = inspyred.ec.DEA(rand)
if eatype == 'PSO':
ea = inspyred.swarm.PSO(rand)
if eatype == 'ACS':
ea = inspyred.swarm.ACS(rand, [])
if eatype == 'ES':
ea = inspyred.ec.ES(rand)
ea.terminator = [inspyred.ec.terminators.evaluation_termination,
inspyred.ec.terminators.diversity_termination]
else:
ea.terminator = inspyred.ec.terminators.evaluation_termination
# ea.observer = my_observer
ea.observer = [
inspyred.ec.observers.stats_observer,
inspyred.ec.observers.file_observer]
        tstr = time.strftime('%y%m%d-%H%M%S')
self.EAstatfile = self.basename + '-' + eatype + 'stat' + tstr + '.csv'
with open(self.EAstatfile, 'w') as fid:
self.pop = []
for i in range(runs):
rand.seed(int(time.time()))
self.pop.extend(ea.evolve(
evaluator=datafit, generator=mygenerate, maximize=False,
pop_size=pop_size, max_evaluations=pop_size*num_gen,
bounder=inspyred.ec.Bounder(0., 1.), num_elites=1,
statistics_file=fid, **kwargs))
# self.pop.extend(ea.evolve(
# generator=mygenerate, maximize=False,
# evaluator=inspyred.ec.evaluators.parallel_evaluation_mp,
# mp_evaluator=datafit, mp_num_cpus=mp_num_cpus,
# pop_size=pop_size, max_evaluations=pop_size*num_gen,
# bounder=inspyred.ec.Bounder(0., 1.), num_elites=1,
# statistics_file=fid, **kwargs))
self.pop.sort(reverse=True)
self.fits = [ind.fitness for ind in self.pop]
print('minimum fitness of ' + str(min(self.fits)))
def plotPopulation(self, maxfitness=None, fitratio=1.05, savefile=True):
"""Plot fittest individuals (fitness<maxfitness) as 1d models
Parameters
----------
maxfitness : float
maximum fitness value (absolute) OR
fitratio : float [1.05]
maximum ratio to minimum fitness
"""
if maxfitness is None:
maxfitness = min(self.fits) * fitratio
fig, ax = plt.subplots(1, 2, sharey=True)
self.figs['population'] = fig
maxz = 0
for ind in self.pop:
if ind.fitness < maxfitness:
model = np.asarray(self.genMod(ind.candidate))
thk = model[:self.nlay - 1]
wc = model[self.nlay - 1:self.nlay * 2 - 1]
t2 = model[self.nlay * 2 - 1:]
drawModel1D(ax[0], thk, wc * 100, color='grey')
drawModel1D(ax[1], thk, t2 * 1000, color='grey')
maxz = max(maxz, sum(thk))
model = np.asarray(self.genMod(self.pop[0].candidate))
thk = model[:self.nlay - 1]
wc = model[self.nlay - 1:self.nlay * 2 - 1]
t2 = model[self.nlay * 2 - 1:]
drawModel1D(ax[0], thk, wc * 100, color='black', linewidth=3)
drawModel1D(ax[1], thk, t2 * 1000, color='black', linewidth=3,
plotfunction='semilogx')
ax[0].set_xlim(self.lowerBound[1] * 100, self.upperBound[1] * 100)
ax[0].set_ylim((maxz * 1.2, 0))
ax[1].set_xlim(self.lowerBound[2] * 1000, self.upperBound[2] * 1000)
ax[1].set_ylim((maxz * 1.2, 0))
xt = [10, 20, 50, 100, 200, 500, 1000]
ax[1].set_xticks(xt)
ax[1].set_xticklabels([str(xti) for xti in xt])
if savefile:
fig.savefig(self.EAstatfile.replace('.csv', '.pdf'),
bbox_inches='tight')
plt.show()
def plotEAstatistics(self, fname=None):
"""Plot EA statistics (best, worst, ...) over time."""
if fname is None:
fname = self.EAstatfile
gen, psize, worst, best, med, avg, std = np.genfromtxt(
fname, unpack=True, usecols=range(7), delimiter=',')
stderr = std / np.sqrt(psize)
data = [avg, med, best, worst]
colors = ['black', 'blue', 'green', 'red']
labels = ['average', 'median', 'best', 'worst']
fig, ax = plt.subplots()
self.figs['statistics'] = fig
ax.errorbar(gen, avg, stderr, color=colors[0], label=labels[0])
ax.set_yscale('log')
for d, col, lab in zip(data[1:], colors[1:], labels[1:]):
ax.plot(gen, d, color=col, label=lab)
ax.fill_between(gen, data[2], data[3], color='#e6f2e6')
ax.grid(True)
ymin = min([min(d) for d in data])
ymax = max([max(d) for d in data])
yrange = ymax - ymin
ax.set_ylim((ymin - 0.1*yrange, ymax + 0.1*yrange))
ax.legend(loc='upper left') # , prop=prop)
ax.set_xlabel('Generation')
ax.set_ylabel('Fitness')
def saveFigs(self, basename=None, extension="pdf"):
"""Save all figures to (pdf) files."""
if basename is None:
basename = self.basename
for key in self.figs:
self.figs[key].savefig(basename+"-"+key+"."+extension,
bbox_inches='tight')
if __name__ == "__main__":
datafile = 'example.mrsi'
numlayers = 4
mrs = MRS(datafile)
    mrs.run(nlay=numlayers, uncertainty=True)
outThk, outWC, outT2 = mrs.result()
mrs.saveResult(mrs.basename+'.result')
mrs.showResultAndFit(save=mrs.basename+'.pdf')
plt.show()
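# A minimal standalone sketch (not part of pygimli; names are mine) of the
# block-model vector layout that MRS.splitModel assumes: for nlay layers the
# vector stacks nlay-1 thicknesses, nlay water contents and nlay T2* times,
# i.e. 3*nlay - 1 values in total.
def split_block_model(model):
    """Split a [thk | wc | T2*] vector; illustrative re-implementation only."""
    import numpy as np
    model = np.asarray(model)
    nl = int(len(model) / 3) + 1  # len(model) == 3*nl - 1
    return model[:nl - 1], model[nl - 1:2 * nl - 1], model[2 * nl - 1:]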
| apache-2.0 |
bmazin/ARCONS-pipeline | legacy/arcons_control/lib/pulses_v1.py | 1 | 21557 |
# encoding: utf-8
"""
pulses.py
Created by Ben Mazin on 2011-05-04.
Copyright (c) 2011 . All rights reserved.
"""
import numpy as np
import time
import os
from tables import *
import matplotlib
import scipy as sp
import scipy.signal
from matplotlib.pyplot import plot, figure, show, rc, grid
import matplotlib.pyplot as plt
#import matplotlib.image as mpimg
import mpfit
#import numexpr
#from iqsweep import *
class Photon(IsDescription):
"""The pytables derived class that holds pulse packet data on the disk.
Put in a marker pulse with at = int(time.time()) and phase = -32767 every second.
"""
at = UInt32Col() # pulse arrival time in microseconds since last sync pulse
# phase = Int16Col() # optimally filtered phase pulse height
class RawPulse(IsDescription):
"""The pytables derived class that hold raw pulse data on the disk.
"""
starttime = Float64Col() # start time of pulse data
samprate = Float32Col() # sample rate of the data in samples/sec
npoints = Int32Col() # number of data points in the pulse
f0 = Float32Col() # resonant frequency data was taken at
atten1 = Float32Col() # attenuator 1 setting data was taken at
atten2 = Float32Col() # attenuator 2 setting data was taken at
Tstart = Float32Col() # temp data was taken at
    I = Float32Col(2000)  # I pulse data, up to 2000 points
Q = Float32Col(2000)
class PulseAnalysis(IsDescription): # contains final template info
flag = Int16Col() # flag for quality of template. If this could be a bad template set > 0
count = Float32Col() # number of pulses going into this template
pstart = Int16Col() # index of peak of template
phasetemplate = Float64Col(2000)
phasenoise = Float64Col(800)
phasenoiseidx = Float64Col(800)
#optfilt = Complex128(800)
# fit quantities
trise = Float32Col() # fit value of rise time
tfall = Float32Col() # fit value of fall time
# optimal filter parameters
coeff = Float32Col(100) # coefficients for the near-optimal filter
nparam = Int16Col() # number of parameters in the filter
class BeamMap(IsDescription):
roach = UInt16Col() # ROACH board number (0-15) for now!
resnum = UInt16Col() # resonator number on roach board (corresponds to res # in optimal pulse packets)
f0 = Float32Col() # resonant frequency of center of sweep (can be used to get group name)
pixel = UInt32Col() # actual pixel number - bottom left of array is 0, increasing up
xpos = Float32Col() # physical X location in mm
ypos = Float32Col() # physical Y location in mm
scale = Float32Col(3) # polynomial to convert from degrees to eV
class ObsHeader(IsDescription):
target = StringCol(80)
datadir = StringCol(80) # directory where observation data is stored
calfile = StringCol(80) # path and filename of calibration file
beammapfile = StringCol(80) # path and filename of beam map file
version = StringCol(80)
instrument = StringCol(80)
telescope = StringCol(80)
focus = StringCol(80)
parallactic = Float64Col()
ra = Float64Col()
dec = Float64Col()
alt = Float64Col()
az = Float64Col()
airmass = Float64Col()
equinox = Float64Col()
epoch = Float64Col()
obslat = Float64Col()
obslong = Float64Col()
obsalt = Float64Col()
timezone = Int32Col()
localtime = StringCol(80)
ut = Float64Col()
lst = StringCol(80)
jd = Float64Col()
platescl = Float64Col()
exptime = Int32Col()
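# Hedged helper sketch (name and function are mine, not part of the original
# pipeline): FakeObservation below builds photon words with
# np.bitwise_or(np.left_shift(energy, 12), arrival), i.e. energy in the high
# bits and arrival time in the low 12 bits, so unpacking reverses that.
# (Note the fake arrival times are drawn up to 1e6 and can exceed 12 bits.)
def unpack_photon_word(word):
    """Return (energy, arrival) assuming a 12-bit arrival field."""
    word = int(word)
    return word >> 12, word & 0xFFF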
# Make a fake observation file
def FakeObservation(obsname, start, exptime):
# simulation parameters
nroach = 4 # number of roach boards
nres = 256 # number of pixels on each roach
xpix = 32 # pixels in x dir
ypix = 32 # pixels in y dir
R = 15 # mean energy resolution
good = 0.85 # fraction of resonators that are good
#exptime = 10 # duration of fake exposure in seconds
fullobspath = obsname.split("/")
obsfile = fullobspath.pop()
obspath = "/".join(fullobspath)+"/"
h5file = openFile(obsname, mode = "r")
carray = h5file.root.beammap.beamimage.read()
h5file.close()
    filt1 = Filters(complevel=1, complib='zlib', fletcher32=False)  # without minimal compression the file sizes are ridiculous...
h5file = openFile(obsname, mode = "a")
''' beam map inserted from beam map file during header gen
# make beamap table
bgroup = h5file.createGroup('/','beammap','Beam Map of Array')
filt = Filters(complevel=0, complib='zlib', fletcher32=False)
filt1 = Filters(complevel=1, complib='blosc', fletcher32=False) # without minimal compression the files sizes are ridiculous...
btable = h5file.createTable(bgroup, 'beammap', BeamMap, "Table of anaylzed beam map data",filters=filt1)
w = btable.row
# make beammap array - this is a 2d array (top left is 0,0. first index is column, second is row) containing a string with the name of the group holding the photon data
ca = h5file.createCArray(bgroup, 'beamimage', StringAtom(itemsize=40), (32,32), filters=filt1)
for i in xrange(nroach):
for j in xrange(nres):
w['roach'] = i
w['resnum'] = ((41*j)%256)
w['f0'] = 3.5 + (i%2)*.512 + 0.002*j
w['pixel'] = ((41*j)%256) + 256*i
w['xpos'] = np.floor(j/16)*0.1
w['ypos'] = (j%16)*0.1
if i == 1 or i == 3:
w['ypos'] = (j%16)*0.1 + 1.6
if i == 2 or i == 3:
w['xpos'] = np.floor(j/16)*0.1 + 1.6
w.append()
colidx = int(np.floor(j/16))
rowidx = 31 - j%16
if i == 1 or i == 3:
rowidx -= 16
if i >= 2:
colidx += 16
ca[rowidx,colidx] = 'r'+str(i)+'/p'+str( ((41*j)%256) )
h5file.flush()
carray = ca.read()
'''
# load up the 32x32 image we want to simulate
sourceim = plt.imread('/Users/ourhero/Documents/python/MazinLab/Arcons/ucsblogo.png')
sourceim = sourceim[:,:,0]
# make directory structure for pulse data
dptr = []
for i in xrange(nroach):
group = h5file.createGroup('/','r'+str(i),'Roach ' + str(i))
for j in xrange(nres):
subgroup = h5file.createGroup(group,'p'+str(j))
dptr.append(subgroup)
'''
# now go in an update the beamimages array to contain the name of the actual data array
for i in xrange(32):
for j in xrange(32):
name = h5file.getNode('/',name=ca[i,j])
for leaf in name._f_walkNodes('Leaf'):
newname = ca[i,j]+'/'+leaf.name
ca[i,j] = newname
'''
# create fake photon data
#start = np.floor(time.time())
# make VLArray tables for the photon data
vlarr=[]
for i in dptr:
tmpvlarr = h5file.createVLArray(i, 't'+str(int(start)), UInt32Atom(shape=()),expectedsizeinMB=0.1,filters=filt1)
vlarr.append(tmpvlarr)
idx = np.arange(2000)
for i in xrange(exptime):
print i
t1 = time.time()
for j in vlarr:
# sky photons
nphot = 1000 + int(np.random.randn()*np.sqrt(1000))
#arrival = np.uint32(idx[:nphot]*700.0 + np.random.randn(nphot)*100.0)
arrival = np.uint64(np.random.random(nphot)*1e6)
energy = np.uint64(np.round((20.0 + np.random.random(nphot)*80.0)*20.0))
photon = np.bitwise_or( np.left_shift(energy,12), arrival )
# source photons
# figure out where this group is on the array
pgroup = j._g_getparent().__str__()
#print "printing pgroup", pgroup
            ngroup = (pgroup.split(' '))[0]+'/t'+str(int(start))  # match VLArray names, created with int(start)
#print "printing ngroup", ngroup
cidx = np.where(carray == ngroup[1:])
#print "printing ngroup 1:" ,ngroup[1:]
#print "printing cidx", cidx
#print sourceim[cidx]
sphot = 100.0 * (sourceim[cidx])[0]
sphot += np.sqrt(sphot)*np.random.randn()
sphot = np.uint32(sphot)
#print sphot
if sphot >= 1.0:
arrival = np.uint64(np.random.random(sphot)*1e6)
energy = np.uint64( (60.0 + np.random.randn(sphot)*3.0)*20.0 )
source = np.bitwise_or( np.left_shift(energy,12), arrival )
plist = np.concatenate((photon,source))
else:
plist = photon
#splist = np.sort(plist)
j.append(plist)
t2 = time.time()
dt = t2-t1
if t2-t1 < 1:
#delay for 1 second between creating seconds of false data
time.sleep(1-dt)
'''
idx = np.arange(2000)
for i in xrange(exptime):
print i
t1 = time.time()
for j in vlarr:
# sky photons
nphot = 1000 + int(np.random.randn()*np.sqrt(1000))
#arrival = np.uint32(idx[:nphot]*700.0 + np.random.randn(nphot)*100.0)
arrival = np.uint32(np.random.random(nphot)*1e6)
energy = np.uint32(np.round((20.0 + np.random.random(nphot)*80.0)*20.0))
photon = np.bitwise_or( np.left_shift(arrival,12), energy )
# source photons
# figure out where this group is on the array
pgroup = j._g_getparent().__str__()
ngroup = (pgroup.split(' '))[0]
cidx = np.where(carray == ngroup[1:])
#print sourceim[cidx]
sphot = 100.0 * (sourceim[cidx])[0]
sphot += np.sqrt(sphot)*np.random.randn()
sphot = np.uint32(sphot)
#print sphot
if sphot >= 1.0:
arrival = np.uint32(np.random.random(sphot)*1e6)
energy = np.uint32( (60.0 + np.random.randn(sphot)*3.0)*20.0 )
source = np.bitwise_or( np.left_shift(arrival,12), energy )
plist = np.concatenate((photon,source))
else:
plist = photon
#splist = np.sort(plist)
j.append(plist)
'''
h5file.close()
# make a preview image from obsfile
def QuickLook(obsfile,tstart,tend):
h5file = openFile(obsfile, mode = "r")
image = np.zeros((32,32))
#mask = np.repeat(np.uint32(4095),2000)
# load beamimage
bmap = h5file.root.beammap.beamimage
for i in xrange(32):
for j in xrange(32):
photons = h5file.root._f_getChild(bmap[i][j])
for k in range(tstart,tend):
#energy = np.bitwise_and( mask[:len(photons[0])],photons[0])
image[i][j] += len(photons[k])
# subtract off sky
skysub = np.float32(image - np.median(image))
h5file.close()
# display the image
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.imshow(skysub,cmap='gray', interpolation='nearest')
cbar = fig.colorbar(cax)
plt.show()
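# The sky subtraction in QuickLook above is just a median offset; as a
# standalone sketch (illustrative only, not part of the pipeline):
def sky_subtract(image):
    """Subtract the median 'sky' level from a 2-D count image."""
    import numpy as np
    image = np.asarray(image, dtype=float)
    return image - np.median(image)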

# Make a pulse template from the pulses saved in filename
def MakeTemplate(pulsedat):
    # open the pulse file
    h5file = openFile(pulsedat, mode = "r")
    r1 = h5file.root.r1

    # create the template file
    tfile = openFile(pulsedat.replace('.h5','-template.h5'), mode = "w", title = "Optimal filter data file created " + time.asctime() )
    tempr1 = tfile.createGroup('/','r1','ROACH 1')

    # loop through pulse data
    for group in r1._f_walkGroups():
        if group == r1:  # walkgroups returns itself as first entry, so skip it - there is probably a more elegant way!
            continue
        print group

        # go through all the raw pulses in table and generate the template
        tP = np.zeros(2000,dtype='float64')
        tA = np.zeros(2000,dtype='float64')
        tPf = np.zeros(2000,dtype='float64')
        tAf = np.zeros(2000,dtype='float64')
        noise = np.zeros(800,dtype='float64')

        # read the table into memory (too slow otherwise!)
        dat = group.iqpulses.read()
        N = len(dat)
        count = 0.0
        peaklist = []
        idx = np.arange(2000)*2.0
        fitidx = np.concatenate((idx[:900],idx[1800:]))

        # center of loop
        xc = 0.0
        yc = 0.0

        # determine median prepulse levels for first 100 pulses
        I1m = np.median(dat['I'][:100,:900])
        Q1m = np.median(dat['Q'][:100,:900])

        # make a preliminary template with 1000 pulses, then a better one with all of them
        if N > 1000:
            N = 1000

        # first pass
        for j in xrange(N):
            I = dat['I'][j]
            Q = dat['Q'][j]

            # reference all pulses to first 100 pulses (1/f removal)
            I += (I1m - np.median(I[1:900]))
            Q += (Q1m - np.median(Q[1:900]))

            # transform to phase
            P1 = np.arctan2( Q-yc, I-xc )
            #P1 = numexpr.evaluate('arctan2( Q-yc, I-xc )')

            # remove phase wraps and convert to degrees
            P2 = np.rad2deg(np.unwrap(P1))

            # subtract baseline
            fit = np.poly1d(np.polyfit(fitidx,np.concatenate((P2[:900],P2[1800:])),1))
            P3 = P2 - fit(idx)

            # skip pulses with bad baseline subtraction
            stdev = np.std(P3[:100])
            if np.abs(np.mean(P3[:100])-np.mean(P3[1900:])) > stdev*2.0 :
                continue

            # eliminate doubles

            # first pass fit all non-noise pulses
            peak = np.max(P3[980:1050])
            peaklist.append(peak)
            if peak < 15.0 or peak > 120.0:
                continue

            # if peak not near the center skip
            ploc = (np.where(P3 == peak))[0]
            if ploc < 980 or ploc > 1020:
                continue

            # align pulse so peak happens at center
            P4 = np.roll(P3,1000-ploc)

            # normalize and add to template
            tP += P4/np.max(P4)
            count += 1

        print 'First Pass =',int(count),'pulses'
        tP /= count
        tA /= count

    # make a second pass through using the initial template as the kernel to determine pulse start time
        peaklist = np.asarray(peaklist)
        pm = np.median(peaklist[np.where(peaklist>15)])
        pdev = np.std(peaklist[np.where(peaklist>15)])
        print pm,'+-',pdev,'degrees'

        N = len(dat)
        count = 0.0
        t1 = time.time()
        for j in xrange(N):
            I = dat['I'][j]
            Q = dat['Q'][j]

            # reference all pulses to first 100 pulses (1/f removal)
            I += (I1m - np.median(I[1:900]))
            Q += (Q1m - np.median(Q[1:900]))

            # transform to phase
            P1 = np.arctan2( Q-yc, I-xc )

            # remove phase wraps and convert to degrees
            P2 = np.rad2deg(np.unwrap(P1))

            # subtract baseline - this step is slow - speed up!
            fit = np.poly1d(np.polyfit(fitidx,np.concatenate((P2[:900],P2[1800:])),1))
            P3 = P2 - fit(idx)

            # skip pulses with bad baseline subtraction
            stdev = np.std(P3[:100])
            if np.abs(np.mean(P3[:100])-np.mean(P3[1900:])) > stdev*2.0 :
                continue

            # eliminate doubles

            # Only fit pulses near the peak
            conv = np.convolve(tP[900:1500],P3)
            #conv = scipy.signal.fftconvolve(tP[950:1462],np.concatenate( (P3,P3[0:48]) ) )
            ploc = int((np.where(conv == np.max(conv)))[0] - 1160.0)
            peak = np.max(P3[1000+ploc])
            #print ploc,peak
            if peak < pm - 4.0*pdev or peak > pm + 4.0*pdev:
                continue

            # if peak not near the center skip
            if ploc < -30 or ploc > 30:
                continue

            # align pulse so peak happens at center
            P4 = np.roll(P3,-ploc)

            # normalize and add to template
            tPf += P4/np.max(P4)
            count += 1

            # compute noise PSD
            noise += np.abs( np.fft.fft(np.deg2rad(P4[50:850])) )**2

        t2 = time.time()
        tPf /= count
        noise /= count
        noiseidx = np.fft.fftfreq(len(noise),d=0.000002)
        print 'Second Pass =',int(count),'pulses'
        print 'Pulses per second = ', N/(t2-t1)

        # calculate optimal filter parameters

        # save the template information in a new file
        # create a group off root for each resonator that contains iq sweep, pulse template, noise, and optimal filter coefficients
        pgroup = tfile.createGroup(tempr1,group._v_name, 'data to set up optimal filtering' )
        group.iqsweep.copy(newparent=pgroup)  # copy in IQ sweep data

        #filt = Filters(complevel=5, complib='zlib', fletcher32=True)
        filt = Filters(complevel=0, complib='zlib', fletcher32=False)
        table = tfile.createTable(pgroup, 'opt', PulseAnalysis, "optimal filter data",filters=filt)
        w = table.row

        if( count < 500 or pm < 10 or pm > 150):
            w['flag'] = 1
        else:
            w['flag'] = 0
        w['count'] = count
        w['pstart'] = (np.where( tPf == np.max(tPf)))[0]
        w['phasetemplate'] = tPf
        w['phasenoise'] = noise
        w['phasenoiseidx'] = noiseidx
        w.append()

        break

    #plot(tPf)
    plot(noiseidx,noise)
    show()

    h5file.close()
    tfile.close()

def FakeTemplateData():  # make fake data and write it to a h5 file
    filename = '/Users/bmazin/Data/Projects/pytest/fakepulse2.h5'
    h5file = openFile(filename, mode='w', title = "Fake Pulse file created " + time.asctime() )
    r1 = h5file.createGroup('/','r1','ROACH 1')

    # open IQ sweep file
    sweepdat = '/Users/bmazin/Data/Projects/pytest/ps_20110505-172336.h5'
    iqfile = openFile(sweepdat, mode = "r")
    swp = iqfile.root.sweeps

    # loop through each IQ sweep in sweepdat and create fake pulses for it
    for group in swp._f_walkGroups():
        if group == swp:  # walkgroups returns itself as first entry, so skip it - there is probably a more elegant way!
            continue
        print group

        pgroup = h5file.createGroup(r1,group._v_name, 'IQ pulse data' )
        pname = 'iqpulses'
        #filt = Filters(complevel=5, complib='zlib', fletcher32=True)
        filt = Filters(complevel=0, complib='zlib', fletcher32=False)
        table = h5file.createTable(pgroup, pname, RawPulse, "IQ Pulse Data",filters=filt)
        p = table.row

        # copy the IQ sweep data into the file
        group._f_copyChildren(pgroup)

        trise = 0.1
        tfall = 65.0
        for j in xrange(1000):
            p['starttime'] = time.time()
            p['samprate'] = 500000.0
            p['npoints'] = 2000
            p['f0'] = 3.65
            p['atten1'] = 30
            p['atten2'] = 0
            p['Tstart'] = 0.1

            I = np.zeros(2000)
            Q = np.zeros(2000)
            idx = np.arange(1000,dtype='float32')
            I[1000:2000] = (1.0 - np.exp( -idx/trise ) ) * np.exp(-idx/tfall) * 0.25
            Q[1000:2000] = (1.0 - np.exp( -idx/trise ) ) * np.exp(-idx/tfall)
            I += 2.0 - np.random.normal(size=2000)*.01  # add noise
            Q += np.random.normal(size=2000)*.01

            # move arrival time
            I = np.roll(I, int((np.random.normal()*10.0)+0.5) )
            Q = np.roll(Q, int((np.random.normal()*10.0)+0.5) )

            p['I'] = np.concatenate( (I,np.zeros(2000-len(I))),axis=0 )
            p['Q'] = np.concatenate( (Q,np.zeros(2000-len(Q))),axis=0 )
            p.append()
        table.flush()

    h5file.close()
    iqfile.close()
#print 'Running!'
#FakeTemplateData()
#pulsedat = '/Users/bmazin/Data/Projects/pytest/fakepulse2.h5'
#MakeTemplate(pulsedat)
#fakedat = '/Users/bmazin/Data/Projects/pytest/fakeobs.h5'
#FakeObservation(fakedat)
#QuickLook(fakedat,0,10)
#print 'Done.'
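# MakeTemplate above converts each I/Q record to unwrapped phase in degrees and
# removes a linear baseline fitted to the pre- and post-pulse samples. A minimal
# standalone sketch of that step (function name and window sizes are illustrative):

```python
import numpy as np

def phase_baseline_subtract(I, Q, pre=900, post=1800):
    """Convert one I/Q record to unwrapped phase (degrees), then remove a
    linear baseline fitted only to the pre- and post-pulse samples."""
    phase = np.rad2deg(np.unwrap(np.arctan2(Q, I)))
    idx = np.arange(len(phase), dtype=float)
    # fit the baseline on samples outside the pulse window
    fit_idx = np.concatenate((idx[:pre], idx[post:]))
    fit_val = np.concatenate((phase[:pre], phase[post:]))
    slope, intercept = np.polyfit(fit_idx, fit_val, 1)
    return phase - (slope * idx + intercept)
```

# After this, the baseline region sits near zero degrees, so pulse peaks can be
# compared across records, as the first/second passes in MakeTemplate do.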
| gpl-2.0 |
KasperPRasmussen/bokeh | examples/plotting/file/clustering.py | 6 | 2136 | """ Example inspired by an example from the scikit-learn project:
http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html
"""
import numpy as np

try:
    from sklearn import cluster, datasets
    from sklearn.preprocessing import StandardScaler
except ImportError:
    raise ImportError('This example requires scikit-learn (conda install sklearn)')

from bokeh.plotting import Figure, show, output_file, vplot, hplot

N = 50000
PLOT_SIZE = 400

# generate datasets.
np.random.seed(0)
noisy_circles = datasets.make_circles(n_samples=N, factor=.5, noise=.04)
noisy_moons = datasets.make_moons(n_samples=N, noise=.05)
centers = [(-2, 3), (2, 3), (-2, -3), (2, -3)]
blobs1 = datasets.make_blobs(centers=centers, n_samples=N, cluster_std=0.4, random_state=8)
blobs2 = datasets.make_blobs(centers=centers, n_samples=N, cluster_std=0.7, random_state=8)

colors = np.array([x for x in ('#00f', '#0f0', '#f00', '#0ff', '#f0f', '#ff0')])
colors = np.hstack([colors] * 20)

# create clustering algorithms
dbscan = cluster.DBSCAN(eps=.2)
birch = cluster.Birch(n_clusters=2)
means = cluster.MiniBatchKMeans(n_clusters=2)
spectral = cluster.SpectralClustering(n_clusters=2, eigen_solver='arpack', affinity="nearest_neighbors")
affinity = cluster.AffinityPropagation(damping=.9, preference=-200)

# change here, to select clustering algorithm (note: spectral is slow)
algorithm = dbscan  # <- SELECT ALG

plots = []
for dataset in (noisy_circles, noisy_moons, blobs1, blobs2):
    X, y = dataset
    X = StandardScaler().fit_transform(X)

    # predict cluster memberships
    algorithm.fit(X)
    if hasattr(algorithm, 'labels_'):
        y_pred = algorithm.labels_.astype(np.int)
    else:
        y_pred = algorithm.predict(X)

    p = Figure(webgl=True, title=algorithm.__class__.__name__,
               plot_width=PLOT_SIZE, plot_height=PLOT_SIZE)
    p.scatter(X[:, 0], X[:, 1], color=colors[y_pred].tolist(), alpha=0.1,)
    plots.append(p)

# generate layout for the plots
layout = vplot(hplot(*plots[:2]), hplot(*plots[2:]))

output_file("clustering.html", title="clustering with sklearn")

show(layout)
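# The loop above reads labels_ when the fitted estimator exposes it and falls
# back to predict() otherwise. The k-means variants it can select are, at heart,
# Lloyd's algorithm; a plain-NumPy sketch (the farthest-point initialisation is
# illustrative, not scikit-learn's):

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    """Lloyd's algorithm with greedy farthest-point initialisation."""
    X = np.asarray(X, dtype=float)
    # start from X[0], then repeatedly add the point farthest from all centers
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign every point to its nearest center
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        # move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers
```

# MiniBatchKMeans updates centers on random mini-batches instead of the full
# assignment pass, which is why it scales to the N = 50000 points used here.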
| bsd-3-clause |
zrhans/pythonanywhere | .virtualenvs/django19/lib/python3.4/site-packages/matplotlib/tests/test_artist.py | 6 | 6247 | from __future__ import (absolute_import, division, print_function,
                        unicode_literals)

import warnings

from matplotlib.externals import six

import io

import numpy as np

import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.lines as mlines
import matplotlib.path as mpath
import matplotlib.transforms as mtrans
import matplotlib.collections as mcollections
from matplotlib.testing.decorators import image_comparison, cleanup
from nose.tools import (assert_true, assert_false)


@cleanup
def test_patch_transform_of_none():
    # tests the behaviour of patches added to an Axes with various transform
    # specifications

    ax = plt.axes()
    ax.set_xlim([1, 3])
    ax.set_ylim([1, 3])

    # Draw an ellipse over data coord (2,2) by specifying device coords.
    xy_data = (2, 2)
    xy_pix = ax.transData.transform_point(xy_data)

    # Not providing a transform of None puts the ellipse in data coordinates.
    e = mpatches.Ellipse(xy_data, width=1, height=1, fc='yellow', alpha=0.5)
    ax.add_patch(e)
    assert e._transform == ax.transData

    # Providing a transform of None puts the ellipse in device coordinates.
    e = mpatches.Ellipse(xy_pix, width=120, height=120, fc='coral',
                         transform=None, alpha=0.5)
    assert e.is_transform_set() is True
    ax.add_patch(e)
    assert isinstance(e._transform, mtrans.IdentityTransform)

    # Providing an IdentityTransform puts the ellipse in device coordinates.
    e = mpatches.Ellipse(xy_pix, width=100, height=100,
                         transform=mtrans.IdentityTransform(), alpha=0.5)
    ax.add_patch(e)
    assert isinstance(e._transform, mtrans.IdentityTransform)

    # Not providing a transform, and then subsequently "get_transform" should
    # not mean that "is_transform_set".
    e = mpatches.Ellipse(xy_pix, width=120, height=120, fc='coral',
                         alpha=0.5)
    intermediate_transform = e.get_transform()
    assert e.is_transform_set() is False
    ax.add_patch(e)
    assert e.get_transform() != intermediate_transform
    assert e.is_transform_set() is True
    assert e._transform == ax.transData


@cleanup
def test_collection_transform_of_none():
    # tests the behaviour of collections added to an Axes with various
    # transform specifications

    ax = plt.axes()
    ax.set_xlim([1, 3])
    ax.set_ylim([1, 3])

    # draw an ellipse over data coord (2,2) by specifying device coords
    xy_data = (2, 2)
    xy_pix = ax.transData.transform_point(xy_data)

    # not providing a transform of None puts the ellipse in data coordinates
    e = mpatches.Ellipse(xy_data, width=1, height=1)
    c = mcollections.PatchCollection([e], facecolor='yellow', alpha=0.5)
    ax.add_collection(c)
    # the collection should be in data coordinates
    assert c.get_offset_transform() + c.get_transform() == ax.transData

    # providing a transform of None puts the ellipse in device coordinates
    e = mpatches.Ellipse(xy_pix, width=120, height=120)
    c = mcollections.PatchCollection([e], facecolor='coral',
                                     alpha=0.5)
    c.set_transform(None)
    ax.add_collection(c)
    assert isinstance(c.get_transform(), mtrans.IdentityTransform)

    # providing an IdentityTransform puts the ellipse in device coordinates
    e = mpatches.Ellipse(xy_pix, width=100, height=100)
    c = mcollections.PatchCollection([e], transform=mtrans.IdentityTransform(),
                                     alpha=0.5)
    ax.add_collection(c)
    assert isinstance(c._transOffset, mtrans.IdentityTransform)


@image_comparison(baseline_images=["clip_path_clipping"], remove_text=True)
def test_clipping():
    exterior = mpath.Path.unit_rectangle().deepcopy()
    exterior.vertices *= 4
    exterior.vertices -= 2
    interior = mpath.Path.unit_circle().deepcopy()
    interior.vertices = interior.vertices[::-1]
    clip_path = mpath.Path(vertices=np.concatenate([exterior.vertices,
                                                    interior.vertices]),
                           codes=np.concatenate([exterior.codes,
                                                 interior.codes]))

    star = mpath.Path.unit_regular_star(6).deepcopy()
    star.vertices *= 2.6

    ax1 = plt.subplot(121)
    col = mcollections.PathCollection([star], lw=5, edgecolor='blue',
                                      facecolor='red', alpha=0.7, hatch='*')
    col.set_clip_path(clip_path, ax1.transData)
    ax1.add_collection(col)

    ax2 = plt.subplot(122, sharex=ax1, sharey=ax1)
    patch = mpatches.PathPatch(star, lw=5, edgecolor='blue', facecolor='red',
                               alpha=0.7, hatch='*')
    patch.set_clip_path(clip_path, ax2.transData)
    ax2.add_patch(patch)

    ax1.set_xlim([-3, 3])
    ax1.set_ylim([-3, 3])


@cleanup
def test_cull_markers():
    x = np.random.random(20000)
    y = np.random.random(20000)

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.plot(x, y, 'k.')
    ax.set_xlim(2, 3)

    pdf = io.BytesIO()
    fig.savefig(pdf, format="pdf")
    assert len(pdf.getvalue()) < 8000

    svg = io.BytesIO()
    fig.savefig(svg, format="svg")
    assert len(svg.getvalue()) < 20000


@cleanup
def test_remove():
    fig, ax = plt.subplots()
    im = ax.imshow(np.arange(36).reshape(6, 6))
    ln, = ax.plot(range(5))

    assert_true(fig.stale)
    assert_true(ax.stale)

    fig.canvas.draw()
    assert_false(fig.stale)
    assert_false(ax.stale)
    assert_false(ln.stale)

    assert_true(im in ax.mouseover_set)
    assert_true(ln not in ax.mouseover_set)
    assert_true(im.axes is ax)

    im.remove()
    ln.remove()

    for art in [im, ln]:
        assert_true(art.axes is None)
        assert_true(art.figure is None)

    assert_true(im not in ax.mouseover_set)
    assert_true(fig.stale)
    assert_true(ax.stale)


@cleanup
def test_properties():
    ln = mlines.Line2D([], [])
    with warnings.catch_warnings(record=True) as w:
        # Cause all warnings to always be triggered.
        warnings.simplefilter("always")
        ln.properties()
        assert len(w) == 0


if __name__ == '__main__':
    import nose
    nose.runmodule(argv=['-s', '--with-doctest'], exit=False)
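# test_properties relies on warnings.catch_warnings(record=True) to assert that
# a call emits no warnings. The same pattern in isolation, stdlib-only (the
# helper name is illustrative):

```python
import warnings

def call_and_capture(fn, *args, **kwargs):
    """Run fn, returning its result together with every warning it raised."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # record every warning, even duplicates
        result = fn(*args, **kwargs)
    return result, caught

def noisy():
    warnings.warn("deprecated", DeprecationWarning)
    return 42
```

# simplefilter("always") matters: without it, Python's default once-per-location
# filtering can silently swallow repeat warnings and make the assertion flaky.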
| apache-2.0 |
lazywei/scikit-learn | examples/ensemble/plot_forest_importances.py | 241 | 1761 | """
=========================================
Feature importances with forests of trees
=========================================
This examples shows the use of forests of trees to evaluate the importance of
features on an artificial classification task. The red bars are the feature
importances of the forest, along with their inter-trees variability.
As expected, the plot suggests that 3 features are informative, while the
remaining are not.
"""
print(__doc__)

import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000,
                           n_features=10,
                           n_informative=3,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=2,
                           random_state=0,
                           shuffle=False)

# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
                              random_state=0)

forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")

for f in range(10):
    print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))

# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(10), importances[indices],
        color="r", yerr=std[indices], align="center")
plt.xticks(range(10), indices)
plt.xlim([-1, 10])
plt.show()
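# The ranking and error bars above come from the mean and standard deviation of
# per-tree importances. That aggregation step in isolation (function name is
# illustrative):

```python
import numpy as np

def rank_importances(per_tree):
    """per_tree: (n_trees, n_features) array of importances.
    Returns (order, mean, std) with features sorted most -> least important."""
    per_tree = np.asarray(per_tree, dtype=float)
    mean = per_tree.mean(axis=0)       # the forest's feature_importances_
    std = per_tree.std(axis=0)         # inter-tree variability (the yerr bars)
    order = np.argsort(mean)[::-1]     # descending ranking
    return order, mean, std
```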
| bsd-3-clause |
raymondxyang/tensorflow | tensorflow/examples/learn/wide_n_deep_tutorial.py | 18 | 8111 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Example code for TensorFlow Wide & Deep Tutorial using TF.Learn API."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import shutil
import sys
import tempfile

import pandas as pd
from six.moves import urllib
import tensorflow as tf

CSV_COLUMNS = [
    "age", "workclass", "fnlwgt", "education", "education_num",
    "marital_status", "occupation", "relationship", "race", "gender",
    "capital_gain", "capital_loss", "hours_per_week", "native_country",
    "income_bracket"
]

gender = tf.feature_column.categorical_column_with_vocabulary_list(
    "gender", ["Female", "Male"])
education = tf.feature_column.categorical_column_with_vocabulary_list(
    "education", [
        "Bachelors", "HS-grad", "11th", "Masters", "9th",
        "Some-college", "Assoc-acdm", "Assoc-voc", "7th-8th",
        "Doctorate", "Prof-school", "5th-6th", "10th", "1st-4th",
        "Preschool", "12th"
    ])
marital_status = tf.feature_column.categorical_column_with_vocabulary_list(
    "marital_status", [
        "Married-civ-spouse", "Divorced", "Married-spouse-absent",
        "Never-married", "Separated", "Married-AF-spouse", "Widowed"
    ])
relationship = tf.feature_column.categorical_column_with_vocabulary_list(
    "relationship", [
        "Husband", "Not-in-family", "Wife", "Own-child", "Unmarried",
        "Other-relative"
    ])
workclass = tf.feature_column.categorical_column_with_vocabulary_list(
    "workclass", [
        "Self-emp-not-inc", "Private", "State-gov", "Federal-gov",
        "Local-gov", "?", "Self-emp-inc", "Without-pay", "Never-worked"
    ])

# To show an example of hashing:
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    "occupation", hash_bucket_size=1000)
native_country = tf.feature_column.categorical_column_with_hash_bucket(
    "native_country", hash_bucket_size=1000)

# Continuous base columns.
age = tf.feature_column.numeric_column("age")
education_num = tf.feature_column.numeric_column("education_num")
capital_gain = tf.feature_column.numeric_column("capital_gain")
capital_loss = tf.feature_column.numeric_column("capital_loss")
hours_per_week = tf.feature_column.numeric_column("hours_per_week")

# Transformations.
age_buckets = tf.feature_column.bucketized_column(
    age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])

# Wide columns and deep columns.
base_columns = [
    gender, education, marital_status, relationship, workclass, occupation,
    native_country, age_buckets,
]

crossed_columns = [
    tf.feature_column.crossed_column(
        ["education", "occupation"], hash_bucket_size=1000),
    tf.feature_column.crossed_column(
        [age_buckets, "education", "occupation"], hash_bucket_size=1000),
    tf.feature_column.crossed_column(
        ["native_country", "occupation"], hash_bucket_size=1000)
]

deep_columns = [
    tf.feature_column.indicator_column(workclass),
    tf.feature_column.indicator_column(education),
    tf.feature_column.indicator_column(gender),
    tf.feature_column.indicator_column(relationship),
    # To show an example of embedding
    tf.feature_column.embedding_column(native_country, dimension=8),
    tf.feature_column.embedding_column(occupation, dimension=8),
    age,
    education_num,
    capital_gain,
    capital_loss,
    hours_per_week,
]


def maybe_download(train_data, test_data):
  """Maybe downloads training data and returns train and test file names."""
  if train_data:
    train_file_name = train_data
  else:
    train_file = tempfile.NamedTemporaryFile(delete=False)
    urllib.request.urlretrieve(
        "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
        train_file.name)  # pylint: disable=line-too-long
    train_file_name = train_file.name
    train_file.close()
    print("Training data is downloaded to %s" % train_file_name)

  if test_data:
    test_file_name = test_data
  else:
    test_file = tempfile.NamedTemporaryFile(delete=False)
    urllib.request.urlretrieve(
        "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
        test_file.name)  # pylint: disable=line-too-long
    test_file_name = test_file.name
    test_file.close()
    print("Test data is downloaded to %s" % test_file_name)

  return train_file_name, test_file_name


def build_estimator(model_dir, model_type):
  """Build an estimator."""
  if model_type == "wide":
    m = tf.estimator.LinearClassifier(
        model_dir=model_dir, feature_columns=base_columns + crossed_columns)
  elif model_type == "deep":
    m = tf.estimator.DNNClassifier(
        model_dir=model_dir,
        feature_columns=deep_columns,
        hidden_units=[100, 50])
  else:
    m = tf.estimator.DNNLinearCombinedClassifier(
        model_dir=model_dir,
        linear_feature_columns=crossed_columns,
        dnn_feature_columns=deep_columns,
        dnn_hidden_units=[100, 50])
  return m


def input_fn(data_file, num_epochs, shuffle):
  """Input builder function."""
  df_data = pd.read_csv(
      tf.gfile.Open(data_file),
      names=CSV_COLUMNS,
      skipinitialspace=True,
      engine="python",
      skiprows=1)
  # remove NaN elements
  df_data = df_data.dropna(how="any", axis=0)
  labels = df_data["income_bracket"].apply(lambda x: ">50K" in x).astype(int)
  return tf.estimator.inputs.pandas_input_fn(
      x=df_data,
      y=labels,
      batch_size=100,
      num_epochs=num_epochs,
      shuffle=shuffle,
      num_threads=5)


def train_and_eval(model_dir, model_type, train_steps, train_data, test_data):
  """Train and evaluate the model."""
  train_file_name, test_file_name = maybe_download(train_data, test_data)
  # Specify file path below if want to find the output easily
  model_dir = tempfile.mkdtemp() if not model_dir else model_dir

  m = build_estimator(model_dir, model_type)
  # set num_epochs to None to get infinite stream of data.
  m.train(
      input_fn=input_fn(train_file_name, num_epochs=None, shuffle=True),
      steps=train_steps)
  # set steps to None to run evaluation until all data consumed.
  results = m.evaluate(
      input_fn=input_fn(test_file_name, num_epochs=1, shuffle=False),
      steps=None)
  print("model directory = %s" % model_dir)
  for key in sorted(results):
    print("%s: %s" % (key, results[key]))

  # Manual cleanup
  shutil.rmtree(model_dir)


FLAGS = None


def main(_):
  train_and_eval(FLAGS.model_dir, FLAGS.model_type, FLAGS.train_steps,
                 FLAGS.train_data, FLAGS.test_data)


if __name__ == "__main__":
  parser = argparse.ArgumentParser()
  parser.register("type", "bool", lambda v: v.lower() == "true")
  parser.add_argument(
      "--model_dir",
      type=str,
      default="",
      help="Base directory for output models."
  )
  parser.add_argument(
      "--model_type",
      type=str,
      default="wide_n_deep",
      help="Valid model types: {'wide', 'deep', 'wide_n_deep'}."
  )
  parser.add_argument(
      "--train_steps",
      type=int,
      default=2000,
      help="Number of training steps."
  )
  parser.add_argument(
      "--train_data",
      type=str,
      default="",
      help="Path to the training data."
  )
  parser.add_argument(
      "--test_data",
      type=str,
      default="",
      help="Path to the test data."
  )
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
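# age_buckets and the crossed columns above discretize continuous features and
# hash feature combinations into a fixed vocabulary. The underlying transforms
# can be sketched without TensorFlow (the boundaries are the ones in the script;
# Python's salted str hash stands in for TF's stable fingerprint, so bucket ids
# differ across processes):

```python
import bisect

BOUNDARIES = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]

def bucketize(value, boundaries=BOUNDARIES):
    """Map a continuous value to its bucket index, 0 .. len(boundaries).
    A value equal to a boundary falls into the bucket to its right,
    matching tf.feature_column.bucketized_column."""
    return bisect.bisect_right(boundaries, value)

def crossed_bucket(features, hash_bucket_size=1000):
    """Hash a tuple of feature values into one of hash_bucket_size ids,
    mimicking tf.feature_column.crossed_column."""
    return hash("_x_".join(str(f) for f in features)) % hash_bucket_size
```

# Crossing lets the linear ("wide") part memorize specific combinations such as
# (age bucket, education, occupation) that no single base column captures.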
| apache-2.0 |
Thomsen22/MissingMoney | Premium - 24 Bus/premium_function.py | 1 | 18638 | # Python standard modules
import numpy as np
import pandas as pd
from collections import defaultdict

import optimization as results


def premiumfunction(timeperiod, bidtype, newpremium, reservemargin):

    df_price0, zones, gens_for_zones, df_zonalconsumption, df_zonalwindproduction, df_zonalsolarproduction, df_windprodload = DayAheadOptimization()
    df_cost = missingmoney(timeperiod, bidtype)

    # Find highest amount of missing money and determine premium
    if newpremium == 'yes':
        df_cost, typegenerator, premiumfind = premiumdetermination(df_cost)
    elif newpremium == 'no':
        df_cost = df_cost
        chosengenerator = df_cost['Premium'].argmax()
        typegenerator = df_cost['PrimaryFuel'][df_cost['Premium'].argmax()]
        premiumfind = df_cost['Premium'][chosengenerator]

    # Run DA optimization once again
    df_price1, zones, gens_for_zones, df_zonalconsumption, df_zonalwindproduction, df_zonalsolarproduction, df_windprodload = DayAheadOptimization()
    df_cost = missingmoney(timeperiod, bidtype)
    df_cost.to_csv('revenue_cost_gen.csv')

    capacityreq = {}
    for z in zones:
        capacityreq[z] = df_zonalconsumption[z].max() * reservemargin
    df_capacityreq = pd.DataFrame([[key,value] for key,value in capacityreq.items()],columns=["Zones","CapacityReq"]).set_index('Zones')

    df_generators = pd.read_csv('generators.csv').set_index('ID')

    # Check the reserve margin
    for z in zones:
        if sum(df_generators['capacity'][g] for g in gens_for_zones[z]) > df_capacityreq['CapacityReq'][z]:
            df_generators = plantdeactivation(zones, gens_for_zones, df_cost, df_capacityreq)
        elif sum(df_generators['capacity'][g] for g in gens_for_zones[z]) < df_capacityreq['CapacityReq'][z]:
            df_generators = plantactivation(zones, gens_for_zones, df_capacityreq)

    df_generators, df_cost = plantinvestment(df_price1, zones, gens_for_zones, timeperiod, df_capacityreq, typegenerator, premiumfind)
    df_cost.to_csv('revenue_cost_gen.csv')

    windcost = {}
    for z in df_price1.columns:
        for t in df_price1.index:
            windcost[z,t] = df_zonalwindproduction[z][t] * df_price1[z][t]
    totalwindcost = sum(windcost.values())

    solarcost = {}
    for z in df_price1.columns:
        for t in df_price1.index:
            solarcost[z,t] = df_zonalsolarproduction[z][t] * df_price1[z][t]
    totalsolarcost = sum(solarcost.values())

    windpenlevel = df_windprodload['WindPenetration[%]'].mean()
    solarpenlevel = df_windprodload['SolarPenetration[%]'].mean()
    windproduction = df_windprodload['WindProduction[MW]'].sum()

    return df_cost, df_generators, totalwindcost, totalsolarcost, windpenlevel, solarpenlevel


def premiumdetermination(df_cost):

    df_generators = pd.read_csv('generators.csv').set_index('ID')
    df_cost_temp = df_cost

    for g in df_cost_temp.index:
        if df_cost_temp['Premium'][g] > 0:
            df_cost_temp['MissingMoney'][g] = 0
        elif df_cost_temp['Premium'][g] == 0:
            df_cost_temp['MissingMoney'][g] = df_cost_temp['MissingMoney'][g]

    for g in df_cost_temp.index:
        if df_cost_temp['TotalProduction'][g] > 0:
            df_cost_temp['MissingMoney'][g] = df_cost_temp['MissingMoney'][g]
        elif df_cost_temp['TotalProduction'][g] == 0:
            df_cost_temp['MissingMoney'][g] = 0

    # Choose a generator from generators.csv
    chosengenerator = 'g16'
    typegenerator = 'GasCCGT'
    premiumfind = df_cost_temp['MissingMoney'][chosengenerator] / df_cost_temp['TotalProduction'][chosengenerator]

    premium = {}
    for g in df_cost.index:
        if df_cost['Premium'][g] > 0:
            premium[g] = df_cost['Premium'][g]
        elif df_cost['Premium'][g] == 0:
            if df_cost['PrimaryFuel'][g] == typegenerator:
                premium[g] = premiumfind
            elif df_cost['PrimaryFuel'][g] != typegenerator:
                premium[g] = 0
    premium_df = pd.DataFrame([[key,value] for key,value in premium.items()],columns=["Generator","Premium"]).set_index('Generator')

    df_cost['Premium'] = premium_df['Premium']
    df_generators['lincost'] = df_generators['lincostold'] - df_cost['Premium']

    for g in df_generators.index:
        if df_generators['lincost'][g] < 0:
            df_generators['lincost'][g] = -0.1
        elif df_generators['lincost'][g] >= 0:
            df_generators['lincost'][g] = df_generators['lincost'][g]

    df_cost.to_csv('revenue_cost_gen.csv')
    df_generators.to_csv('generators.csv')

    return df_cost, typegenerator, premiumfind

def DayAheadOptimization():

    df_price, df_genprod, df_lineflow, df_loadshed, df_windsolarload, df_revenueprod, network, times, generators, startup_number_df, df_zonalconsumption, df_windprod, df_solarprod, zones, gens_for_zones = results.optimization()

    revenue_cost_gen = pd.read_csv('revenue_cost_gen.csv').set_index('Generator')
    Gen_dataframe = df_revenueprod
    revenue_cost_gen['TotalRevenue'] = Gen_dataframe['Total Revenue'].map('{:.2f}'.format)
    revenue_cost_gen['TotalProduction'] = Gen_dataframe['Total Production'].map('{:.2f}'.format)
    revenue_cost_gen['NumberofS/U'] = startup_number_df['Total Start-Ups']
    revenue_cost_gen['Capacity'] = generators.capacity
    revenue_cost_gen['MarginalCost'] = generators.lincost
    revenue_cost_gen['S/Ucost'] = generators.cyclecost
    revenue_cost_gen['FixedO&MCost'] = generators.fixedomcost
    revenue_cost_gen['VarO&MCost'] = generators.varomcost
    revenue_cost_gen['LevelizedCapitalCost'] = generators.levcapcost
    revenue_cost_gen['PrimaryFuel'] = generators.primaryfuel
    revenue_cost_gen.to_csv('revenue_cost_gen.csv')

    return df_price, zones, gens_for_zones, df_zonalconsumption, df_windprod, df_solarprod, df_windsolarload

def missingmoney(timeperiod, bidtype):

    df_cost = pd.read_csv('revenue_cost_gen.csv').set_index('Generator')
    df_generators = pd.read_csv('generators.csv').set_index('ID')
    generators = df_generators.index

    if timeperiod == 'Week':
        period = 52
    elif timeperiod == 'Year':
        period = 1

    # The remaining missing money stemming from variable, fixed and capital cost is calculated separately
    totalrevenue = {}
    for g in generators:
        totalrevenue[g] = df_cost['TotalRevenue'][g] + (df_cost['TotalProduction'][g] * df_cost['Premium'][g])

    # Missing money from variable cost
    varcost = {}
    for g in generators:
        varcost[g] = (df_cost['TotalProduction'][g]*df_cost['MarginalCost'][g]+df_cost['TotalProduction'][g]*df_cost['VarO&MCost'][g]+df_cost['S/Ucost'][g]*df_cost['NumberofS/U'][g])
    mmvarcost = {}
    for g in generators:
        if totalrevenue[g] - varcost[g] >= 0:
            mmvarcost[g] = 0
        elif totalrevenue[g] - varcost[g] < 0:
            mmvarcost[g] = totalrevenue[g] - varcost[g]

    # Missing money including fixed costs
    fixedcost = {}
    for g in generators:
        fixedcost[g] = (df_cost['FixedO&MCost'][g]*df_cost['Capacity'][g])/period
    mmfixedcost = {}
    for g in generators:
        if totalrevenue[g] - (varcost[g] + fixedcost[g]) >= 0:
            mmfixedcost[g] = 0
        elif totalrevenue[g] - (varcost[g] + fixedcost[g]) < 0:
            mmfixedcost[g] = totalrevenue[g] - (varcost[g] + fixedcost[g])

    # Missing money including capital costs
    capcost = {}
    for g in generators:
        capcost[g] = (df_cost['LevelizedCapitalCost'][g]*1000000*df_cost['Capacity'][g])/period
    mmcapcost = {}
    for g in generators:
        if totalrevenue[g] - (varcost[g] + fixedcost[g] + capcost[g]) >= 0:
            mmcapcost[g] = 0
        elif totalrevenue[g] - (varcost[g] + fixedcost[g] + capcost[g]) < 0:
            mmcapcost[g] = totalrevenue[g] - (varcost[g] + fixedcost[g] + capcost[g])

    # Remaining amount of missing money
    if bidtype == 'Variable':
        missingmoney = {}
        for g in generators:
            missingmoney[g] = - mmvarcost[g]
    if bidtype == 'Fixed':
        missingmoney = {}
        for g in generators:
            missingmoney[g] = - mmfixedcost[g]
    if bidtype == 'Capital':
        missingmoney = {}
        for g in generators:
            missingmoney[g] = - mmcapcost[g]
    missingmoney_df = pd.DataFrame([[key,value] for key,value in missingmoney.items()],columns=["Generator","MissingMoney"]).set_index('Generator')

    df_cost['MissingMoney'] = missingmoney_df['MissingMoney']

    return df_cost
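# The missing-money logic above nets each generator's energy-market revenue
# (including any premium) against three nested cost tiers: variable only,
# variable + fixed O&M, and variable + fixed + levelized capital. A condensed
# standalone version of that calculation (scalar inputs, names illustrative):

```python
def missing_money(revenue, production, marg_cost, var_om, su_cost, n_su,
                  fixed_om, capacity, lev_cap_cost, period=52, tier="Capital"):
    """Positive shortfall of revenue against the chosen cost tier.
    period divides annualized fixed/capital costs (52 => weekly run);
    lev_cap_cost is in M-currency/MW, as in the dataframe above."""
    var = production * (marg_cost + var_om) + su_cost * n_su
    fixed = fixed_om * capacity / period
    cap = lev_cap_cost * 1e6 * capacity / period
    cost = {"Variable": var, "Fixed": var + fixed, "Capital": var + fixed + cap}[tier]
    return max(0.0, cost - revenue)
```

# A generator whose revenue covers the tier returns 0.0; otherwise the shortfall
# is what the premium mechanism later tries to close.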
def plantdeactivation(zones, gens_for_zones, df_cost, df_capacityreq):
df_generators = pd.read_csv('generators.csv').set_index('ID')
# Finding the missing-money treshold in each zone (2 plants in each zone can be mothballed)
missingmoney= {}
treshold = {}
for z in zones:
for g in gens_for_zones[z]:
if df_cost['MissingMoney'][g] > 0:
missingmoney[g] = df_cost['MissingMoney'][g]
mmlist = list(missingmoney.values())
elif df_cost['MissingMoney'][g] == 0:
missingmoney[g] = 0
mmlist = list(missingmoney.values())
if max(mmlist) == 0:
treshold[z] = max(mmlist)
elif max(mmlist) != 0:
treshold[z] = max(n for n in mmlist if n!=max(mmlist))
missingmoney.clear()
# Mothballing the two generators with highest missing money
for z in zones:
for g in gens_for_zones[z]:
if df_cost['MissingMoney'][g] >= treshold[z] and df_cost['MissingMoney'][g] > 0:
df_generators['capacity'][g] = 0
if df_generators['capacity'].sum(axis=0) - df_generators['capacity'][g] < sum(df_capacityreq['CapacityReq']):
df_generators['capacity'][g] = df_generators['capacityold'][g]
df_generators.to_csv('generators.csv')
return df_generators
def plantactivation(zones, gens_for_zones, df_capacityreq):
df_generators = pd.read_csv('generators.csv').set_index('ID')
zonalcapacity = {}
capacityfind = {}
for z in zones:
for g in gens_for_zones[z]:
if df_generators['capacity'][g] == 0:
zonalcapacity[g] = df_generators['capacityold'][g]
zonalcap = list(zonalcapacity.values())
elif df_generators['capacity'][g] != 0:
zonalcapacity[g] = 0
zonalcap = list(zonalcapacity.values())
capacityfind[z] = min(zonalcap, key=lambda x:abs(x-df_capacityreq['CapacityReq'][z]))
zonalcapacity.clear()
summation = df_generators['capacity'].sum(axis=0)
# Only one generator can be activated in each zone
for z in zones:
for g in gens_for_zones[z]:
if df_generators['capacity'][g] == 0 and df_generators['capacityold'][g] == capacityfind[z]:
df_generators['capacity'][g] = df_generators['capacityold'][g]
if df_generators['capacity'].sum(axis=0) - df_generators['capacity'][g] > summation:
df_generators['capacity'][g] = 0
df_generators.to_csv('generators.csv')
return df_generators
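plantactivation reactivates, per zone, the mothballed plant whose old capacity lies closest to the zonal requirement, using `min` with an absolute-difference key. A standalone sketch of that choice with made-up capacities:

```python
# Old capacities of mothballed plants in one zone versus the zonal
# capacity requirement (hypothetical numbers).
mothballed_capacity = {'g3': 400.0, 'g7': 150.0, 'g9': 260.0}
capacity_req = 240.0

# Pick the capacity closest to the requirement ...
best_cap = min(mothballed_capacity.values(), key=lambda x: abs(x - capacity_req))
# ... then the generator(s) that carry it.
best_gen = [g for g, c in mothballed_capacity.items() if c == best_cap]
print(best_cap, best_gen)  # 260.0 ['g9']
```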
def plantinvestment(df_price_DA, zones, gens_for_zones, timeperiod, df_capacityreq, typegenerator, premiumfind):
    df_generators = pd.read_csv('generators.csv').set_index('ID')  # Should this include new generators, or be a separate dataframe?
df_cost = pd.read_csv('revenue_cost_gen.csv').set_index('Generator')
times = df_price_DA.index
# Used to name generators
df_nodes = pd.read_csv('nodes.csv')
extragen = len(df_generators) - 12
nnodes = len(df_nodes)
if timeperiod == 'Week':
period = 52
elif timeperiod == 'Year':
period = 1
# Dictionary containing zonal market prices
MarketPriceDict = {}
for t in times:
for z in np.arange(len(zones)):
MarketPriceDict[df_price_DA.columns[z], t] = df_price_DA.loc[df_price_DA.index[t], df_price_DA.columns[z]]
    # Revenue from the Day-Ahead market for each generator in each zone.
    # Each generator produces whenever the market price exceeds its marginal cost.
energyrevenues = {}
energyrevenue = {}
productions = {}
production = {}
for g in df_generators.index:
for z in zones:
for t in times:
                if df_generators['lincostold'][g] < MarketPriceDict[z,t]:
                    # Infra-marginal rent: (price - marginal cost) * capacity.
                    energyrevenues[g,z,t] = (MarketPriceDict[z,t] - df_generators['lincostold'][g]) *\
                        df_generators['capacityold'][g]
                    productions[g,z,t] = df_generators['capacityold'][g]
                else:
                    energyrevenues[g,z,t] = 0
                    productions[g,z,t] = 0
energyrevenue[g,z] = sum(energyrevenues[g,z,t] for t in times)
production[g,z] = sum(productions[g,z,t] for t in times)
# Revenue from premium, generator assumed dispatched
capacityrevenue = {}
for z in zones:
for g in df_generators.index:
capacityrevenue[g,z] = df_cost['Premium'][g] * df_cost['TotalProduction'][g]
# Total revenue from capacity market and energy market
revenue = {}
for z in zones:
for g in df_generators.index:
revenue[g,z] = energyrevenue[g,z] + capacityrevenue[g,z]
# Variable cost
varcost = {}
for z in zones:
for g in df_generators.index:
varcost[g,z] = production[g,z] * df_generators['varomcost'][g]
# Fixed costs
fixedcost = {}
for z in zones:
for g in df_generators.index:
fixedcost[g,z] = (df_generators['fixedomcost'][g] * df_generators['capacityold'][g])/period
# Capital costs
capcost = {}
for z in zones:
for g in df_generators.index:
capcost[g,z] = (df_generators['levcapcost'][g]*1000000*df_generators['capacityold'][g])/period
sumcost = {}
for z in zones:
for g in df_generators.index:
sumcost[g,z] = varcost[g,z] + fixedcost[g,z] + capcost[g,z]
# Calculating possible profit for generators
profit = {}
for z in zones:
for g in df_generators.index:
if revenue[g,z] > sumcost[g,z]:
profit[g,z] = revenue[g,z] - sumcost[g,z]
            else:
                # Break-even or loss-making generators earn no profit.
                profit[g,z] = 0
# Dictionary containing profitable generator types
zone_generator = df_generators[['country','name']].values.tolist()
gens_for_zone = defaultdict(list)
for country, generator in zone_generator:
gens_for_zone[country].append(generator)
newgens = {}
for z in zones:
for g in df_generators.index:
if sum(df_generators['capacity']) < sum(df_capacityreq['CapacityReq']):
if sum(df_generators['capacity'][i] for i in gens_for_zone[z]) < df_capacityreq['CapacityReq'][z]:
if profit[g,z] > 0:
newgens.setdefault(z,[]).append(g)
elif sum(df_generators['capacity']) > sum(df_capacityreq['CapacityReq']):
if sum(df_generators['capacity'][i] for i in gens_for_zone[z]) >= df_capacityreq['CapacityReq'][z]:
if profit[g,z] > 0 and production[g,z] > 0:
newgens.setdefault(z,[]).append(g)
# Dataframe with new generators
geninv = pd.DataFrame(columns=df_generators.columns)
for z in newgens.keys():
for g in newgens[z]:
tempdf = df_generators.loc[[g]]
tempdf.name = "{arg1}{arg2}".format(arg1=g, arg2=z)
tempdf.country = z
tempdf.age = 'new'
geninv = geninv.append(tempdf)
geninv.index = geninv.name
# Excluding hydro investments and two types of same generator in each zone
geninv = geninv[geninv.primaryfuel != 'Hydro']
geninv = geninv.drop_duplicates(subset=['country', 'lincostold'], keep='first')
geninv['cum_sum'] = geninv['latitude'].cumsum()
for g in geninv.index:
geninv.name[g] = 'g%d' % (nnodes+extragen+geninv['cum_sum'][g])
geninv['lincost'][g] = geninv['lincostold'][g]
geninv.index = geninv.name
del geninv['cum_sum']
df_generators = pd.concat([geninv, df_generators]).reset_index()
df_generators.rename(columns={'index': 'ID'}, inplace=True)
df_generators = df_generators.set_index('ID')
newgens = geninv.index
for g in newgens:
df_cost.loc[g] = [0,0,0,0,0,0,0,0,0,0,0,0]
df_cost['Capacity'] = df_generators.capacity
df_cost['MarginalCost'] = df_generators.lincost
df_cost['S/Ucost'] = df_generators.cyclecost
df_cost['FixedO&MCost'] = df_generators.fixedomcost
df_cost['VarO&MCost'] = df_generators.varomcost
df_cost['LevelizedCapitalCost'] = df_generators.levcapcost
df_cost['PrimaryFuel'] = df_generators.primaryfuel
if df_cost['PrimaryFuel'][g] == typegenerator:
df_cost['Premium'][g] = premiumfind
elif df_cost['PrimaryFuel'][g] != typegenerator:
df_cost['Premium'][g] = 0
if df_cost['PrimaryFuel'][g] == typegenerator:
df_cost['MarginalCost'][g] = df_generators.lincost[g] - df_cost['Premium'][g]
elif df_cost['PrimaryFuel'][g] != typegenerator:
df_cost['MarginalCost'][g] = df_generators.lincost[g]
df_generators['lincost'][g] = df_cost['MarginalCost'][g]
        if df_generators['lincost'][g] < 0:
            # Floor slightly below zero so the unit still clears the market.
            df_generators['lincost'][g] = -0.1
            df_cost['MarginalCost'][g] = -0.1
df_generators.to_csv('generators.csv')
return df_generators, df_cost
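The energy-revenue expression in plantinvestment reduces to the classic infra-marginal rent, (price - marginal cost) * capacity, in every period where the price exceeds marginal cost. A standalone sketch over a toy price series:

```python
# Hypothetical hourly prices and one generator's parameters (toy numbers).
prices = [35.0, 48.0, 20.0, 60.0]
marginal_cost = 30.0
capacity = 100.0

revenue = 0.0
production = 0.0
for p in prices:
    if p > marginal_cost:
        # Infra-marginal rent: earn the price-cost spread on full capacity.
        revenue += (p - marginal_cost) * capacity
        production += capacity

print(revenue, production)  # 5300.0 300.0
```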
| gpl-3.0 |
chemelnucfin/tensorflow | tensorflow/contrib/learn/python/learn/estimators/dnn_test.py | 6 | 60842 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for DNNEstimators."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import json
import tempfile
import numpy as np
from tensorflow.contrib.layers.python.layers import feature_column
from tensorflow.contrib.learn.python.learn import experiment
from tensorflow.contrib.learn.python.learn.datasets import base
from tensorflow.contrib.learn.python.learn.estimators import _sklearn
from tensorflow.contrib.learn.python.learn.estimators import dnn
from tensorflow.contrib.learn.python.learn.estimators import dnn_linear_combined
from tensorflow.contrib.learn.python.learn.estimators import estimator
from tensorflow.contrib.learn.python.learn.estimators import estimator_test_utils
from tensorflow.contrib.learn.python.learn.estimators import head as head_lib
from tensorflow.contrib.learn.python.learn.estimators import model_fn
from tensorflow.contrib.learn.python.learn.estimators import run_config
from tensorflow.contrib.learn.python.learn.estimators import test_data
from tensorflow.contrib.learn.python.learn.metric_spec import MetricSpec
from tensorflow.contrib.metrics.python.ops import metric_ops
from tensorflow.python.feature_column import feature_column_lib as fc_core
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
from tensorflow.python.training import input as input_lib
from tensorflow.python.training import monitored_session
from tensorflow.python.training import server_lib
class EmbeddingMultiplierTest(test.TestCase):
"""dnn_model_fn tests."""
def testRaisesNonEmbeddingColumn(self):
one_hot_language = feature_column.one_hot_column(
feature_column.sparse_column_with_hash_bucket('language', 10))
params = {
'feature_columns': [one_hot_language],
'head': head_lib.multi_class_head(2),
'hidden_units': [1],
# Set lr mult to 0. to keep embeddings constant.
'embedding_lr_multipliers': {
one_hot_language: 0.0
},
}
features = {
'language':
sparse_tensor.SparseTensor(
values=['en', 'fr', 'zh'],
indices=[[0, 0], [1, 0], [2, 0]],
dense_shape=[3, 1]),
}
labels = constant_op.constant([[0], [0], [0]], dtype=dtypes.int32)
with self.assertRaisesRegexp(ValueError,
'can only be defined for embedding columns'):
dnn._dnn_model_fn(features, labels, model_fn.ModeKeys.TRAIN, params)
def testMultipliesGradient(self):
embedding_language = feature_column.embedding_column(
feature_column.sparse_column_with_hash_bucket('language', 10),
dimension=1,
initializer=init_ops.constant_initializer(0.1))
embedding_wire = feature_column.embedding_column(
feature_column.sparse_column_with_hash_bucket('wire', 10),
dimension=1,
initializer=init_ops.constant_initializer(0.1))
params = {
'feature_columns': [embedding_language, embedding_wire],
'head': head_lib.multi_class_head(2),
'hidden_units': [1],
# Set lr mult to 0. to keep embeddings constant.
'embedding_lr_multipliers': {
embedding_language: 0.0
},
}
features = {
'language':
sparse_tensor.SparseTensor(
values=['en', 'fr', 'zh'],
indices=[[0, 0], [1, 0], [2, 0]],
dense_shape=[3, 1]),
'wire':
sparse_tensor.SparseTensor(
values=['omar', 'stringer', 'marlo'],
indices=[[0, 0], [1, 0], [2, 0]],
dense_shape=[3, 1]),
}
labels = constant_op.constant([[0], [0], [0]], dtype=dtypes.int32)
model_ops = dnn._dnn_model_fn(features, labels, model_fn.ModeKeys.TRAIN,
params)
with monitored_session.MonitoredSession() as sess:
language_var = dnn_linear_combined._get_embedding_variable(
embedding_language, 'dnn', 'dnn/input_from_feature_columns')
wire_var = dnn_linear_combined._get_embedding_variable(
embedding_wire, 'dnn', 'dnn/input_from_feature_columns')
for _ in range(2):
_, language_value, wire_value = sess.run(
[model_ops.train_op, language_var, wire_var])
initial_value = np.full_like(language_value, 0.1)
self.assertTrue(np.all(np.isclose(language_value, initial_value)))
self.assertFalse(np.all(np.isclose(wire_value, initial_value)))
class ActivationFunctionTest(test.TestCase):
def _getModelForActivation(self, activation_fn):
embedding_language = feature_column.embedding_column(
feature_column.sparse_column_with_hash_bucket('language', 10),
dimension=1,
initializer=init_ops.constant_initializer(0.1))
params = {
'feature_columns': [embedding_language],
'head': head_lib.multi_class_head(2),
'hidden_units': [1],
'activation_fn': activation_fn,
}
features = {
'language':
sparse_tensor.SparseTensor(
values=['en', 'fr', 'zh'],
indices=[[0, 0], [1, 0], [2, 0]],
dense_shape=[3, 1]),
}
labels = constant_op.constant([[0], [0], [0]], dtype=dtypes.int32)
return dnn._dnn_model_fn(features, labels, model_fn.ModeKeys.TRAIN, params)
def testValidActivation(self):
_ = self._getModelForActivation('relu')
def testRaisesOnBadActivationName(self):
with self.assertRaisesRegexp(ValueError,
'Activation name should be one of'):
self._getModelForActivation('max_pool')
class DNNEstimatorTest(test.TestCase):
def _assertInRange(self, expected_min, expected_max, actual):
self.assertLessEqual(expected_min, actual)
self.assertGreaterEqual(expected_max, actual)
def testExperimentIntegration(self):
exp = experiment.Experiment(
estimator=dnn.DNNClassifier(
n_classes=3,
feature_columns=[
feature_column.real_valued_column(
'feature', dimension=4)
],
hidden_units=[3, 3]),
train_input_fn=test_data.iris_input_multiclass_fn,
eval_input_fn=test_data.iris_input_multiclass_fn)
exp.test()
def testEstimatorContract(self):
estimator_test_utils.assert_estimator_contract(self, dnn.DNNEstimator)
def testTrainWithWeights(self):
"""Tests training with given weight column."""
def _input_fn_train():
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
# First row has more weight than others. Model should fit (y=x) better
# than (y=Not(x)) due to the relative higher weight of the first row.
labels = constant_op.constant([[1], [0], [0], [0]])
features = {
'x': array_ops.ones(shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[100.], [3.], [2.], [2.]])
}
return features, labels
def _input_fn_eval():
# Create 4 rows (y = x)
labels = constant_op.constant([[1], [1], [1], [1]])
features = {
'x': array_ops.ones(shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[1.], [1.], [1.], [1.]])
}
return features, labels
dnn_estimator = dnn.DNNEstimator(
head=head_lib.multi_class_head(2, weight_column_name='w'),
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
dnn_estimator.fit(input_fn=_input_fn_train, steps=5)
scores = dnn_estimator.evaluate(input_fn=_input_fn_eval, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
class DNNClassifierTest(test.TestCase):
def testExperimentIntegration(self):
exp = experiment.Experiment(
estimator=dnn.DNNClassifier(
n_classes=3,
feature_columns=[
feature_column.real_valued_column(
'feature', dimension=4)
],
hidden_units=[3, 3]),
train_input_fn=test_data.iris_input_multiclass_fn,
eval_input_fn=test_data.iris_input_multiclass_fn)
exp.test()
def _assertInRange(self, expected_min, expected_max, actual):
self.assertLessEqual(expected_min, actual)
self.assertGreaterEqual(expected_max, actual)
def testEstimatorContract(self):
estimator_test_utils.assert_estimator_contract(self, dnn.DNNClassifier)
def testEmbeddingMultiplier(self):
embedding_language = feature_column.embedding_column(
feature_column.sparse_column_with_hash_bucket('language', 10),
dimension=1,
initializer=init_ops.constant_initializer(0.1))
classifier = dnn.DNNClassifier(
feature_columns=[embedding_language],
hidden_units=[3, 3],
embedding_lr_multipliers={embedding_language: 0.8})
self.assertEqual({
embedding_language: 0.8
}, classifier.params['embedding_lr_multipliers'])
def testInputPartitionSize(self):
def _input_fn_float_label(num_epochs=None):
features = {
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
labels = constant_op.constant([[0.8], [0.], [0.2]], dtype=dtypes.float32)
return features, labels
language_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(language_column, dimension=1),
]
    # Set num_ps_replicas to be 10 and the min slice size to be extremely small,
    # so as to ensure that there'll be 10 partitions produced.
config = run_config.RunConfig(tf_random_seed=1)
config._num_ps_replicas = 10
classifier = dnn.DNNClassifier(
n_classes=2,
feature_columns=feature_columns,
hidden_units=[3, 3],
optimizer='Adagrad',
config=config,
input_layer_min_slice_size=1)
# Ensure the param is passed in.
self.assertEqual(1, classifier.params['input_layer_min_slice_size'])
# Ensure the partition count is 10.
classifier.fit(input_fn=_input_fn_float_label, steps=50)
partition_count = 0
for name in classifier.get_variable_names():
if 'language_embedding' in name and 'Adagrad' in name:
partition_count += 1
self.assertEqual(10, partition_count)
def testLogisticRegression_MatrixData(self):
"""Tests binary classification using matrix data as input."""
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
classifier = dnn.DNNClassifier(
feature_columns=cont_features,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
input_fn = test_data.iris_input_logistic_fn
classifier.fit(input_fn=input_fn, steps=5)
scores = classifier.evaluate(input_fn=input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
def testLogisticRegression_MatrixData_Labels1D(self):
"""Same as the last test, but label shape is [100] instead of [100, 1]."""
def _input_fn():
iris = test_data.prepare_iris_data_for_logistic_regression()
return {
'feature': constant_op.constant(
iris.data, dtype=dtypes.float32)
}, constant_op.constant(
iris.target, shape=[100], dtype=dtypes.int32)
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
classifier = dnn.DNNClassifier(
feature_columns=cont_features,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=5)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
def testLogisticRegression_NpMatrixData(self):
"""Tests binary classification using numpy matrix data as input."""
iris = test_data.prepare_iris_data_for_logistic_regression()
train_x = iris.data
train_y = iris.target
feature_columns = [feature_column.real_valued_column('', dimension=4)]
classifier = dnn.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(x=train_x, y=train_y, steps=5)
scores = classifier.evaluate(x=train_x, y=train_y, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
def _assertBinaryPredictions(self, expected_len, predictions):
self.assertEqual(expected_len, len(predictions))
for prediction in predictions:
self.assertIn(prediction, (0, 1))
def _assertClassificationPredictions(
self, expected_len, n_classes, predictions):
self.assertEqual(expected_len, len(predictions))
for prediction in predictions:
self.assertIn(prediction, range(n_classes))
def _assertProbabilities(self, expected_batch_size, expected_n_classes,
probabilities):
self.assertEqual(expected_batch_size, len(probabilities))
for b in range(expected_batch_size):
self.assertEqual(expected_n_classes, len(probabilities[b]))
for i in range(expected_n_classes):
self._assertInRange(0.0, 1.0, probabilities[b][i])
def testEstimatorWithCoreFeatureColumns(self):
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [0.2], [.1]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([[1], [0], [0]], dtype=dtypes.int32)
language_column = fc_core.categorical_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
fc_core.embedding_column(language_column, dimension=1),
fc_core.numeric_column('age')
]
classifier = dnn.DNNClassifier(
n_classes=2,
feature_columns=feature_columns,
hidden_units=[10, 10],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=50)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predicted_classes = list(
classifier.predict_classes(input_fn=predict_input_fn, as_iterable=True))
self._assertBinaryPredictions(3, predicted_classes)
predictions = list(
classifier.predict(input_fn=predict_input_fn, as_iterable=True))
self.assertAllEqual(predicted_classes, predictions)
def testLogisticRegression_TensorData(self):
"""Tests binary classification using tensor data as input."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [0.2], [.1]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([[1], [0], [0]], dtype=dtypes.int32)
language_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
language_column, dimension=1),
feature_column.real_valued_column('age')
]
classifier = dnn.DNNClassifier(
n_classes=2,
feature_columns=feature_columns,
hidden_units=[10, 10],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=50)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predicted_classes = list(
classifier.predict_classes(
input_fn=predict_input_fn, as_iterable=True))
self._assertBinaryPredictions(3, predicted_classes)
predictions = list(
classifier.predict(input_fn=predict_input_fn, as_iterable=True))
self.assertAllEqual(predicted_classes, predictions)
def testLogisticRegression_FloatLabel(self):
"""Tests binary classification with float labels."""
def _input_fn_float_label(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[50], [20], [10]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
labels = constant_op.constant([[0.8], [0.], [0.2]], dtype=dtypes.float32)
return features, labels
language_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
language_column, dimension=1),
feature_column.real_valued_column('age')
]
classifier = dnn.DNNClassifier(
n_classes=2,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn_float_label, steps=50)
predict_input_fn = functools.partial(_input_fn_float_label, num_epochs=1)
predicted_classes = list(
classifier.predict_classes(
input_fn=predict_input_fn, as_iterable=True))
self._assertBinaryPredictions(3, predicted_classes)
predictions = list(
classifier.predict(
input_fn=predict_input_fn, as_iterable=True))
self.assertAllEqual(predicted_classes, predictions)
predictions_proba = list(
classifier.predict_proba(
input_fn=predict_input_fn, as_iterable=True))
self._assertProbabilities(3, 2, predictions_proba)
def testMultiClass_MatrixData(self):
"""Tests multi-class classification using matrix data as input."""
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
classifier = dnn.DNNClassifier(
n_classes=3,
feature_columns=cont_features,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
input_fn = test_data.iris_input_multiclass_fn
classifier.fit(input_fn=input_fn, steps=200)
scores = classifier.evaluate(input_fn=input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
def testMultiClass_MatrixData_Labels1D(self):
"""Same as the last test, but label shape is [150] instead of [150, 1]."""
def _input_fn():
iris = base.load_iris()
return {
'feature': constant_op.constant(
iris.data, dtype=dtypes.float32)
}, constant_op.constant(
iris.target, shape=[150], dtype=dtypes.int32)
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
classifier = dnn.DNNClassifier(
n_classes=3,
feature_columns=cont_features,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=200)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
def testMultiClass_NpMatrixData(self):
"""Tests multi-class classification using numpy matrix data as input."""
iris = base.load_iris()
train_x = iris.data
train_y = iris.target
feature_columns = [feature_column.real_valued_column('', dimension=4)]
classifier = dnn.DNNClassifier(
n_classes=3,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(x=train_x, y=train_y, steps=200)
scores = classifier.evaluate(x=train_x, y=train_y, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
def testMultiClassLabelKeys(self):
"""Tests n_classes > 2 with label_keys vocabulary for labels."""
# Byte literals needed for python3 test to pass.
label_keys = [b'label0', b'label1', b'label2']
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [0.2], [.1]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
labels = constant_op.constant(
[[label_keys[1]], [label_keys[0]], [label_keys[0]]],
dtype=dtypes.string)
return features, labels
language_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
language_column, dimension=1),
feature_column.real_valued_column('age')
]
classifier = dnn.DNNClassifier(
n_classes=3,
feature_columns=feature_columns,
hidden_units=[10, 10],
label_keys=label_keys,
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=50)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predicted_classes = list(
classifier.predict_classes(
input_fn=predict_input_fn, as_iterable=True))
self.assertEqual(3, len(predicted_classes))
for pred in predicted_classes:
self.assertIn(pred, label_keys)
predictions = list(
classifier.predict(input_fn=predict_input_fn, as_iterable=True))
self.assertAllEqual(predicted_classes, predictions)
def testLoss(self):
"""Tests loss calculation."""
def _input_fn_train():
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
# The logistic prediction should be (y = 0.25).
labels = constant_op.constant([[1], [0], [0], [0]])
features = {'x': array_ops.ones(shape=[4, 1], dtype=dtypes.float32),}
return features, labels
classifier = dnn.DNNClassifier(
n_classes=2,
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn_train, steps=5)
scores = classifier.evaluate(input_fn=_input_fn_train, steps=1)
self.assertIn('loss', scores)
def testLossWithWeights(self):
"""Tests loss calculation with weights."""
def _input_fn_train():
# 4 rows with equal weight, one of them (y = x), three of them (y=Not(x))
# The logistic prediction should be (y = 0.25).
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[1.], [1.], [1.], [1.]])
}
return features, labels
def _input_fn_eval():
# 4 rows, with different weights.
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[7.], [1.], [1.], [1.]])
}
return features, labels
classifier = dnn.DNNClassifier(
weight_column_name='w',
n_classes=2,
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn_train, steps=5)
scores = classifier.evaluate(input_fn=_input_fn_eval, steps=1)
self.assertIn('loss', scores)
def testTrainWithWeights(self):
"""Tests training with given weight column."""
def _input_fn_train():
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
# First row has more weight than others. Model should fit (y=x) better
# than (y=Not(x)) due to the relative higher weight of the first row.
labels = constant_op.constant([[1], [0], [0], [0]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[100.], [3.], [2.], [2.]])
}
return features, labels
def _input_fn_eval():
# Create 4 rows (y = x)
labels = constant_op.constant([[1], [1], [1], [1]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[1.], [1.], [1.], [1.]])
}
return features, labels
classifier = dnn.DNNClassifier(
weight_column_name='w',
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn_train, steps=5)
scores = classifier.evaluate(input_fn=_input_fn_eval, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
def testPredict_AsIterableFalse(self):
"""Tests predict and predict_prob methods with as_iterable=False."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [.2], [.1]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([[1], [0], [0]], dtype=dtypes.int32)
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1)
]
n_classes = 3
classifier = dnn.DNNClassifier(
n_classes=n_classes,
feature_columns=feature_columns,
hidden_units=[10, 10],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=100)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
predicted_classes = classifier.predict_classes(
input_fn=_input_fn, as_iterable=False)
self._assertClassificationPredictions(3, n_classes, predicted_classes)
predictions = classifier.predict(input_fn=_input_fn, as_iterable=False)
self.assertAllEqual(predicted_classes, predictions)
probabilities = classifier.predict_proba(
input_fn=_input_fn, as_iterable=False)
self._assertProbabilities(3, n_classes, probabilities)
def testPredict_AsIterable(self):
"""Tests predict and predict_prob methods with as_iterable=True."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [.2], [.1]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([[1], [0], [0]], dtype=dtypes.int32)
language_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
language_column, dimension=1),
feature_column.real_valued_column('age')
]
n_classes = 3
classifier = dnn.DNNClassifier(
n_classes=n_classes,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=300)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predicted_classes = list(
classifier.predict_classes(
input_fn=predict_input_fn, as_iterable=True))
self._assertClassificationPredictions(3, n_classes, predicted_classes)
predictions = list(
classifier.predict(
input_fn=predict_input_fn, as_iterable=True))
self.assertAllEqual(predicted_classes, predictions)
predicted_proba = list(
classifier.predict_proba(
input_fn=predict_input_fn, as_iterable=True))
self._assertProbabilities(3, n_classes, predicted_proba)
def testCustomMetrics(self):
"""Tests custom evaluation metrics."""
def _input_fn(num_epochs=None):
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
labels = constant_op.constant([[1], [0], [0], [0]])
features = {
'x':
input_lib.limit_epochs(
array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
num_epochs=num_epochs),
}
return features, labels
def _my_metric_op(predictions, labels):
# For the case of binary classification, the 2nd column of "predictions"
# denotes the model predictions.
labels = math_ops.cast(labels, dtypes.float32)
predictions = array_ops.strided_slice(
predictions, [0, 1], [-1, 2], end_mask=1)
labels = math_ops.cast(labels, predictions.dtype)
return math_ops.reduce_sum(math_ops.multiply(predictions, labels))
classifier = dnn.DNNClassifier(
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=5)
scores = classifier.evaluate(
input_fn=_input_fn,
steps=5,
metrics={
'my_accuracy':
MetricSpec(
metric_fn=metric_ops.streaming_accuracy,
prediction_key='classes'),
'my_precision':
MetricSpec(
metric_fn=metric_ops.streaming_precision,
prediction_key='classes'),
'my_metric':
MetricSpec(
metric_fn=_my_metric_op, prediction_key='probabilities')
})
self.assertTrue(
set(['loss', 'my_accuracy', 'my_precision', 'my_metric']).issubset(
set(scores.keys())))
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predictions = np.array(list(classifier.predict_classes(
input_fn=predict_input_fn)))
self.assertEqual(
_sklearn.accuracy_score([1, 0, 0, 0], predictions),
scores['my_accuracy'])
# Test the case where the 2nd element of the key is neither "classes" nor
# "probabilities".
with self.assertRaisesRegexp(KeyError, 'bad_type'):
classifier.evaluate(
input_fn=_input_fn,
steps=5,
metrics={
'bad_name':
MetricSpec(
metric_fn=metric_ops.streaming_auc,
prediction_key='bad_type')
})
def testTrainSaveLoad(self):
"""Tests that insures you can save and reload a trained model."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [.2], [.1]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([[1], [0], [0]], dtype=dtypes.int32)
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1)
]
model_dir = tempfile.mkdtemp()
classifier = dnn.DNNClassifier(
model_dir=model_dir,
n_classes=3,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
classifier.fit(input_fn=_input_fn, steps=5)
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predictions1 = classifier.predict_classes(input_fn=predict_input_fn)
del classifier
classifier2 = dnn.DNNClassifier(
model_dir=model_dir,
n_classes=3,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
predictions2 = classifier2.predict_classes(input_fn=predict_input_fn)
self.assertEqual(list(predictions1), list(predictions2))
def testTrainWithPartitionedVariables(self):
"""Tests training with partitioned variables."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [.2], [.1]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([[1], [0], [0]], dtype=dtypes.int32)
# The given hash_bucket_size results in variables larger than the
# default min_slice_size attribute, so the variables are partitioned.
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=2e7)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1)
]
tf_config = {
'cluster': {
run_config.TaskType.PS: ['fake_ps_0', 'fake_ps_1']
}
}
with test.mock.patch.dict('os.environ',
{'TF_CONFIG': json.dumps(tf_config)}):
config = run_config.RunConfig(tf_random_seed=1)
# Because we did not start a distributed cluster, we need to pass an
# empty ClusterSpec, otherwise the device_setter will look for
# distributed jobs, such as "/job:ps" which are not present.
config._cluster_spec = server_lib.ClusterSpec({})
classifier = dnn.DNNClassifier(
n_classes=3,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=config)
classifier.fit(input_fn=_input_fn, steps=5)
scores = classifier.evaluate(input_fn=_input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
def testExport(self):
"""Tests export model for servo."""
def input_fn():
return {
'age':
constant_op.constant([1]),
'language':
sparse_tensor.SparseTensor(
values=['english'], indices=[[0, 0]], dense_shape=[1, 1])
}, constant_op.constant([[1]])
language = feature_column.sparse_column_with_hash_bucket('language', 100)
feature_columns = [
feature_column.real_valued_column('age'),
feature_column.embedding_column(
language, dimension=1)
]
classifier = dnn.DNNClassifier(
feature_columns=feature_columns, hidden_units=[3, 3])
classifier.fit(input_fn=input_fn, steps=5)
export_dir = tempfile.mkdtemp()
classifier.export(export_dir)
def testEnableCenteredBias(self):
"""Tests that we can enable centered bias."""
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
classifier = dnn.DNNClassifier(
n_classes=3,
feature_columns=cont_features,
hidden_units=[3, 3],
enable_centered_bias=True,
config=run_config.RunConfig(tf_random_seed=1))
input_fn = test_data.iris_input_multiclass_fn
classifier.fit(input_fn=input_fn, steps=5)
self.assertIn('dnn/multi_class_head/centered_bias_weight',
classifier.get_variable_names())
scores = classifier.evaluate(input_fn=input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
def testDisableCenteredBias(self):
"""Tests that we can disable centered bias."""
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
classifier = dnn.DNNClassifier(
n_classes=3,
feature_columns=cont_features,
hidden_units=[3, 3],
enable_centered_bias=False,
config=run_config.RunConfig(tf_random_seed=1))
input_fn = test_data.iris_input_multiclass_fn
classifier.fit(input_fn=input_fn, steps=5)
self.assertNotIn('centered_bias_weight', classifier.get_variable_names())
scores = classifier.evaluate(input_fn=input_fn, steps=1)
self._assertInRange(0.0, 1.0, scores['accuracy'])
self.assertIn('loss', scores)
class DNNRegressorTest(test.TestCase):
def testExperimentIntegration(self):
exp = experiment.Experiment(
estimator=dnn.DNNRegressor(
feature_columns=[
feature_column.real_valued_column(
'feature', dimension=4)
],
hidden_units=[3, 3]),
train_input_fn=test_data.iris_input_logistic_fn,
eval_input_fn=test_data.iris_input_logistic_fn)
exp.test()
def testEstimatorContract(self):
estimator_test_utils.assert_estimator_contract(self, dnn.DNNRegressor)
def testRegression_MatrixData(self):
"""Tests regression using matrix data as input."""
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
regressor = dnn.DNNRegressor(
feature_columns=cont_features,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
input_fn = test_data.iris_input_logistic_fn
regressor.fit(input_fn=input_fn, steps=200)
scores = regressor.evaluate(input_fn=input_fn, steps=1)
self.assertIn('loss', scores)
def testRegression_MatrixData_Labels1D(self):
"""Same as the last test, but label shape is [100] instead of [100, 1]."""
def _input_fn():
iris = test_data.prepare_iris_data_for_logistic_regression()
return {
'feature': constant_op.constant(
iris.data, dtype=dtypes.float32)
}, constant_op.constant(
iris.target, shape=[100], dtype=dtypes.int32)
cont_features = [feature_column.real_valued_column('feature', dimension=4)]
regressor = dnn.DNNRegressor(
feature_columns=cont_features,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=200)
scores = regressor.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
def testRegression_NpMatrixData(self):
"""Tests binary classification using numpy matrix data as input."""
iris = test_data.prepare_iris_data_for_logistic_regression()
train_x = iris.data
train_y = iris.target
feature_columns = [feature_column.real_valued_column('', dimension=4)]
regressor = dnn.DNNRegressor(
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(x=train_x, y=train_y, steps=200)
scores = regressor.evaluate(x=train_x, y=train_y, steps=1)
self.assertIn('loss', scores)
def testRegression_TensorData(self):
"""Tests regression using tensor data as input."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[.8], [.15], [0.]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([1., 0., 0.2], dtype=dtypes.float32)
language_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
language_column, dimension=1),
feature_column.real_valued_column('age')
]
regressor = dnn.DNNRegressor(
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=200)
scores = regressor.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
def testLoss(self):
"""Tests loss calculation."""
def _input_fn_train():
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
# The algorithm should learn (y = 0.25).
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {'x': array_ops.ones(shape=[4, 1], dtype=dtypes.float32),}
return features, labels
regressor = dnn.DNNRegressor(
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn_train, steps=5)
scores = regressor.evaluate(input_fn=_input_fn_train, steps=1)
self.assertIn('loss', scores)
def testLossWithWeights(self):
"""Tests loss calculation with weights."""
def _input_fn_train():
# 4 rows with equal weight, one of them (y = x), three of them (y=Not(x))
# The algorithm should learn (y = 0.25).
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[1.], [1.], [1.], [1.]])
}
return features, labels
def _input_fn_eval():
# 4 rows, with different weights.
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[7.], [1.], [1.], [1.]])
}
return features, labels
regressor = dnn.DNNRegressor(
weight_column_name='w',
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn_train, steps=5)
scores = regressor.evaluate(input_fn=_input_fn_eval, steps=1)
self.assertIn('loss', scores)
def testTrainWithWeights(self):
"""Tests training with given weight column."""
def _input_fn_train():
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
# First row has more weight than others. Model should fit (y=x) better
      # than (y=Not(x)) due to the relatively higher weight of the first row.
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[100.], [3.], [2.], [2.]])
}
return features, labels
def _input_fn_eval():
# Create 4 rows (y = x)
labels = constant_op.constant([[1.], [1.], [1.], [1.]])
features = {
'x': array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
'w': constant_op.constant([[1.], [1.], [1.], [1.]])
}
return features, labels
regressor = dnn.DNNRegressor(
weight_column_name='w',
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn_train, steps=5)
scores = regressor.evaluate(input_fn=_input_fn_eval, steps=1)
self.assertIn('loss', scores)
def _assertRegressionOutputs(
self, predictions, expected_shape):
predictions_nparray = np.array(predictions)
self.assertAllEqual(expected_shape, predictions_nparray.shape)
self.assertTrue(np.issubdtype(predictions_nparray.dtype, np.floating))
def testPredict_AsIterableFalse(self):
"""Tests predict method with as_iterable=False."""
labels = [1., 0., 0.2]
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[0.8], [0.15], [0.]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant(labels, dtype=dtypes.float32)
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1),
feature_column.real_valued_column('age')
]
regressor = dnn.DNNRegressor(
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=200)
scores = regressor.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
predicted_scores = regressor.predict_scores(
input_fn=_input_fn, as_iterable=False)
self._assertRegressionOutputs(predicted_scores, [3])
predictions = regressor.predict(input_fn=_input_fn, as_iterable=False)
self.assertAllClose(predicted_scores, predictions)
def testPredict_AsIterable(self):
"""Tests predict method with as_iterable=True."""
labels = [1., 0., 0.2]
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[0.8], [0.15], [0.]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant(labels, dtype=dtypes.float32)
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1),
feature_column.real_valued_column('age')
]
regressor = dnn.DNNRegressor(
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=200)
scores = regressor.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predicted_scores = list(
regressor.predict_scores(
input_fn=predict_input_fn, as_iterable=True))
self._assertRegressionOutputs(predicted_scores, [3])
predictions = list(
regressor.predict(input_fn=predict_input_fn, as_iterable=True))
self.assertAllClose(predicted_scores, predictions)
def testCustomMetrics(self):
"""Tests custom evaluation metrics."""
def _input_fn(num_epochs=None):
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {
'x':
input_lib.limit_epochs(
array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
num_epochs=num_epochs),
}
return features, labels
def _my_metric_op(predictions, labels):
return math_ops.reduce_sum(math_ops.multiply(predictions, labels))
regressor = dnn.DNNRegressor(
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=5)
scores = regressor.evaluate(
input_fn=_input_fn,
steps=1,
metrics={
'my_error': metric_ops.streaming_mean_squared_error,
('my_metric', 'scores'): _my_metric_op
})
self.assertIn('loss', set(scores.keys()))
self.assertIn('my_error', set(scores.keys()))
self.assertIn('my_metric', set(scores.keys()))
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predictions = np.array(list(regressor.predict_scores(
input_fn=predict_input_fn)))
self.assertAlmostEqual(
_sklearn.mean_squared_error(np.array([1, 0, 0, 0]), predictions),
scores['my_error'])
    # Tests the case where the 2nd element of the key is not "scores".
with self.assertRaises(KeyError):
regressor.evaluate(
input_fn=_input_fn,
steps=1,
metrics={
('my_error', 'predictions'):
metric_ops.streaming_mean_squared_error
})
# Tests the case where the tuple of the key doesn't have 2 elements.
with self.assertRaises(ValueError):
regressor.evaluate(
input_fn=_input_fn,
steps=1,
metrics={
('bad_length_name', 'scores', 'bad_length'):
metric_ops.streaming_mean_squared_error
})
def testCustomMetricsWithMetricSpec(self):
"""Tests custom evaluation metrics that use MetricSpec."""
def _input_fn(num_epochs=None):
# Create 4 rows, one of them (y = x), three of them (y=Not(x))
labels = constant_op.constant([[1.], [0.], [0.], [0.]])
features = {
'x':
input_lib.limit_epochs(
array_ops.ones(
shape=[4, 1], dtype=dtypes.float32),
num_epochs=num_epochs),
}
return features, labels
def _my_metric_op(predictions, labels):
return math_ops.reduce_sum(math_ops.multiply(predictions, labels))
regressor = dnn.DNNRegressor(
feature_columns=[feature_column.real_valued_column('x')],
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=5)
scores = regressor.evaluate(
input_fn=_input_fn,
steps=1,
metrics={
'my_error':
MetricSpec(
metric_fn=metric_ops.streaming_mean_squared_error,
prediction_key='scores'),
'my_metric':
MetricSpec(
metric_fn=_my_metric_op, prediction_key='scores')
})
self.assertIn('loss', set(scores.keys()))
self.assertIn('my_error', set(scores.keys()))
self.assertIn('my_metric', set(scores.keys()))
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predictions = np.array(list(regressor.predict_scores(
input_fn=predict_input_fn)))
self.assertAlmostEqual(
_sklearn.mean_squared_error(np.array([1, 0, 0, 0]), predictions),
scores['my_error'])
# Tests the case where the prediction_key is not "scores".
with self.assertRaisesRegexp(KeyError, 'bad_type'):
regressor.evaluate(
input_fn=_input_fn,
steps=1,
metrics={
'bad_name':
MetricSpec(
metric_fn=metric_ops.streaming_auc,
prediction_key='bad_type')
})
def testTrainSaveLoad(self):
"""Tests that insures you can save and reload a trained model."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[0.8], [0.15], [0.]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([1., 0., 0.2], dtype=dtypes.float32)
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1),
feature_column.real_valued_column('age')
]
model_dir = tempfile.mkdtemp()
regressor = dnn.DNNRegressor(
model_dir=model_dir,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=5)
predict_input_fn = functools.partial(_input_fn, num_epochs=1)
predictions = list(regressor.predict_scores(input_fn=predict_input_fn))
del regressor
regressor2 = dnn.DNNRegressor(
model_dir=model_dir,
feature_columns=feature_columns,
hidden_units=[3, 3],
config=run_config.RunConfig(tf_random_seed=1))
predictions2 = list(regressor2.predict_scores(input_fn=predict_input_fn))
self.assertAllClose(predictions, predictions2)
def testTrainWithPartitionedVariables(self):
"""Tests training with partitioned variables."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[0.8], [0.15], [0.]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([1., 0., 0.2], dtype=dtypes.float32)
# The given hash_bucket_size results in variables larger than the
# default min_slice_size attribute, so the variables are partitioned.
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=2e7)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1),
feature_column.real_valued_column('age')
]
tf_config = {
'cluster': {
run_config.TaskType.PS: ['fake_ps_0', 'fake_ps_1']
}
}
with test.mock.patch.dict('os.environ',
{'TF_CONFIG': json.dumps(tf_config)}):
config = run_config.RunConfig(tf_random_seed=1)
# Because we did not start a distributed cluster, we need to pass an
# empty ClusterSpec, otherwise the device_setter will look for
# distributed jobs, such as "/job:ps" which are not present.
config._cluster_spec = server_lib.ClusterSpec({})
regressor = dnn.DNNRegressor(
feature_columns=feature_columns, hidden_units=[3, 3], config=config)
regressor.fit(input_fn=_input_fn, steps=5)
scores = regressor.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
def testEnableCenteredBias(self):
"""Tests that we can enable centered bias."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[0.8], [0.15], [0.]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([1., 0., 0.2], dtype=dtypes.float32)
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1),
feature_column.real_valued_column('age')
]
regressor = dnn.DNNRegressor(
feature_columns=feature_columns,
hidden_units=[3, 3],
enable_centered_bias=True,
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=5)
self.assertIn('dnn/regression_head/centered_bias_weight',
regressor.get_variable_names())
scores = regressor.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
def testDisableCenteredBias(self):
"""Tests that we can disable centered bias."""
def _input_fn(num_epochs=None):
features = {
'age':
input_lib.limit_epochs(
constant_op.constant([[0.8], [0.15], [0.]]),
num_epochs=num_epochs),
'language':
sparse_tensor.SparseTensor(
values=input_lib.limit_epochs(
['en', 'fr', 'zh'], num_epochs=num_epochs),
indices=[[0, 0], [0, 1], [2, 0]],
dense_shape=[3, 2])
}
return features, constant_op.constant([1., 0., 0.2], dtype=dtypes.float32)
sparse_column = feature_column.sparse_column_with_hash_bucket(
'language', hash_bucket_size=20)
feature_columns = [
feature_column.embedding_column(
sparse_column, dimension=1),
feature_column.real_valued_column('age')
]
regressor = dnn.DNNRegressor(
feature_columns=feature_columns,
hidden_units=[3, 3],
enable_centered_bias=False,
config=run_config.RunConfig(tf_random_seed=1))
regressor.fit(input_fn=_input_fn, steps=5)
self.assertNotIn('centered_bias_weight', regressor.get_variable_names())
scores = regressor.evaluate(input_fn=_input_fn, steps=1)
self.assertIn('loss', scores)
def boston_input_fn():
boston = base.load_boston()
features = math_ops.cast(
array_ops.reshape(constant_op.constant(boston.data), [-1, 13]),
dtypes.float32)
labels = math_ops.cast(
array_ops.reshape(constant_op.constant(boston.target), [-1, 1]),
dtypes.float32)
return features, labels
class FeatureColumnTest(test.TestCase):
def testTrain(self):
feature_columns = estimator.infer_real_valued_columns_from_input_fn(
boston_input_fn)
est = dnn.DNNRegressor(feature_columns=feature_columns, hidden_units=[3, 3])
est.fit(input_fn=boston_input_fn, steps=1)
_ = est.evaluate(input_fn=boston_input_fn, steps=1)
if __name__ == '__main__':
test.main()
| apache-2.0 |
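The custom-metric hook exercised in `testCustomMetrics` above boils down to a `metric_fn(predictions, labels)` returning a scalar, fed by the prediction column selected via `prediction_key`. A minimal NumPy sketch of the `_my_metric_op` reduction, independent of TensorFlow (the function name and data values here are illustrative only):

```python
import numpy as np

def my_metric_op(predictions, labels):
    # Same reduction as _my_metric_op in the tests above:
    # sum of elementwise products of predictions and labels.
    predictions = np.asarray(predictions, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return float(np.sum(predictions * labels))

# Binary case: keep only the positive-class probability column,
# mirroring the strided_slice(predictions, [0, 1], [-1, 2]) in the test.
probs = np.array([[0.3, 0.7], [0.9, 0.1], [0.8, 0.2], [0.6, 0.4]])
labels = np.array([[1], [0], [0], [0]])
print(my_metric_op(probs[:, 1:2], labels))  # 0.7
```

Only the first row has a nonzero label, so only its positive-class probability contributes to the sum.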
JosmanPS/scikit-learn | examples/neighbors/plot_nearest_centroid.py | 264 | 1804 | """
===============================
Nearest Centroid Classification
===============================
Sample usage of Nearest Centroid classification.
It will plot the decision boundaries for each class.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.neighbors import NearestCentroid
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
for shrinkage in [None, 0.1]:
    # we create an instance of NearestCentroid and fit the data.
clf = NearestCentroid(shrink_threshold=shrinkage)
clf.fit(X, y)
y_pred = clf.predict(X)
print(shrinkage, np.mean(y == y_pred))
# Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.title("3-Class classification (shrink_threshold=%r)"
% shrinkage)
plt.axis('tight')
plt.show()
| bsd-3-clause |
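With `shrink_threshold=None`, the classifier in the example above reduces to: average each class's samples into a centroid, then label a query by its nearest centroid. A pure-NumPy sketch of that rule (the data values are illustrative):

```python
import numpy as np

X = np.array([[-1., -1.], [-2., -1.], [-3., -2.],
              [1., 1.], [2., 1.], [3., 2.]])
y = np.array([1, 1, 1, 2, 2, 2])

classes = np.unique(y)
# One centroid per class: the mean of that class's training samples.
centroids = np.array([X[y == c].mean(axis=0) for c in classes])

def predict(samples):
    # Euclidean distance from every sample to every centroid,
    # then pick the class whose centroid is closest.
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

print(predict(np.array([[-0.8, -1.]])))  # [1]
```

The query point sits much closer to the class-1 mean of roughly (-2, -1.3) than to the class-2 mean of roughly (2, 1.3), hence the label 1.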
maxlikely/scikit-learn | sklearn/neighbors/nearest_centroid.py | 4 | 5895 | # -*- coding: utf-8 -*-
"""
Nearest Centroid Classification
"""
# Author: Robert Layton <robertlayton@gmail.com>
# Olivier Grisel <olivier.grisel@ensta.org>
#
# License: BSD Style.
import numpy as np
from scipy import sparse as sp
from ..base import BaseEstimator, ClassifierMixin
from ..externals.six.moves import xrange
from ..metrics.pairwise import pairwise_distances
from ..utils.validation import check_arrays, atleast2d_or_csr
class NearestCentroid(BaseEstimator, ClassifierMixin):
"""Nearest centroid classifier.
Each class is represented by its centroid, with test samples classified to
the class with the nearest centroid.
Parameters
----------
metric: string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string or callable, it must be one of
the options allowed by metrics.pairwise.pairwise_distances for its
metric parameter.
shrink_threshold : float, optional (default = None)
Threshold for shrinking centroids to remove features.
Attributes
----------
`centroids_` : array-like, shape = [n_classes, n_features]
Centroid of each class
Examples
--------
>>> from sklearn.neighbors.nearest_centroid import NearestCentroid
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = NearestCentroid()
>>> clf.fit(X, y)
NearestCentroid(metric='euclidean', shrink_threshold=None)
>>> print(clf.predict([[-0.8, -1]]))
[1]
See also
--------
sklearn.neighbors.KNeighborsClassifier: nearest neighbors classifier
Notes
-----
When used for text classification with tf–idf vectors, this classifier is
also known as the Rocchio classifier.
References
----------
Tibshirani, R., Hastie, T., Narasimhan, B., & Chu, G. (2002). Diagnosis of
multiple cancer types by shrunken centroids of gene expression. Proceedings
of the National Academy of Sciences of the United States of America,
99(10), 6567-6572. The National Academy of Sciences.
"""
def __init__(self, metric='euclidean', shrink_threshold=None):
self.metric = metric
self.shrink_threshold = shrink_threshold
def fit(self, X, y):
"""
Fit the NearestCentroid model according to the given training data.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
            Training vector, where n_samples is the number of samples and
n_features is the number of features.
Note that centroid shrinking cannot be used with sparse matrices.
y : array, shape = [n_samples]
Target values (integers)
"""
X, y = check_arrays(X, y, sparse_format="csr")
if sp.issparse(X) and self.shrink_threshold:
raise ValueError("threshold shrinking not supported"
" for sparse input")
n_samples, n_features = X.shape
classes = np.unique(y)
self.classes_ = classes
n_classes = classes.size
if n_classes < 2:
raise ValueError('y has less than 2 classes')
        # Mask mapping each class to its members.
self.centroids_ = np.empty((n_classes, n_features), dtype=np.float64)
for i, cur_class in enumerate(classes):
center_mask = y == cur_class
if sp.issparse(X):
center_mask = np.where(center_mask)[0]
self.centroids_[i] = X[center_mask].mean(axis=0)
if self.shrink_threshold:
dataset_centroid_ = np.array(X.mean(axis=0))[0]
# Number of clusters in each class.
nk = np.array([np.sum(classes == cur_class)
for cur_class in classes])
# m parameter for determining deviation
m = np.sqrt((1. / nk) + (1. / n_samples))
# Calculate deviation using the standard deviation of centroids.
variance = np.array(np.power(X - self.centroids_[y], 2))
variance = variance.sum(axis=0)
s = np.sqrt(variance / (n_samples - n_classes))
s += np.median(s) # To deter outliers from affecting the results.
mm = m.reshape(len(m), 1) # Reshape to allow broadcasting.
ms = mm * s
deviation = ((self.centroids_ - dataset_centroid_) / ms)
# Soft thresholding: if the deviation crosses 0 during shrinking,
# it becomes zero.
signs = np.sign(deviation)
deviation = (np.abs(deviation) - self.shrink_threshold)
deviation[deviation < 0] = 0
deviation = np.multiply(deviation, signs)
# Now adjust the centroids using the deviation
msd = np.multiply(ms, deviation)
self.centroids_ = np.array([dataset_centroid_ + msd[i]
for i in xrange(n_classes)])
return self
def predict(self, X):
"""Perform classification on an array of test vectors X.
The predicted class C for each sample in X is returned.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
C : array, shape = [n_samples]
Notes
-----
If the metric constructor parameter is "precomputed", X is assumed to
be the distance matrix between the data to be predicted and
``self.centroids_``.
"""
X = atleast2d_or_csr(X)
if not hasattr(self, "centroids_"):
raise AttributeError("Model has not been trained yet.")
return self.classes_[pairwise_distances(
X, self.centroids_, metric=self.metric).argmin(axis=1)]
| bsd-3-clause |
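The soft-thresholding step inside `fit` above shrinks each centroid's normalized deviation toward zero and zeroes any component that crosses zero. That update can be sketched in isolation (the threshold and deviation values are illustrative):

```python
import numpy as np

def soft_threshold(deviation, shrink):
    # Keep the sign, reduce the magnitude by the threshold,
    # and clip at zero so components that cross zero vanish.
    signs = np.sign(deviation)
    shrunk = np.abs(deviation) - shrink
    shrunk[shrunk < 0] = 0
    return signs * shrunk

dev = np.array([0.5, -0.3, 0.08, -0.05])
print(soft_threshold(dev, 0.1))  # large components shrink, small ones vanish
```

This is what makes the shrunken-centroid method a feature selector: features whose centroid deviations are smaller than the threshold are removed from the classification rule entirely.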
kadrlica/obztak | obztak/scratch/dither.py | 1 | 7433 | import os
import numpy as np
import pylab
import matplotlib.path
from matplotlib.collections import PolyCollection
import obztak.utils.projector
import obztak.utils.fileio as fileio
import obztak.utils.constants
pylab.ion()
############################################################
params = {
#'backend': 'eps',
'axes.labelsize': 16,
#'text.fontsize': 12,
'xtick.labelsize': 12,
'ytick.labelsize': 12,
'xtick.major.size': 3, # major tick size in points
'xtick.minor.size': 1.5, # minor tick size in points
'xtick.major.size': 3, # major tick size in points
'xtick.minor.size': 1.5, # minor tick size in points
#'text.usetex': True,
#'figure.figsize': fig_size,
'font.family':'serif',
'font.serif':'Computer Modern Roman',
'font.size': 12
}
matplotlib.rcParams.update(params)
############################################################
def rotateFocalPlane(ccd_array, ra_center, dec_center, ra_field, dec_field):
proj_center = obztak.utils.projector.Projector(ra_center, dec_center)
proj_field = obztak.utils.projector.Projector(ra_field, dec_field)
ccd_array_new = []
for ii in range(0, len(ccd_array)):
ra, dec = proj_field.imageToSphere(np.transpose(ccd_array[ii])[0],
np.transpose(ccd_array[ii])[1])
x, y = proj_center.sphereToImage(ra, dec)
ccd_array_new.append(list(zip(x, y)))
return ccd_array_new
############################################################
def plotFocalPlane(ccd_array, ra_center, dec_center, ra_field, dec_field, ax):
ccd_array_new = rotateFocalPlane(ccd_array, ra_center, dec_center, ra_field, dec_field)
coll = PolyCollection(ccd_array_new, alpha=0.2, color='red', edgecolors='none')
ax.add_collection(coll)
############################################################
def applyDither(ra, dec, x, y):
proj = obztak.utils.projector.Projector(ra, dec)
ra_dither, dec_dither = proj.imageToSphere(x, y)
return ra_dither, dec_dither
############################################################
def makeDither():
X_CCD = 0.29878 # This is the FITS y-axis
Y_CCD = 0.14939 # This is the FITS x-axis
#ra_center, dec_center = 182., -88.0
ra_center, dec_center = 182., -68.0
ra_center, dec_center = 178., -80.0
#ra_center, dec_center = 351.6670, -72.0863
#ra_center, dec_center = 351.6670, -89.
pattern = 'alex'
if pattern == 'none':
dither_array = []
elif pattern == 'large':
dither_array = [[4 * X_CCD / 3., 4. * Y_CCD / 3.],
[8. * X_CCD / 3., -11 * Y_CCD / 3.]]
elif pattern == 'small':
dither_array = [[1 * X_CCD / 3., 1. * Y_CCD / 3.],
[2. * X_CCD / 3., -1 * Y_CCD / 3.]]
elif pattern == 'alex':
### ADW: The pattern suggested is actually in SMASH coordinates not celestial.
dither_array = [[0.75, 0.75],
[-0.75, 0.75]]
#dither_array = [[4 * X_CCD / 3., 4. * Y_CCD / 3.],
# [2. * X_CCD / 3., -4 * Y_CCD / 3.]]
#dither_array = [[4 * X_CCD / 3., 4. * Y_CCD / 3.]]
#dither_array = [[5 * X_CCD / 3., 5. * Y_CCD / 3.]]
#mode = 'single'
mode = 'fill'
if mode == 'single':
angsep_max = 0.
if mode == 'fill':
angsep_max = 3.
# This should use the environment variable MAGLITESDIR to define the path
datadir = fileio.get_datadir()
filename = os.path.join(datadir,'smash_fields_alltiles.txt')
data_alltiles = np.recfromtxt(filename, names=True)
filename = os.path.join(datadir,'../scratch/ccd_corners_xy_fill.dat')
data = eval(''.join(open(filename).readlines()))
ccd_array = []
for key in data.keys():
#ccd_array.append(matplotlib.path.Path(data[key]))
ccd_array.append(data[key])
"""
n = 400
x_mesh, y_mesh = np.meshgrid(np.linspace(-1.1, 1.1, n), np.linspace(-1.1, 1.1, n))
count = np.zeros([n, n])
for ii in range(0, len(ccd_array)):
count += ccd_array[ii].contains_points(zip(x_mesh.flatten(), y_mesh.flatten())).reshape([n, n])
pylab.figure()
pylab.pcolor(x_mesh, y_mesh, count)
"""
fig, ax = pylab.subplots(figsize=(8, 8))
# Make the collection and add it to the plot.
#coll = PolyCollection(ccd_array, alpha=0.3, color='red', edgecolors='none')
#ax.add_collection(coll)
#plotFocalPlane(ccd_array, ra_center, dec_center, ra_center, dec_center, ax)
#plotFocalPlane(ccd_array, ra_center, dec_center, ra_center, dec_center + 0.1, ax)
angsep = obztak.utils.projector.angsep(ra_center, dec_center, data_alltiles['RA'], data_alltiles['DEC'])
for ii in np.nonzero(angsep < (np.min(angsep) + 0.01 + angsep_max))[0]:
plotFocalPlane(ccd_array, ra_center, dec_center, data_alltiles['RA'][ii], data_alltiles['DEC'][ii], ax)
for x_dither, y_dither in dither_array:
ra_dither, dec_dither = applyDither(data_alltiles['RA'][ii], data_alltiles['DEC'][ii],
x_dither, y_dither)
plotFocalPlane(ccd_array, ra_center, dec_center, ra_dither, dec_dither, ax)
pylab.xlim(-1.5, 1.5)
pylab.ylim(-1.5, 1.5)
pylab.xlabel('x (deg)', labelpad=20)
pylab.ylabel('y (deg)')
pylab.title('(RA, Dec) = (%.3f, %.3f)'%(ra_center, dec_center))
#for ii in range(0, len(ccd_array)):
#pylab.savefig('dither_ra_%.2f_dec_%.2f_%s_%s.pdf'%(ra_center, dec_center, mode, pattern))
############################################################
def testDither(ra_center, dec_center, infile='target_fields.csv', save=False):
filename = os.path.join(fileio.get_datadir(),'../scratch/ccd_corners_xy_fill.dat')
data = eval(''.join(open(filename).readlines()))
ccd_array = []
for key in data.keys():
#ccd_array.append(matplotlib.path.Path(data[key]))
ccd_array.append(data[key])
data_targets = fileio.csv2rec(infile)
fig, ax = pylab.subplots(figsize=(8, 8))
angsep = obztak.utils.projector.angsep(ra_center, dec_center, data_targets['RA'], data_targets['DEC'])
cut = (angsep < 3.) & (data_targets['FILTER'] == obztak.utils.constants.BANDS[0]) & (data_targets['TILING'] <= 3)
print(np.sum(angsep < 3.))
print(np.sum(data_targets['FILTER'] == obztak.utils.constants.BANDS[0]))
print(np.sum(cut))
for ii in np.nonzero(cut)[0]:
plotFocalPlane(ccd_array, ra_center, dec_center, data_targets['RA'][ii], data_targets['DEC'][ii], ax)
pylab.xlim(-1.5, 1.5)
pylab.ylim(-1.5, 1.5)
pylab.xlabel('x (deg)', labelpad=20)
pylab.ylabel('y (deg)')
pylab.title('(RA, Dec) = (%.3f, %.3f)'%(ra_center, dec_center))
if save:
pattern = infile.split('target_fields_')[-1].split('.csv')[0]
pylab.savefig('dither_ra_%.2f_dec_%.2f_%s.pdf'%(ra_center, dec_center, pattern))
############################################################
if __name__ == '__main__':
#infile = 'target_fields_decam_dither_1.csv'
#infile = 'target_fields_decam_dither_2.csv'
infile = 'target_fields_smash_dither.csv'
#infile = 'target_fields_smash_rotate.csv'
save = True
testDither(100., -70., infile=infile, save=save) # On edge
testDither(125., -75., infile=infile, save=save)
testDither(200., -88., infile=infile, save=save)
############################################################
| mit |
neale/CS-program | 434-MachineLearning/final_project/linearClassifier/sklearn/metrics/cluster/__init__.py | 312 | 1322 | """
The :mod:`sklearn.metrics.cluster` submodule contains evaluation metrics for
cluster analysis results. There are two forms of evaluation:
- supervised, which uses ground truth class values for each sample.
- unsupervised, which does not and measures the 'quality' of the model itself.
"""
from .supervised import adjusted_mutual_info_score
from .supervised import normalized_mutual_info_score
from .supervised import adjusted_rand_score
from .supervised import completeness_score
from .supervised import contingency_matrix
from .supervised import expected_mutual_information
from .supervised import homogeneity_completeness_v_measure
from .supervised import homogeneity_score
from .supervised import mutual_info_score
from .supervised import v_measure_score
from .supervised import entropy
from .unsupervised import silhouette_samples
from .unsupervised import silhouette_score
from .bicluster import consensus_score
__all__ = ["adjusted_mutual_info_score", "normalized_mutual_info_score",
"adjusted_rand_score", "completeness_score", "contingency_matrix",
"expected_mutual_information", "homogeneity_completeness_v_measure",
"homogeneity_score", "mutual_info_score", "v_measure_score",
"entropy", "silhouette_samples", "silhouette_score",
"consensus_score"]
| unlicense |
smartscheduling/scikit-learn-categorical-tree | sklearn/mixture/tests/test_gmm.py | 1 | 15738 | import unittest
from nose.tools import assert_true
import numpy as np
from numpy.testing import (assert_array_equal, assert_array_almost_equal,
assert_raises)
from scipy import stats
from sklearn import mixture
from sklearn.datasets.samples_generator import make_spd_matrix
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_raise_message
rng = np.random.RandomState(0)
def test_sample_gaussian():
# Test sample generation from mixture.sample_gaussian where covariance
# is diagonal, spherical and full
n_features, n_samples = 2, 300
axis = 1
mu = rng.randint(10) * rng.rand(n_features)
cv = (rng.rand(n_features) + 1.0) ** 2
samples = mixture.sample_gaussian(
mu, cv, covariance_type='diag', n_samples=n_samples)
assert_true(np.allclose(samples.mean(axis), mu, atol=1.3))
assert_true(np.allclose(samples.var(axis), cv, atol=1.5))
# the same for spherical covariances
cv = (rng.rand() + 1.0) ** 2
samples = mixture.sample_gaussian(
mu, cv, covariance_type='spherical', n_samples=n_samples)
assert_true(np.allclose(samples.mean(axis), mu, atol=1.5))
assert_true(np.allclose(
samples.var(axis), np.repeat(cv, n_features), atol=1.5))
# and for full covariances
A = rng.randn(n_features, n_features)
cv = np.dot(A.T, A) + np.eye(n_features)
samples = mixture.sample_gaussian(
mu, cv, covariance_type='full', n_samples=n_samples)
assert_true(np.allclose(samples.mean(axis), mu, atol=1.3))
assert_true(np.allclose(np.cov(samples), cv, atol=2.5))
# Numerical stability check: in SciPy 0.12.0 at least, eigh may return
# tiny negative values in its second return value.
from sklearn.mixture import sample_gaussian
x = sample_gaussian([0, 0], [[4, 3], [1, .1]],
covariance_type='full', random_state=42)
print(x)
assert_true(np.isfinite(x).all())
def _naive_lmvnpdf_diag(X, mu, cv):
# slow and naive implementation of lmvnpdf
ref = np.empty((len(X), len(mu)))
stds = np.sqrt(cv)
for i, (m, std) in enumerate(zip(mu, stds)):
ref[:, i] = np.log(stats.norm.pdf(X, m, std)).sum(axis=1)
return ref
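The naive per-dimension loop above is equivalent to the closed-form log-density of a Gaussian with diagonal covariance. The following is a small self-contained sketch of that closed form, independent of scipy, verified against the known standard-normal constant:

```python
import numpy as np

def lmvnpdf_diag_closed_form(X, mu, cv):
    # log N(x; m, diag(v)) = sum_d [-0.5*log(2*pi*v_d) - (x_d - m_d)^2 / (2*v_d)]
    ref = np.empty((len(X), len(mu)))
    for i, (m, v) in enumerate(zip(mu, cv)):
        ref[:, i] = np.sum(-0.5 * np.log(2 * np.pi * v)
                           - (X - m) ** 2 / (2 * v), axis=1)
    return ref

# Standard normal at the origin in 1-D: log(1/sqrt(2*pi))
val = lmvnpdf_diag_closed_form(np.zeros((1, 1)), np.zeros((1, 1)), np.ones((1, 1)))
print(val[0, 0])  # ~ -0.9189385 = -0.5*log(2*pi)
```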
def test_lmvnpdf_diag():
# test a slow and naive implementation of lmvnpdf and
# compare it to the vectorized version (mixture.lmvnpdf) to test
# for correctness
n_features, n_components, n_samples = 2, 3, 10
mu = rng.randint(10) * rng.rand(n_components, n_features)
cv = (rng.rand(n_components, n_features) + 1.0) ** 2
X = rng.randint(10) * rng.rand(n_samples, n_features)
ref = _naive_lmvnpdf_diag(X, mu, cv)
lpr = mixture.log_multivariate_normal_density(X, mu, cv, 'diag')
assert_array_almost_equal(lpr, ref)
def test_lmvnpdf_spherical():
n_features, n_components, n_samples = 2, 3, 10
mu = rng.randint(10) * rng.rand(n_components, n_features)
spherecv = rng.rand(n_components, 1) ** 2 + 1
X = rng.randint(10) * rng.rand(n_samples, n_features)
cv = np.tile(spherecv, (n_features, 1))
reference = _naive_lmvnpdf_diag(X, mu, cv)
lpr = mixture.log_multivariate_normal_density(X, mu, spherecv,
'spherical')
assert_array_almost_equal(lpr, reference)
def test_lmvnpdf_full():
n_features, n_components, n_samples = 2, 3, 10
mu = rng.randint(10) * rng.rand(n_components, n_features)
cv = (rng.rand(n_components, n_features) + 1.0) ** 2
X = rng.randint(10) * rng.rand(n_samples, n_features)
fullcv = np.array([np.diag(x) for x in cv])
reference = _naive_lmvnpdf_diag(X, mu, cv)
lpr = mixture.log_multivariate_normal_density(X, mu, fullcv, 'full')
assert_array_almost_equal(lpr, reference)
def test_lvmpdf_full_cv_non_positive_definite():
n_features, n_components, n_samples = 2, 1, 10
rng = np.random.RandomState(0)
X = rng.randint(10) * rng.rand(n_samples, n_features)
mu = np.mean(X, 0)
cv = np.array([[[-1, 0], [0, 1]]])
expected_message = "'covars' must be symmetric, positive-definite"
assert_raise_message(ValueError, expected_message,
mixture.log_multivariate_normal_density,
X, mu, cv, 'full')
def test_GMM_attributes():
n_components, n_features = 10, 4
covariance_type = 'diag'
g = mixture.GMM(n_components, covariance_type, random_state=rng)
weights = rng.rand(n_components)
weights = weights / weights.sum()
means = rng.randint(-20, 20, (n_components, n_features))
assert_true(g.n_components == n_components)
assert_true(g.covariance_type == covariance_type)
g.weights_ = weights
assert_array_almost_equal(g.weights_, weights)
g.means_ = means
assert_array_almost_equal(g.means_, means)
covars = (0.1 + 2 * rng.rand(n_components, n_features)) ** 2
g.covars_ = covars
assert_array_almost_equal(g.covars_, covars)
assert_raises(ValueError, g._set_covars, [])
assert_raises(ValueError, g._set_covars,
np.zeros((n_components - 2, n_features)))
assert_raises(ValueError, mixture.GMM, n_components=20,
covariance_type='badcovariance_type')
class GMMTester():
do_test_eval = True
def _setUp(self):
self.n_components = 10
self.n_features = 4
self.weights = rng.rand(self.n_components)
self.weights = self.weights / self.weights.sum()
self.means = rng.randint(-20, 20, (self.n_components, self.n_features))
self.threshold = -0.5
self.I = np.eye(self.n_features)
self.covars = {
'spherical': (0.1 + 2 * rng.rand(self.n_components,
self.n_features)) ** 2,
'tied': (make_spd_matrix(self.n_features, random_state=0)
+ 5 * self.I),
'diag': (0.1 + 2 * rng.rand(self.n_components,
self.n_features)) ** 2,
'full': np.array([make_spd_matrix(self.n_features, random_state=0)
+ 5 * self.I for x in range(self.n_components)])}
def test_eval(self):
if not self.do_test_eval:
return # DPGMM does not support setting the means and
# covariances before fitting There is no way of fixing this
# due to the variational parameters being more expressive than
# covariance matrices
g = self.model(n_components=self.n_components,
covariance_type=self.covariance_type, random_state=rng)
# Make sure the means are far apart so responsibilities.argmax()
# picks the actual component used to generate the observations.
g.means_ = 20 * self.means
g.covars_ = self.covars[self.covariance_type]
g.weights_ = self.weights
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = rng.randn(n_samples, self.n_features) + g.means_[gaussidx]
ll, responsibilities = g.score_samples(X)
self.assertEqual(len(ll), n_samples)
self.assertEqual(responsibilities.shape,
(n_samples, self.n_components))
assert_array_almost_equal(responsibilities.sum(axis=1),
np.ones(n_samples))
assert_array_equal(responsibilities.argmax(axis=1), gaussidx)
def test_sample(self, n=100):
g = self.model(n_components=self.n_components,
covariance_type=self.covariance_type, random_state=rng)
# Make sure the means are far apart so responsibilities.argmax()
# picks the actual component used to generate the observations.
g.means_ = 20 * self.means
g.covars_ = np.maximum(self.covars[self.covariance_type], 0.1)
g.weights_ = self.weights
samples = g.sample(n)
self.assertEqual(samples.shape, (n, self.n_features))
def test_train(self, params='wmc'):
g = mixture.GMM(n_components=self.n_components,
covariance_type=self.covariance_type)
g.weights_ = self.weights
g.means_ = self.means
g.covars_ = 20 * self.covars[self.covariance_type]
# Create a training set by sampling from the predefined distribution.
X = g.sample(n_samples=100)
g = self.model(n_components=self.n_components,
covariance_type=self.covariance_type,
random_state=rng, min_covar=1e-1,
n_iter=1, init_params=params)
g.fit(X)
# Do one training iteration at a time so we can keep track of
# the log likelihood to make sure that it increases after each
# iteration.
trainll = []
for _ in range(5):
g.params = params
g.init_params = ''
g.fit(X)
trainll.append(self.score(g, X))
g.n_iter = 10
g.init_params = ''
g.params = params
g.fit(X) # finish fitting
# Note that the log likelihood will sometimes decrease by a
# very small amount after it has more or less converged due to
# the addition of min_covar to the covariance (to prevent
# underflow). This is why the threshold is set to -0.5
# instead of 0.
delta_min = np.diff(trainll).min()
self.assertTrue(
delta_min > self.threshold,
"The min nll increase is %f which is lower than the admissible"
" threshold of %f, for model %s. The likelihoods are %s."
% (delta_min, self.threshold, self.covariance_type, trainll))
def test_train_degenerate(self, params='wmc'):
# Train on degenerate data with 0 in some dimensions
# Create a training set by sampling from the predefined distribution.
X = rng.randn(100, self.n_features)
X.T[1:] = 0
g = self.model(n_components=2, covariance_type=self.covariance_type,
random_state=rng, min_covar=1e-3, n_iter=5,
init_params=params)
g.fit(X)
trainll = g.score(X)
self.assertTrue(np.sum(np.abs(trainll / 100 / X.shape[1])) < 5)
def test_train_1d(self, params='wmc'):
# Train on 1-D data
# Create a training set by sampling from the predefined distribution.
X = rng.randn(100, 1)
#X.T[1:] = 0
g = self.model(n_components=2, covariance_type=self.covariance_type,
random_state=rng, min_covar=1e-7, n_iter=5,
init_params=params)
g.fit(X)
trainll = g.score(X)
if isinstance(g, mixture.DPGMM):
self.assertTrue(np.sum(np.abs(trainll / 100)) < 5)
else:
self.assertTrue(np.sum(np.abs(trainll / 100)) < 2)
def score(self, g, X):
return g.score(X).sum()
class TestGMMWithSphericalCovars(unittest.TestCase, GMMTester):
covariance_type = 'spherical'
model = mixture.GMM
setUp = GMMTester._setUp
class TestGMMWithDiagonalCovars(unittest.TestCase, GMMTester):
covariance_type = 'diag'
model = mixture.GMM
setUp = GMMTester._setUp
class TestGMMWithTiedCovars(unittest.TestCase, GMMTester):
covariance_type = 'tied'
model = mixture.GMM
setUp = GMMTester._setUp
class TestGMMWithFullCovars(unittest.TestCase, GMMTester):
covariance_type = 'full'
model = mixture.GMM
setUp = GMMTester._setUp
def test_multiple_init():
# Test that multiple inits do not perform much worse than a single one
X = rng.randn(30, 5)
X[:10] += 2
g = mixture.GMM(n_components=2, covariance_type='spherical',
random_state=rng, min_covar=1e-7, n_iter=5)
train1 = g.fit(X).score(X).sum()
g.n_init = 5
train2 = g.fit(X).score(X).sum()
assert_true(train2 >= train1 - 1.e-2)
def test_n_parameters():
# Test that the right number of parameters is estimated
n_samples, n_dim, n_components = 7, 5, 2
X = rng.randn(n_samples, n_dim)
n_params = {'spherical': 13, 'diag': 21, 'tied': 26, 'full': 41}
for cv_type in ['full', 'tied', 'diag', 'spherical']:
g = mixture.GMM(n_components=n_components, covariance_type=cv_type,
random_state=rng, min_covar=1e-7, n_iter=1)
g.fit(X)
assert_true(g._n_parameters() == n_params[cv_type])
def test_1d_1component():
# Test all of the covariance_types return the same BIC score for
# 1-dimensional, 1 component fits.
n_samples, n_dim, n_components = 100, 1, 1
X = rng.randn(n_samples, n_dim)
g_full = mixture.GMM(n_components=n_components, covariance_type='full',
random_state=rng, min_covar=1e-7, n_iter=1)
g_full.fit(X)
g_full_bic = g_full.bic(X)
for cv_type in ['tied', 'diag', 'spherical']:
g = mixture.GMM(n_components=n_components, covariance_type=cv_type,
random_state=rng, min_covar=1e-7, n_iter=1)
g.fit(X)
assert_array_almost_equal(g.bic(X), g_full_bic)
def test_aic():
# Test the aic and bic criteria
n_samples, n_dim, n_components = 50, 3, 2
X = rng.randn(n_samples, n_dim)
SGH = 0.5 * (X.var() + np.log(2 * np.pi)) # standard gaussian entropy
for cv_type in ['full', 'tied', 'diag', 'spherical']:
g = mixture.GMM(n_components=n_components, covariance_type=cv_type,
random_state=rng, min_covar=1e-7)
g.fit(X)
aic = 2 * n_samples * SGH * n_dim + 2 * g._n_parameters()
bic = (2 * n_samples * SGH * n_dim +
np.log(n_samples) * g._n_parameters())
bound = n_dim * 3. / np.sqrt(n_samples)
assert_true(np.abs(g.aic(X) - aic) / n_samples < bound)
assert_true(np.abs(g.bic(X) - bic) / n_samples < bound)
def check_positive_definite_covars(covariance_type):
r"""Test that covariance matrices do not become non positive definite
Due to the accumulation of round-off errors, the computation of the
covariance matrices during the learning phase could lead to non-positive
definite covariance matrices. Namely the use of the formula:
.. math:: C = (\sum_i w_i x_i x_i^T) - \mu \mu^T
instead of:
.. math:: C = \sum_i w_i (x_i - \mu)(x_i - \mu)^T
while mathematically equivalent, was observed to raise a ``LinAlgError`` exception
when computing a ``GMM`` with full covariance matrices and fixed mean.
This function ensures that some later optimization will not introduce the
problem again.
"""
rng = np.random.RandomState(1)
# we build a dataset with two 2d components. The components are unbalanced
# (respective weights 0.9 and 0.1)
X = rng.randn(100, 2)
X[-10:] += (3, 3) # Shift the 10 last points
gmm = mixture.GMM(2, params="wc", covariance_type=covariance_type,
min_covar=1e-3)
# This is a non-regression test for issue #2640. The following call used
# to trigger:
# numpy.linalg.linalg.LinAlgError: 2-th leading minor not positive definite
gmm.fit(X)
if covariance_type == "diag" or covariance_type == "spherical":
assert_greater(gmm.covars_.min(), 0)
else:
if covariance_type == "tied":
covs = [gmm.covars_]
else:
covs = gmm.covars_
for c in covs:
assert_greater(np.linalg.det(c), 0)
def test_positive_definite_covars():
# Check positive definiteness for all covariance types
for covariance_type in ["full", "tied", "diag", "spherical"]:
yield check_positive_definite_covars, covariance_type
if __name__ == '__main__':
import nose
nose.runmodule()
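The two covariance formulas contrasted in the ``check_positive_definite_covars`` docstring are algebraically identical whenever the weights sum to one; only their floating-point behavior differs. A quick numeric sanity check of the identity (illustrative sketch, not part of the test suite):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
w = rng.rand(200)
w /= w.sum()              # weights sum to one
mu = w @ X                # weighted mean

# Cancellation-prone form: C = sum_i w_i x_i x_i^T - mu mu^T
C1 = (X.T * w) @ X - np.outer(mu, mu)
# Numerically safer form:  C = sum_i w_i (x_i - mu)(x_i - mu)^T
D = X - mu
C2 = (D.T * w) @ D

print(np.allclose(C1, C2))  # True
```

The centered form stays positive semi-definite under round-off, which is the property the regression test guards.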
| bsd-3-clause |
TuKo/brainiak | tests/fcma/test_mvpa_voxel_selection.py | 5 | 1917 | # Copyright 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from brainiak.fcma.mvpa_voxelselector import MVPAVoxelSelector
from brainiak.searchlight.searchlight import Searchlight
from sklearn import svm
import numpy as np
from mpi4py import MPI
from numpy.random import RandomState
# specify the random state to fix the random numbers
prng = RandomState(1234567890)
def test_mvpa_voxel_selection():
data = prng.rand(5, 5, 5, 8).astype(np.float32)
# all MPI processes read the mask; the mask file is small
mask = np.ones([5, 5, 5], dtype=bool)
mask[0, 0, :] = False
labels = [0, 1, 0, 1, 0, 1, 0, 1]
# 2 subjects, 4 epochs per subject
sl = Searchlight(sl_rad=1)
mvs = MVPAVoxelSelector(data, mask, labels, 2, sl)
# for cross validation, use SVM with precomputed kernel
clf = svm.SVC(kernel='rbf', C=10)
result_volume, results = mvs.run(clf)
if MPI.COMM_WORLD.Get_rank() == 0:
output = []
for tuple in results:
if tuple[1] > 0:
output.append(int(8*tuple[1]))
expected_output = [6, 6, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4,
4, 4, 4, 3, 3, 3, 3, 3, 2, 2, 2, 1]
assert np.allclose(output, expected_output, atol=1), \
'voxel selection via SVM does not provide correct results'
if __name__ == '__main__':
test_mvpa_voxel_selection()
| apache-2.0 |
ashwinvis/sthlm-bostad-vis | sssb.py | 1 | 4096 | import os
from io import StringIO
from lxml import html, etree
import pandas as pd
from itertools import chain, islice
import matplotlib.pyplot as plt
from datetime import date
from base import ParserBase
try:
from requests_selenium import Render
except ImportError:
from requests_webkit import Render
def ichunked(seq, chunksize):
"""Yields items from an iterator in iterable chunks.
Parameters
----------
seq : iterable
chunksize : int
Returns
-------
iterator of iterators
    Each yielded chunk is an iterator over up to `chunksize` items of `seq`.
"""
it = iter(seq)
for i in it:
yield chain([i], islice(it, chunksize - 1))
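A short usage example of ``ichunked`` (a sketch; note that the yielded chunks share the underlying iterator, so each chunk must be consumed before advancing to the next):

```python
from itertools import chain, islice

def ichunked(seq, chunksize):
    """Yield items from `seq` in iterator chunks of up to `chunksize`."""
    it = iter(seq)
    for i in it:
        yield chain([i], islice(it, chunksize - 1))

chunks = [list(c) for c in ichunked(range(7), 3)]
print(chunks)  # [[0, 1, 2], [3, 4, 5], [6]]
```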
class SSSBParser(ParserBase):
_tag = "sssb"
member_since = date(2001, 10, 31)
def _url(self, area="", apartment_type="Apartment", max_rent="", nb_per_page=50):
apartment_codes = {"Room": "BOASR", "Studio": "BOAS1", "Apartment": "BOASL"}
url = (
r"https://www.sssb.se/en/find-apartment/apply-for-apartment/available-apartments/"
r"available-apartments-list/?omraden={}&objektTyper={}&hyraMax={}"
r"&actionId=&paginationantal={}"
).format(area, apartment_codes[apartment_type], max_rent, nb_per_page)
print("Loading", url)
return url
def _get_html(self, *args):
if self._renew_cache():
url = self._url(*args)
with Render() as render:
    self.cache_html = render.get(url)
def get(self, using="html", *args):
self._get_html(*args)
page = self.cache_html
if using == "html":
tree = html.fromstring(page)
return tree
elif using == "etree":
xml_parser = etree.HTMLParser()
tree = etree.parse(StringIO(page), xml_parser)
return tree
def make_df(self, tree):
heading = tree.xpath('//div[@class="RowHeader"]/span/text()')
rows = tree.xpath('//div[@class="Spreadsheet"]/a/span/text()')
nb_cols = len(heading)
def remove_all(seq, match, lstrip=""):
seq[:] = (value.lstrip(lstrip) for value in seq if value not in match)
remove_all(rows, (" ", "\xa0", " \xa0"))
remove_all(heading, "\xa0", " \n")
nb_cols -= 1
table = []
for row in ichunked(rows, nb_cols):
table.append(tuple(row))
self.df = pd.DataFrame(table, columns=heading)
self.df.index = self.df["Address"]
def make_df_hist(self, store_deltas=True):
col1 = self.cache_timestamp.strftime("%c")
col2 = "No. of Applications"
credit_days = self.df["Credit days"].str.split()
df_tmp = credit_days.apply(pd.Series)
df_tmp.columns = [col1, col2]
# Format and change dtype
series1 = df_tmp[col1].apply(pd.to_numeric)
series2 = df_tmp[col2].str.lstrip("(").str.rstrip("st)").apply(pd.to_numeric)
if self.df_hist is None:
self.df_hist = pd.DataFrame({col2: series2, "Start": series1})
else:
self.df_hist = self.df_hist.T.dropna().T
if store_deltas:
self.df_hist[col2] = series2
delta = series1 - self.df_hist.T.sum() + series2
self.df_hist[col1] = delta
if all(delta == 0):
return False
else:
self.df_hist[col1] = series1
self.df_hist[col2] = series2
return True
def plot_hist(self, save=True, **kwargs):
plt.style.use("ggplot")
self.df_hist.iloc[:, 1:].plot(kind="bar", stacked=True, **kwargs)
plt.subplots_adjust(left=0.075, bottom=0.4, right=0.83, top=0.95)
plt.legend(loc="best", fontsize=8, bbox_to_anchor=(1.01, 1.0))
plt.ylabel("Max. credit days")
plt.autoscale()
my_credit_days = (date.today() - self.member_since).days
plt.axhline(y=my_credit_days)
if save:
figname = os.path.join(self.path, self._tag + ".png")
print("Saving", figname)
plt.savefig(figname)
else:
plt.show()
| gpl-3.0 |
rhennigan/code | python/forwardEuler.py | 1 | 1208 | # QUIZ
#
# Modify the for loop below to
# set the values of the t, x, and v
# arrays to implement the Forward
# Euler Method for num_steps many steps.
# To see plots on your own computer, uncomment the two lines below...
import numpy
import matplotlib.pyplot
# from udacityplots import * # ...and comment out this line
def forward_euler():
h = 0.1 # s
g = 9.81 # m / s2
num_steps = 50
t = numpy.zeros(num_steps + 1)
x = numpy.zeros(num_steps + 1)
v = numpy.zeros(num_steps + 1)
for step in range(num_steps):
t[step + 1] = t[step] + h
x[step + 1] = x[step] + h * v[step]
v[step + 1] = v[step] - h * g
return t, x, v
t, x, v = forward_euler()
#@show_plot Remove this line when running locally
def plot_me():
axes_height = matplotlib.pyplot.subplot(211)
matplotlib.pyplot.plot(t, x)
axes_velocity = matplotlib.pyplot.subplot(212)
matplotlib.pyplot.plot(t, v)
axes_height.set_ylabel('Height in m')
axes_velocity.set_ylabel('Velocity in m/s')
axes_velocity.set_xlabel('Time in s')
# Uncomment the line below when running locally.
matplotlib.pyplot.show()
plot_me()
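As a cross-check on the scheme above: for constant acceleration the Euler velocity update is exact, while the position accumulates a global O(h) error. A self-contained comparison against the analytic free-fall solution x(t) = -g*t**2/2, v(t) = -g*t (zero initial conditions assumed):

```python
g, h, n = 9.81, 0.1, 50
x, v = 0.0, 0.0
for _ in range(n):
    x += h * v        # x' = v
    v += -h * g       # v' = -g (constant acceleration)

t = n * h
x_exact = -0.5 * g * t ** 2
v_exact = -g * t

print(abs(v - v_exact))                  # ~0: Euler is exact in v here
print((x - x_exact) - 0.5 * g * h * t)   # ~0: global error equals g*h*t/2
```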
| gpl-2.0 |
zmlabe/IceVarFigs | Scripts/SeaIce/plot_sit_PIOMAS_monthly_v2.py | 1 | 8177 | """
Author : Zachary M. Labe
Date : 23 August 2016
"""
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import numpy as np
import datetime
import calendar as cal
import matplotlib.colors as c
import cmocean
### Define constants
### Directory and time
directoryfigure = './Figures/'
directorydata = './Data/'
now = datetime.datetime.now()
month = now.month
years = np.arange(1979,2019,1)
months = np.arange(1,13,1)
def readPiomas(directory,vari,years,thresh):
"""
Reads binary PIOMAS data
"""
### Retrieve Grid
grid = np.genfromtxt(directory + 'grid.txt')
grid = np.reshape(grid,(grid.size))
### Define Lat/Lon
lon = grid[:grid.size//2]
lons = np.reshape(lon,(120,360))
lat = grid[grid.size//2:]
lats = np.reshape(lat,(120,360))
### Call variables from PIOMAS
if vari == 'thick':
files = 'heff'
directory = directory + 'Thickness/'
elif vari == 'sic':
files = 'area'
directory = directory + 'SeaIceConcentration/'
elif vari == 'snow':
files = 'snow'
directory = directory + 'SnowCover/'
elif vari == 'oflux':
files = 'oflux'
directory = directory + 'OceanFlux/'
### Read data from binary into numpy arrays
var = np.empty((len(years),12,120,360))
for i in range(len(years)):
data = np.fromfile(directory + files + '_%s.H' % (years[i]),
dtype = 'float32')
### Reshape into [year,month,lat,lon]
months = int(data.shape[0]/(120*360))
if months != 12:
lastyearq = np.reshape(data,(months,120,360))
emptymo = np.empty((12-months,120,360))
emptymo[:,:,:] = np.nan
lastyear = np.append(lastyearq,emptymo,axis=0)
var[i,:,:,:] = lastyear
else:
dataq = np.reshape(data,(12,120,360))
var[i,:,:,:] = dataq
### Mask out threshold values
var[np.where(var <= thresh)] = np.nan
print('Completed: Read "%s" data!' % (vari))
return lats,lons,var
lats,lons,sit = readPiomas(directorydata,'thick',years,0.1)
### Read SIV data
years2,aug = np.genfromtxt(directorydata + 'monthly_piomas.txt',
unpack=True,delimiter='',usecols=[0,2])
climyr = np.where((years2 >= 1981) & (years2 <= 2010))[0]
clim = np.nanmean(aug[climyr])
### Select month
sit = sit[:,1,:,:]
### Adjust axes in time series plots
def adjust_spines(ax, spines):
for loc, spine in ax.spines.items():
if loc in spines:
spine.set_position(('outward', 5))
else:
spine.set_color('none')
if 'left' in spines:
ax.yaxis.set_ticks_position('left')
else:
ax.yaxis.set_ticks([])
if 'bottom' in spines:
ax.xaxis.set_ticks_position('bottom')
else:
ax.xaxis.set_ticks([])
plt.rc('text',usetex=True)
plt.rc('font',**{'family':'sans-serif','sans-serif':['Avant Garde']})
plt.rc('savefig',facecolor='black')
plt.rc('axes',edgecolor='k')
plt.rc('xtick',color='white')
plt.rc('ytick',color='white')
plt.rc('axes',labelcolor='white')
plt.rc('axes',facecolor='black')
## Plot global temperature anomalies
style = 'polar'
### Define figure
if style == 'ortho':
m = Basemap(projection='ortho',lon_0=-90,
lat_0=70,resolution='l',round=True)
elif style == 'polar':
m = Basemap(projection='npstere',boundinglat=67,lon_0=270,resolution='l',round =True)
for i in range(aug.shape[0]):
fig = plt.figure()
ax = plt.subplot(111)
for txt in fig.texts:
txt.set_visible(False)
var = sit[i,:,:]
m.drawmapboundary(fill_color='k')
m.drawlsmask(land_color='k',ocean_color='k')
m.drawcoastlines(color='aqua',linewidth=0.7)
# Make the plot continuous
barlim = np.arange(0,6,1)
cmap = cmocean.cm.thermal
cs = m.contourf(lons,lats,var,
np.arange(0,5.1,0.25),extend='max',
alpha=1,latlon=True,cmap=cmap)
if i >= 39:
ccc = 'slateblue'
else:
ccc = 'w'
t1 = plt.annotate(r'\textbf{%s}' % years[i],textcoords='axes fraction',
xy=(0,0), xytext=(-0.54,0.815),
fontsize=50,color=ccc)
t2 = plt.annotate(r'\textbf{GRAPHIC}: Zachary Labe (@ZLabe)',
textcoords='axes fraction',
xy=(0,0), xytext=(1.02,-0.0),
fontsize=4.5,color='darkgrey',rotation=90,va='bottom')
t3 = plt.annotate(r'\textbf{SOURCE}: http://psc.apl.washington.edu/zhang/IDAO/data.html',
textcoords='axes fraction',
xy=(0,0), xytext=(1.05,-0.0),
fontsize=4.5,color='darkgrey',rotation=90,va='bottom')
t4 = plt.annotate(r'\textbf{DATA}: PIOMAS v2.1 (Zhang and Rothrock, 2003) (\textbf{February})',
textcoords='axes fraction',
xy=(0,0), xytext=(1.08,-0.0),
fontsize=4.5,color='darkgrey',rotation=90,va='bottom')
cbar = m.colorbar(cs,drawedges=True,location='bottom',pad = 0.14,size=0.07)
ticks = np.arange(0,8,1)
cbar.set_ticks(barlim)
cbar.set_ticklabels(list(map(str,barlim)))
cbar.set_label(r'\textbf{SEA ICE THICKNESS [m]}',fontsize=10,
color='darkgrey')
cbar.ax.tick_params(axis='x', size=.0001)
cbar.ax.tick_params(labelsize=7)
###########################################################################
###########################################################################
### Create subplot
a = plt.axes([.2, .225, .08, .4], facecolor='k')
N = 1
ind = np.linspace(N,0.2,1)
width = .33
meansiv = np.nanmean(aug)
rects = plt.bar(ind,[aug[i]],width,zorder=2)
# plt.plot(([meansiv]*2),zorder=1)
rects[0].set_color('aqua')
if i == 39:
rects[0].set_color('slateblue')
adjust_spines(a, ['left', 'bottom'])
a.spines['top'].set_color('none')
a.spines['right'].set_color('none')
a.spines['left'].set_color('none')
a.spines['bottom'].set_color('none')
plt.setp(a.get_xticklines()[0:-1],visible=False)
a.tick_params(labelbottom='off')
a.tick_params(labelleft='off')
a.tick_params('both',length=0,width=0,which='major')
plt.yticks(np.arange(0,31,5),map(str,np.arange(0,31,5)))
plt.xlabel(r'\textbf{SEA ICE VOLUME [km$^{3}$]}',
fontsize=10,color='darkgrey',labelpad=1)
for rectq in rects:
height = rectq.get_height()
cc = 'darkgrey'
if i == 39:
cc ='slateblue'
plt.text(rectq.get_x() + rectq.get_width()/2.0,
height+1, r'\textbf{%s}' % format(int(height*1000),",d"),
ha='center', va='bottom',color=cc,fontsize=20)
fig.subplots_adjust(right=1.1)
###########################################################################
###########################################################################
if i < 10:
plt.savefig(directory + 'icy_0%s.png' % i,dpi=300)
else:
plt.savefig(directory + 'icy_%s.png' % i,dpi=300)
if i == 39:
plt.savefig(directory + 'icy_39.png',dpi=300)
plt.savefig(directory + 'icy_40.png',dpi=300)
plt.savefig(directory + 'icy_41.png',dpi=300)
plt.savefig(directory + 'icy_42.png',dpi=300)
plt.savefig(directory + 'icy_43.png',dpi=300)
plt.savefig(directory + 'icy_44.png',dpi=300)
plt.savefig(directory + 'icy_45.png',dpi=300)
plt.savefig(directory + 'icy_46.png',dpi=300)
plt.savefig(directory + 'icy_47.png',dpi=300)
plt.savefig(directory + 'icy_48.png',dpi=300)
plt.savefig(directory + 'icy_49.png',dpi=300)
plt.savefig(directory + 'icy_50.png',dpi=300)
plt.savefig(directory + 'icy_51.png',dpi=300)
t1.remove()
t2.remove()
t3.remove()
t4.remove()
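A side note on the frame-numbering branch above: the `if i < 10` / `else` pair that chooses between `icy_0%s.png` and `icy_%s.png` is equivalent to a zero-padded format spec. A minimal sketch (the `frame_name` helper is hypothetical, not part of the script):

```python
# Hedged sketch: the zero-padding if/else for frame filenames can be
# collapsed into a single zero-padded format. 'i' is the loop index
# assumed from the animation loop above.
def frame_name(i):
    # '%02d' pads single digits with a leading zero: 3 -> '03', 39 -> '39'
    return 'icy_%02d.png' % i

print(frame_name(3))   # icy_03.png
print(frame_name(39))  # icy_39.png
```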
| mit |
sagarjauhari/BCIpy | cleanup/debug.py | 1 | 1128 | # Copyright 2013, 2014 Justis Grant Peters and Sagar Jauhari
# This file is part of BCIpy.
#
# BCIpy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# BCIpy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with BCIpy. If not, see <http://www.gnu.org/licenses/>.
# -*- coding: utf-8 -*-
"""
Created on Mon Nov 25 02:02:10 2013
@author: sagar
"""
import sys
import re
def printModuleNames():
pat = re.compile("matplotlib|numpy|scipy|pylab|pandas")  # re.match anchors at the start, so plain prefixes suffice; a trailing '*' would only repeat the last character
for name, module in sorted(sys.modules.items()):
if hasattr(module, '__version__') and pat.match(name):
print name, module.__version__
if __name__=='__main__':
printModuleNames()
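The filter in `printModuleNames` relies on `re.match` anchoring at the beginning of the string, so a plain alternation of module-name prefixes is enough; `*` in a regex is not a shell-style wildcard, it only repeats the preceding character. A small self-contained sketch of the intended prefix match:

```python
import re

# re.match only succeeds when the pattern matches at position 0,
# so alternation of bare prefixes already behaves like a prefix filter.
pat = re.compile("matplotlib|numpy|scipy|pylab|pandas")

print(bool(pat.match("numpy.core.multiarray")))  # True
print(bool(pat.match("collections")))            # False
```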
| gpl-3.0 |
xavierwu/scikit-learn | sklearn/neighbors/tests/test_approximate.py | 71 | 18815 | """
Testing for the approximate neighbor search using
Locality Sensitive Hashing Forest module
(sklearn.neighbors.LSHForest).
"""
# Author: Maheshakya Wijewardena, Joel Nothman
import numpy as np
import scipy.sparse as sp
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_array_less
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_not_equal
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import ignore_warnings
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.neighbors import LSHForest
from sklearn.neighbors import NearestNeighbors
def test_neighbors_accuracy_with_n_candidates():
# Checks whether accuracy increases as `n_candidates` increases.
n_candidates_values = np.array([.1, 50, 500])
n_samples = 100
n_features = 10
n_iter = 10
n_points = 5
rng = np.random.RandomState(42)
accuracies = np.zeros(n_candidates_values.shape[0], dtype=float)
X = rng.rand(n_samples, n_features)
for i, n_candidates in enumerate(n_candidates_values):
lshf = LSHForest(n_candidates=n_candidates)
lshf.fit(X)
for j in range(n_iter):
query = X[rng.randint(0, n_samples)].reshape(1, -1)
neighbors = lshf.kneighbors(query, n_neighbors=n_points,
return_distance=False)
distances = pairwise_distances(query, X, metric='cosine')
ranks = np.argsort(distances)[0, :n_points]
intersection = np.intersect1d(ranks, neighbors).shape[0]
ratio = intersection / float(n_points)
accuracies[i] = accuracies[i] + ratio
accuracies[i] = accuracies[i] / float(n_iter)
# Sorted accuracies should be equal to original accuracies
assert_true(np.all(np.diff(accuracies) >= 0),
msg="Accuracies are not non-decreasing.")
# Highest accuracy should be strictly greater than the lowest
assert_true(np.ptp(accuracies) > 0,
msg="Highest accuracy is not strictly greater than lowest.")
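The accuracy bookkeeping used in these tests (exact cosine ranks via `pairwise_distances` plus `argsort`, then `intersect1d` against the approximate result) can be sketched in pure NumPy. A hedged, self-contained version; `accuracy_ratio` is an illustrative helper, not part of scikit-learn:

```python
import numpy as np

# Fraction of the exact top-k cosine neighbours that a candidate set
# (e.g. an approximate LSH result) recovers.
def accuracy_ratio(X, query, candidates, k):
    # cosine distance = 1 - cosine similarity, on L2-normalised rows
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    qn = query / np.linalg.norm(query)
    distances = 1.0 - Xn.dot(qn)
    exact_ranks = np.argsort(distances)[:k]
    intersection = np.intersect1d(exact_ranks, candidates).shape[0]
    return intersection / float(k)

rng = np.random.RandomState(0)
X = rng.rand(20, 4)
q = X[3]
print(accuracy_ratio(X, q, candidates=np.array([3, 5, 7]), k=3))
```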
def test_neighbors_accuracy_with_n_estimators():
# Checks whether accuracy increases as `n_estimators` increases.
n_estimators = np.array([1, 10, 100])
n_samples = 100
n_features = 10
n_iter = 10
n_points = 5
rng = np.random.RandomState(42)
accuracies = np.zeros(n_estimators.shape[0], dtype=float)
X = rng.rand(n_samples, n_features)
for i, t in enumerate(n_estimators):
lshf = LSHForest(n_candidates=500, n_estimators=t)
lshf.fit(X)
for j in range(n_iter):
query = X[rng.randint(0, n_samples)].reshape(1, -1)
neighbors = lshf.kneighbors(query, n_neighbors=n_points,
return_distance=False)
distances = pairwise_distances(query, X, metric='cosine')
ranks = np.argsort(distances)[0, :n_points]
intersection = np.intersect1d(ranks, neighbors).shape[0]
ratio = intersection / float(n_points)
accuracies[i] = accuracies[i] + ratio
accuracies[i] = accuracies[i] / float(n_iter)
# Sorted accuracies should be equal to original accuracies
assert_true(np.all(np.diff(accuracies) >= 0),
msg="Accuracies are not non-decreasing.")
# Highest accuracy should be strictly greater than the lowest
assert_true(np.ptp(accuracies) > 0,
msg="Highest accuracy is not strictly greater than lowest.")
@ignore_warnings
def test_kneighbors():
# Checks whether desired number of neighbors are returned.
# It is guaranteed to return the requested number of neighbors
# if `min_hash_match` is set to 0. Returned distances should be
# in ascending order.
n_samples = 12
n_features = 2
n_iter = 10
rng = np.random.RandomState(42)
X = rng.rand(n_samples, n_features)
lshf = LSHForest(min_hash_match=0)
# Test unfitted estimator
assert_raises(ValueError, lshf.kneighbors, X[0])
lshf.fit(X)
for i in range(n_iter):
n_neighbors = rng.randint(0, n_samples)
query = X[rng.randint(0, n_samples)].reshape(1, -1)
neighbors = lshf.kneighbors(query, n_neighbors=n_neighbors,
return_distance=False)
# Desired number of neighbors should be returned.
assert_equal(neighbors.shape[1], n_neighbors)
# Multiple points
n_queries = 5
queries = X[rng.randint(0, n_samples, n_queries)]
distances, neighbors = lshf.kneighbors(queries,
n_neighbors=1,
return_distance=True)
assert_equal(neighbors.shape[0], n_queries)
assert_equal(distances.shape[0], n_queries)
# Test only neighbors
neighbors = lshf.kneighbors(queries, n_neighbors=1,
return_distance=False)
assert_equal(neighbors.shape[0], n_queries)
# Test random point(not in the data set)
query = rng.randn(n_features).reshape(1, -1)
lshf.kneighbors(query, n_neighbors=1,
return_distance=False)
# Test n_neighbors at initialization
neighbors = lshf.kneighbors(query, return_distance=False)
assert_equal(neighbors.shape[1], 5)
# Test `neighbors` has an integer dtype
assert_true(neighbors.dtype.kind == 'i',
msg="neighbors are not in integer dtype.")
def test_radius_neighbors():
# Checks whether returned distances are less than `radius`
# At least one point should be returned when the `radius` is set
# to mean distance from the considering point to other points in
# the database.
# Moreover, this test compares the radius neighbors of LSHForest
# with the `sklearn.neighbors.NearestNeighbors`.
n_samples = 12
n_features = 2
n_iter = 10
rng = np.random.RandomState(42)
X = rng.rand(n_samples, n_features)
lshf = LSHForest()
# Test unfitted estimator
assert_raises(ValueError, lshf.radius_neighbors, X[0])
lshf.fit(X)
for i in range(n_iter):
# Select a random point in the dataset as the query
query = X[rng.randint(0, n_samples)].reshape(1, -1)
# At least one neighbor should be returned when the radius is the
# mean distance from the query to the points of the dataset.
mean_dist = np.mean(pairwise_distances(query, X, metric='cosine'))
neighbors = lshf.radius_neighbors(query, radius=mean_dist,
return_distance=False)
assert_equal(neighbors.shape, (1,))
assert_equal(neighbors.dtype, object)
assert_greater(neighbors[0].shape[0], 0)
# All distances to points in the results of the radius query should
# be less than mean_dist
distances, neighbors = lshf.radius_neighbors(query,
radius=mean_dist,
return_distance=True)
assert_array_less(distances[0], mean_dist)
# Multiple points
n_queries = 5
queries = X[rng.randint(0, n_samples, n_queries)]
distances, neighbors = lshf.radius_neighbors(queries,
return_distance=True)
# dists and inds should not be 1D arrays or arrays of variable lengths
# hence the use of the object dtype.
assert_equal(distances.shape, (n_queries,))
assert_equal(distances.dtype, object)
assert_equal(neighbors.shape, (n_queries,))
assert_equal(neighbors.dtype, object)
# Compare with exact neighbor search
query = X[rng.randint(0, n_samples)].reshape(1, -1)
mean_dist = np.mean(pairwise_distances(query, X, metric='cosine'))
nbrs = NearestNeighbors(algorithm='brute', metric='cosine').fit(X)
distances_exact, _ = nbrs.radius_neighbors(query, radius=mean_dist)
distances_approx, _ = lshf.radius_neighbors(query, radius=mean_dist)
# Radius-based queries do not sort the result points and the order
# depends on the method, the random_state and the dataset order. Therefore
# we need to sort the results ourselves before performing any comparison.
sorted_dists_exact = np.sort(distances_exact[0])
sorted_dists_approx = np.sort(distances_approx[0])
# Distances to exact neighbors are less than or equal to approximate
# counterparts as the approximate radius query might have missed some
# closer neighbors.
assert_true(np.all(np.less_equal(sorted_dists_exact,
sorted_dists_approx)))
def test_radius_neighbors_boundary_handling():
X = [[0.999, 0.001], [0.5, 0.5], [0, 1.], [-1., 0.001]]
n_points = len(X)
# Build an exact nearest neighbors model as reference model to ensure
# consistency between exact and approximate methods
nnbrs = NearestNeighbors(algorithm='brute', metric='cosine').fit(X)
# Build a LSHForest model with hyperparameter values that always guarantee
# exact results on this toy dataset.
lsfh = LSHForest(min_hash_match=0, n_candidates=n_points).fit(X)
# define a query aligned with the first axis
query = [[1., 0.]]
# Compute the exact cosine distances of the query to the four points of
# the dataset
dists = pairwise_distances(query, X, metric='cosine').ravel()
# The first point is almost aligned with the query (very small angle),
# the cosine distance should therefore be almost null:
assert_almost_equal(dists[0], 0, decimal=5)
# The second point forms an angle of 45 degrees to the query vector
assert_almost_equal(dists[1], 1 - np.cos(np.pi / 4))
# The third point is orthogonal from the query vector hence at a distance
# exactly one:
assert_almost_equal(dists[2], 1)
# The last point is almost colinear but with opposite sign to the query
# therefore it has a cosine 'distance' very close to the maximum possible
# value of 2.
assert_almost_equal(dists[3], 2, decimal=5)
# If we query with a radius of one, all the samples except the last sample
# should be included in the results. This means that the third sample
# is lying on the boundary of the radius query:
exact_dists, exact_idx = nnbrs.radius_neighbors(query, radius=1)
approx_dists, approx_idx = lsfh.radius_neighbors(query, radius=1)
assert_array_equal(np.sort(exact_idx[0]), [0, 1, 2])
assert_array_equal(np.sort(approx_idx[0]), [0, 1, 2])
assert_array_almost_equal(np.sort(exact_dists[0]), dists[:-1])
assert_array_almost_equal(np.sort(approx_dists[0]), dists[:-1])
# If we perform the same query with a slightly lower radius, the third
# point of the dataset that lay on the boundary of the previous query
# is now rejected:
eps = np.finfo(np.float64).eps
exact_dists, exact_idx = nnbrs.radius_neighbors(query, radius=1 - eps)
approx_dists, approx_idx = lsfh.radius_neighbors(query, radius=1 - eps)
assert_array_equal(np.sort(exact_idx[0]), [0, 1])
assert_array_equal(np.sort(approx_idx[0]), [0, 1])
assert_array_almost_equal(np.sort(exact_dists[0]), dists[:-2])
assert_array_almost_equal(np.sort(approx_dists[0]), dists[:-2])
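The four cosine distances asserted in the boundary-handling test above can be verified by hand without scikit-learn, since dist(u, v) = 1 - cos(angle(u, v)) on normalised vectors. A self-contained check:

```python
import numpy as np

# Same toy dataset and query as in test_radius_neighbors_boundary_handling.
X = np.array([[0.999, 0.001], [0.5, 0.5], [0.0, 1.0], [-1.0, 0.001]])
query = np.array([1.0, 0.0])

Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
dists = 1.0 - Xn.dot(query)  # query is already unit length

print(np.round(dists, 5))
# dists[0] ~ 0 (almost aligned), dists[1] = 1 - cos(45 deg),
# dists[2] = 1 (orthogonal), dists[3] ~ 2 (almost opposite)
```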
def test_distances():
# Checks whether returned neighbors are from closest to farthest.
n_samples = 12
n_features = 2
n_iter = 10
rng = np.random.RandomState(42)
X = rng.rand(n_samples, n_features)
lshf = LSHForest()
lshf.fit(X)
for i in range(n_iter):
n_neighbors = rng.randint(0, n_samples)
query = X[rng.randint(0, n_samples)].reshape(1, -1)
distances, neighbors = lshf.kneighbors(query,
n_neighbors=n_neighbors,
return_distance=True)
# Returned neighbors should be from closest to farthest, that is
# increasing distance values.
assert_true(np.all(np.diff(distances[0]) >= 0))
# Note: the radius_neighbors method does not guarantee the order of
# the results.
def test_fit():
# Checks whether `fit` method sets all attribute values correctly.
n_samples = 12
n_features = 2
n_estimators = 5
rng = np.random.RandomState(42)
X = rng.rand(n_samples, n_features)
lshf = LSHForest(n_estimators=n_estimators)
lshf.fit(X)
# _input_array = X
assert_array_equal(X, lshf._fit_X)
# A hash function g(p) for each tree
assert_equal(n_estimators, len(lshf.hash_functions_))
# Hash length = 32
assert_equal(32, lshf.hash_functions_[0].components_.shape[0])
# Number of trees_ in the forest
assert_equal(n_estimators, len(lshf.trees_))
# Each tree has entries for every data point
assert_equal(n_samples, len(lshf.trees_[0]))
# Original indices after sorting the hashes
assert_equal(n_estimators, len(lshf.original_indices_))
# Each set of original indices in a tree has entries for every data point
assert_equal(n_samples, len(lshf.original_indices_[0]))
def test_partial_fit():
# Checks whether inserting array is consistent with fitted data.
# `partial_fit` method should set all attribute values correctly.
n_samples = 12
n_samples_partial_fit = 3
n_features = 2
rng = np.random.RandomState(42)
X = rng.rand(n_samples, n_features)
X_partial_fit = rng.rand(n_samples_partial_fit, n_features)
lshf = LSHForest()
# Test unfitted estimator
lshf.partial_fit(X)
assert_array_equal(X, lshf._fit_X)
lshf.fit(X)
# Insert wrong dimension
assert_raises(ValueError, lshf.partial_fit,
np.random.randn(n_samples_partial_fit, n_features - 1))
lshf.partial_fit(X_partial_fit)
# size of _input_array = samples + 1 after insertion
assert_equal(lshf._fit_X.shape[0],
n_samples + n_samples_partial_fit)
# size of original_indices_[1] = samples + 1
assert_equal(len(lshf.original_indices_[0]),
n_samples + n_samples_partial_fit)
# size of trees_[1] = samples + 1
assert_equal(len(lshf.trees_[1]),
n_samples + n_samples_partial_fit)
def test_hash_functions():
# Checks randomness of hash functions.
# Variance and mean of each hash function (projection vector)
# should be different from flattened array of hash functions.
# If hash functions are not randomly built (seeded with
# same value), variances and means of all functions are equal.
n_samples = 12
n_features = 2
n_estimators = 5
rng = np.random.RandomState(42)
X = rng.rand(n_samples, n_features)
lshf = LSHForest(n_estimators=n_estimators,
random_state=rng.randint(0, np.iinfo(np.int32).max))
lshf.fit(X)
hash_functions = []
for i in range(n_estimators):
hash_functions.append(lshf.hash_functions_[i].components_)
for i in range(n_estimators):
assert_not_equal(np.var(hash_functions),
np.var(lshf.hash_functions_[i].components_))
for i in range(n_estimators):
assert_not_equal(np.mean(hash_functions),
np.mean(lshf.hash_functions_[i].components_))
def test_candidates():
# Checks whether candidates are sufficient.
# This should handle the cases when number of candidates is 0.
# User should be warned when number of candidates is less than
# requested number of neighbors.
X_train = np.array([[5, 5, 2], [21, 5, 5], [1, 1, 1], [8, 9, 1],
[6, 10, 2]], dtype=np.float32)
X_test = np.array([7, 10, 3], dtype=np.float32).reshape(1, -1)
# For zero candidates
lshf = LSHForest(min_hash_match=32)
lshf.fit(X_train)
message = ("Number of candidates is not sufficient to retrieve"
" %i neighbors with"
" min_hash_match = %i. Candidates are filled up"
" uniformly from unselected"
" indices." % (3, 32))
assert_warns_message(UserWarning, message, lshf.kneighbors,
X_test, n_neighbors=3)
distances, neighbors = lshf.kneighbors(X_test, n_neighbors=3)
assert_equal(distances.shape[1], 3)
# For candidates less than n_neighbors
lshf = LSHForest(min_hash_match=31)
lshf.fit(X_train)
message = ("Number of candidates is not sufficient to retrieve"
" %i neighbors with"
" min_hash_match = %i. Candidates are filled up"
" uniformly from unselected"
" indices." % (5, 31))
assert_warns_message(UserWarning, message, lshf.kneighbors,
X_test, n_neighbors=5)
distances, neighbors = lshf.kneighbors(X_test, n_neighbors=5)
assert_equal(distances.shape[1], 5)
def test_graphs():
# Smoke tests for graph methods.
n_samples_sizes = [5, 10, 20]
n_features = 3
rng = np.random.RandomState(42)
for n_samples in n_samples_sizes:
X = rng.rand(n_samples, n_features)
lshf = LSHForest(min_hash_match=0)
lshf.fit(X)
kneighbors_graph = lshf.kneighbors_graph(X)
radius_neighbors_graph = lshf.radius_neighbors_graph(X)
assert_equal(kneighbors_graph.shape[0], n_samples)
assert_equal(kneighbors_graph.shape[1], n_samples)
assert_equal(radius_neighbors_graph.shape[0], n_samples)
assert_equal(radius_neighbors_graph.shape[1], n_samples)
def test_sparse_input():
# note: Fixed random state in sp.rand is not supported in older scipy.
# The test should succeed regardless.
X1 = sp.rand(50, 100)
X2 = sp.rand(10, 100)
forest_sparse = LSHForest(radius=1, random_state=0).fit(X1)
forest_dense = LSHForest(radius=1, random_state=0).fit(X1.A)
d_sparse, i_sparse = forest_sparse.kneighbors(X2, return_distance=True)
d_dense, i_dense = forest_dense.kneighbors(X2.A, return_distance=True)
assert_almost_equal(d_sparse, d_dense)
assert_almost_equal(i_sparse, i_dense)
d_sparse, i_sparse = forest_sparse.radius_neighbors(X2,
return_distance=True)
d_dense, i_dense = forest_dense.radius_neighbors(X2.A,
return_distance=True)
assert_equal(d_sparse.shape, d_dense.shape)
for a, b in zip(d_sparse, d_dense):
assert_almost_equal(a, b)
for a, b in zip(i_sparse, i_dense):
assert_almost_equal(a, b)
| bsd-3-clause |
clemkoa/scikit-learn | sklearn/exceptions.py | 50 | 5276 | """
The :mod:`sklearn.exceptions` module includes all custom warnings and error
classes used across scikit-learn.
"""
__all__ = ['NotFittedError',
'ChangedBehaviorWarning',
'ConvergenceWarning',
'DataConversionWarning',
'DataDimensionalityWarning',
'EfficiencyWarning',
'FitFailedWarning',
'NonBLASDotWarning',
'SkipTestWarning',
'UndefinedMetricWarning']
class NotFittedError(ValueError, AttributeError):
"""Exception class to raise if estimator is used before fitting.
This class inherits from both ValueError and AttributeError to help with
exception handling and backward compatibility.
Examples
--------
>>> from sklearn.svm import LinearSVC
>>> from sklearn.exceptions import NotFittedError
>>> try:
... LinearSVC().predict([[1, 2], [2, 3], [3, 4]])
... except NotFittedError as e:
... print(repr(e))
... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
NotFittedError('This LinearSVC instance is not fitted yet',)
.. versionchanged:: 0.18
Moved from sklearn.utils.validation.
"""
class ChangedBehaviorWarning(UserWarning):
"""Warning class used to notify the user of any change in the behavior.
.. versionchanged:: 0.18
Moved from sklearn.base.
"""
class ConvergenceWarning(UserWarning):
"""Custom warning to capture convergence problems
.. versionchanged:: 0.18
Moved from sklearn.utils.
"""
class DataConversionWarning(UserWarning):
"""Warning used to notify implicit data conversions happening in the code.
This warning occurs when some input data needs to be converted or
interpreted in a way that may not match the user's expectations.
For example, this warning may occur when the user
- passes an integer array to a function which expects float input and
will convert the input
- requests a non-copying operation, but a copy is required to meet the
implementation's data-type expectations;
- passes an input whose shape can be interpreted ambiguously.
.. versionchanged:: 0.18
Moved from sklearn.utils.validation.
"""
class DataDimensionalityWarning(UserWarning):
"""Custom warning to notify potential issues with data dimensionality.
For example, in random projection, this warning is raised when the
number of components, which quantifies the dimensionality of the target
projection space, is higher than the number of features, which quantifies
the dimensionality of the original source space, to imply that the
dimensionality of the problem will not be reduced.
.. versionchanged:: 0.18
Moved from sklearn.utils.
"""
class EfficiencyWarning(UserWarning):
"""Warning used to notify the user of inefficient computation.
This warning notifies the user that the efficiency may not be optimal due
to some reason which may be included as a part of the warning message.
This may be subclassed into a more specific Warning class.
.. versionadded:: 0.18
"""
class FitFailedWarning(RuntimeWarning):
"""Warning class used if there is an error while fitting the estimator.
This Warning is used in meta estimators GridSearchCV and RandomizedSearchCV
and the cross-validation helper function cross_val_score to warn when there
is an error while fitting the estimator.
Examples
--------
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import LinearSVC
>>> from sklearn.exceptions import FitFailedWarning
>>> import warnings
>>> warnings.simplefilter('always', FitFailedWarning)
>>> gs = GridSearchCV(LinearSVC(), {'C': [-1, -2]}, error_score=0)
>>> X, y = [[1, 2], [3, 4], [5, 6], [7, 8], [8, 9]], [0, 0, 0, 1, 1]
>>> with warnings.catch_warnings(record=True) as w:
... try:
... gs.fit(X, y) # This will raise a ValueError since C is < 0
... except ValueError:
... pass
... print(repr(w[-1].message))
... # doctest: +NORMALIZE_WHITESPACE
FitFailedWarning("Classifier fit failed. The score on this train-test
partition for these parameters will be set to 0.000000. Details:
\\nValueError('Penalty term must be positive; got (C=-2)',)",)
.. versionchanged:: 0.18
Moved from sklearn.cross_validation.
"""
class NonBLASDotWarning(EfficiencyWarning):
"""Warning used when the dot operation does not use BLAS.
This warning is used to notify the user that BLAS was not used for dot
operation and hence the efficiency may be affected.
.. versionchanged:: 0.18
Moved from sklearn.utils.validation, extends EfficiencyWarning.
"""
class SkipTestWarning(UserWarning):
"""Warning class used to notify the user of a test that was skipped.
For example, one of the estimator checks requires a pandas import.
If the pandas package cannot be imported, the test will be skipped rather
than register as a failure.
"""
class UndefinedMetricWarning(UserWarning):
"""Warning used when the metric is invalid
.. versionchanged:: 0.18
Moved from sklearn.base.
"""
| bsd-3-clause |
pnisarg/ABSA | src/acd_acs_rule.py | 1 | 10304 | import pandas as pd
from sklearn.model_selection import train_test_split
import codecs
from collections import OrderedDict
import pickle
import json,ast
import sys
from sklearn.metrics import f1_score
def loadCategoryData(categoryDataPath):
"""
Module to load the aspect category dataset
Args:
categoryDataPath: aspect category data path
Returns:
train: training data frame
test: testing data frame
"""
categoryDF=pd.read_csv(categoryDataPath,delimiter='#',encoding = 'utf-8')
del categoryDF['aspectTermPolarity']
domain=[]
for id in categoryDF.index:
domain.append(categoryDF.iloc[id,0][:3])
categoryDF['domain'] = domain
train, test = train_test_split(categoryDF, test_size = 0.2)
return train,test
def loadPOSTaggedReviewData(POSDataPath):
"""
Module to load POS tagged review data
Args:
POSDataPath: POS tagged data path
Returns:
reviewList: list of pos tagged reviews
"""
reviewList=[]
posReview = codecs.open(POSDataPath,encoding='utf-8')
reviewList=[x.strip() for x in posReview.readlines()]
return reviewList
def getAspectDefiningTerms(reviewList,train):
"""
Module to extract aspect term defining words
Args:
reviewList: list of pos tagged reviews
train: training data frame
Returns:
trainAdjectives = training data frame with aspect defining terms for each review
"""
adjectives=OrderedDict()
for item in reviewList:
idAndTagged = item.split()
id = idAndTagged[0]
taggedSentence = idAndTagged[1:]
domain = id.split("/")[0]
definingTerms = [s for s in taggedSentence if "JJ" in s]
adjTerms = []
for items in definingTerms:
word=items.split("/")[0]
word = word.encode('utf-8')
adjTerms.append(word)
adjectives[domain]=adjTerms
adjectiveDF = pd.DataFrame.from_dict(adjectives,orient='index')
adjectiveDF = adjectiveDF.reset_index()
adjectiveDF = adjectiveDF.rename(columns={"index":"reviewID"})
domain=[]
for id in adjectiveDF.index:
domain.append(adjectiveDF.iloc[id,0][:3])
adjectiveDF['domain'] = domain
trainAdjectives = train.merge(adjectiveDF,how='inner',left_on='domainID',right_on='reviewID')
trainAdjectives = trainAdjectives.drop(['domain_x','reviewID','domain_y'],axis=1)
return trainAdjectives
def createTrainingLexicon(trainAdjectives,WordSynsetDictPath,SynsetWordsPath):
"""
Module to create a lexicon of aspect term defining words
Args:
trainAdjectives: training data frame with aspect defining terms for each review
WordSynsetDictPath: word synset data path
SynsetWordsPath: synset data path
Returns:
trainLexicon: Lexicon of defining terms under each category
"""
trainLexicon={}
for id in trainAdjectives.index:
category = trainAdjectives.iloc[id,3]
adjectives = trainAdjectives.iloc[id][4:]
adjectives = adjectives.dropna()
for items in adjectives:
adjList=[]
if category in trainLexicon.keys():
trainLexicon[category].append(items)
else:
adjList.append(items)
trainLexicon[category] = adjList
word2Synset = pickle.load(open(WordSynsetDictPath))
synonyms = pickle.load(open(SynsetWordsPath))
for key,item in trainLexicon.iteritems():
for words in item:
if isinstance(words,str):
word = words.decode('utf-8','ignore')
synList = getSynonyms(word,word2Synset,synonyms)
if len(synList)>0:
for k,value in synList[0].iteritems():
for syns in value:
trainLexicon[key].append(syns)
return trainLexicon
def getSynonyms(word,word2Synset,synonyms):
"""
Module to extract synonyms of a word using Hindi wordnet
Args:
word: the word of which synonyms has to be extracted
word2Synset: dictionary of synset of the word
synonyms: dictionary of synonyms of the word
Returns:
synList: list of synonyms of the word
"""
synList=[]
if word2Synset.has_key(word):
synsets = word2Synset[word]
for pos in synsets.keys():
for synset in synsets[pos]:
if synonyms.has_key(synset):
synDict = synonyms[synset]
synList.append(synDict)
return synList
def loadAspectTermData(aspectTermDataPath):
"""
Module to load aspect term data obtained from Aspect Term Extraction Module
Args:
aspectTermDataPath: aspect term data path from Aspect Term Extraction Module
Returns:
termData: list of aspect term data from Aspect Term Extraction Module
"""
with open(aspectTermDataPath) as termFile:
termData = json.load(termFile)
termData = ast.literal_eval(json.dumps(termData, ensure_ascii=False, encoding='utf8'))
return termData
def getCategoryPrediction(termData,trainLexicon,test):
"""
Module to get category predictions
Args:
termData: list of aspect term data from Aspect Term Extraction Module
trainLexicon: Lexicon of defining terms under each category
test: testing data frame
Returns:
finalDF: data frame with predicted and true category labels
"""
categoryDict=OrderedDict()
for key,items in termData.iteritems():
for k,v in items.iteritems():
if len(v)>0:
for items in v:
for lexKey,lexItems in trainLexicon.iteritems():
if items.decode('utf-8','ignore') in lexItems:
categoryDict[key] = lexKey
predictedCategoryDF = pd.DataFrame(categoryDict.items(), columns=['reviewID', 'predictedCategory'])
finalDF = test.merge(predictedCategoryDF,left_on="domainID",right_on="reviewID")
finalDF = finalDF.dropna()
finalDF = finalDF.reset_index(drop=True)  # reset_index returns a new frame; drop takes a boolean, not the string 'True'
return finalDF
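A hedged note on the `reset_index` call above: pandas' `reset_index` returns a new DataFrame unless `inplace=True`, so the result must be assigned back, and `drop` expects a boolean (the string `'True'` is merely truthy by accident). A minimal illustration:

```python
import pandas as pd

# Toy frame with a non-default index, standing in for finalDF.
df = pd.DataFrame({'a': [1, 2]}, index=[5, 9])

df.reset_index(drop=True)        # no effect: the returned frame is discarded
df = df.reset_index(drop=True)   # correct: reassign the result
print(list(df.index))            # [0, 1]
```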
def getFScores(y_true,y_pred):
"""
Module to find the f_score of predicted results
Args:
y_true: true labels
y_pred: predicted labels
Returns:
None
"""
print f1_score(y_true, y_pred, average='micro')
def loadAspectTermSentimentData(termSentimentDataPath):
"""
Module to load aspect term sentiment data from Aspect Term Sentiment Extraction Module
Args:
termSentimentDataPath: aspect term sentiment data path
Returns:
termSentiData: dictionary of aspect terms and its sentiment
"""
with open(termSentimentDataPath) as termSentiFile:
termSentiData = json.load(termSentiFile)
termSentiData = ast.literal_eval(json.dumps(termSentiData, ensure_ascii=False, encoding='utf8'))
return termSentiData
def getCategorySentiments(termSentiData,trainLexicon,finalDF):
"""
Module to extract category-wise sentiment scores and
generate final dataframe with predicted and true sentiment
Args:
termSentiData: dictionary of aspect terms and its sentiment
trainLexicon: Lexicon of defining terms under each category
finalDF: data frame with predicted and true category labels
Returns:
finaDF: data frame with predicted and true category sentiment labels
"""
categorySentiScore={}
for key,values in termSentiData.iteritems():
if len(values)>0:
for k,v in values.iteritems():
for entKey,entVal in trainLexicon.iteritems():
if k in entVal:
if entKey in categorySentiScore:
categorySentiScore[entKey] += v
else:
categorySentiScore[entKey] = v
predictedCategory = finalDF['predictedCategory']
predictedCategorySentiment=[]
for category in predictedCategory:
if category in categorySentiScore.keys():
if categorySentiScore[category] > 0:
predictedCategorySentiment.append('pos')
elif categorySentiScore[category] == 0:
predictedCategorySentiment.append('neu')
elif categorySentiScore[category] < 0:
predictedCategorySentiment.append('neg')
else:
predictedCategorySentiment.append('neu')
finalDF['predictedCategorySentiment'] = predictedCategorySentiment
return finalDF
def main(categoryDataPath,POSDataPath,aspectTermDataPath,termSentimentDataPath,WordSynsetDictPath,SynsetWordsPath):
"""
Module to perform Aspect Category Detection based on
Aspect Terms and Aspect Defining Terms extracted from Aspect Term Extraction module
using Rule based and CRF techniques
Args:
categoryDataPath: aspect category data path
POSDataPath: POS tagged data path
aspectTermDataPath: aspect term data path from Aspect Term Extraction Module
termSentimentDataPath: aspect term sentiment data path
WordSynsetDictPath: word synset data path
SynsetWordsPath: synset data path
Returns:
None
"""
train,test = loadCategoryData(categoryDataPath)
reviewList = loadPOSTaggedReviewData(POSDataPath)
trainAdjectives = getAspectDefiningTerms(reviewList,train)
trainLexicon = createTrainingLexicon(trainAdjectives,WordSynsetDictPath,SynsetWordsPath)
termData = loadAspectTermData(aspectTermDataPath)
finalDF = getCategoryPrediction(termData,trainLexicon,test)
getFScores(finalDF['aspectCategory'], finalDF['predictedCategory'])
termSentiData = loadAspectTermSentimentData(termSentimentDataPath)
finalDF = getCategorySentiments(termSentiData,trainLexicon,finalDF)
getFScores(finalDF['categoryPolarity'], finalDF['predictedCategorySentiment'])
main(sys.argv[1],sys.argv[2],sys.argv[3],sys.argv[4],sys.argv[5],sys.argv[6]) | mit |
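The category-sentiment aggregation in `getCategorySentiments` above can be sketched in isolation. The dictionary shapes here (`termSentiData` mapping review ids to `{term: score}`, `trainLexicon` mapping categories to term sets) are assumptions inferred from the loops, not confirmed by the source:

```python
def aggregate_category_sentiment(term_senti_data, train_lexicon):
    """Sum per-term sentiment scores into per-category totals.

    Shapes are assumed: term_senti_data is {review_id: {term: score}},
    train_lexicon is {category: set_of_terms}.
    """
    category_senti_score = {}
    for values in term_senti_data.values():
        for term, score in values.items():
            for category, terms in train_lexicon.items():
                if term in terms:
                    category_senti_score[category] = (
                        category_senti_score.get(category, 0) + score)
    return category_senti_score


def polarity(score):
    # Same three-way split used to build predictedCategorySentiment.
    return 'pos' if score > 0 else ('neg' if score < 0 else 'neu')
```

Note that `.values()`/`.items()` replace the Py2-only `iteritems` calls used in the original.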
peterfpeterson/mantid | qt/python/mantidqt/project/plotssaver.py | 3 | 14097 | # Mantid Repository : https://github.com/mantidproject/mantid
#
# Copyright © 2018 ISIS Rutherford Appleton Laboratory UKRI,
# NScD Oak Ridge National Laboratory, European Spallation Source,
# Institut Laue - Langevin & CSNS, Institute of High Energy Physics, CAS
# SPDX - License - Identifier: GPL - 3.0 +
# This file is part of the mantidqt package
#
from copy import deepcopy
import matplotlib.axis
from matplotlib import ticker
from matplotlib.image import AxesImage
from mantid import logger
from mantid.plots.legend import LegendProperties
from mantid.plots.utility import MantidAxType
from matplotlib.colors import to_hex, Normalize
class PlotsSaver(object):
def __init__(self):
self.figure_creation_args = {}
def save_plots(self, plot_dict, is_project_recovery=False):
        # If the argument is None, return an empty list
if plot_dict is None:
return []
plot_list = []
for index in plot_dict:
try:
plot_list.append(self.get_dict_from_fig(plot_dict[index].canvas.figure))
except BaseException as e:
                # Catch all errors here so this can fail silently-ish; if this is
                # happening on all plots, make sure you have built your project.
if isinstance(e, KeyboardInterrupt):
raise KeyboardInterrupt
error_string = "Plot: " + str(index) + " was not saved. Error: " + str(e)
if not is_project_recovery:
logger.warning(error_string)
else:
logger.debug(error_string)
return plot_list
@staticmethod
def _convert_normalise_obj_to_dict(norm):
norm_dict = {'type': type(norm).__name__, 'clip': norm.clip, 'vmin': norm.vmin, 'vmax': norm.vmax}
return norm_dict
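The serialisation above records only four attributes, so the conversion is easy to check with a stand-in object. `FakeNorm` is a hypothetical minimal substitute for matplotlib's `Normalize`, used here only to keep the sketch dependency-free:

```python
class FakeNorm:
    """Minimal stand-in for matplotlib.colors.Normalize (illustration only)."""
    def __init__(self, vmin=None, vmax=None, clip=False):
        self.vmin, self.vmax, self.clip = vmin, vmax, clip


def convert_normalise_obj_to_dict(norm):
    # Same logic as PlotsSaver._convert_normalise_obj_to_dict above:
    # capture the class name plus the three scaling attributes so the
    # result is JSON-serialisable.
    return {'type': type(norm).__name__, 'clip': norm.clip,
            'vmin': norm.vmin, 'vmax': norm.vmax}
```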
@staticmethod
def _add_normalisation_kwargs(cargs_list, axes_list):
for ax_cargs, ax_dict in zip(cargs_list[0], axes_list):
is_norm = ax_dict.pop("_is_norm")
ax_cargs['normalize_by_bin_width'] = is_norm
def get_dict_from_fig(self, fig):
axes_list = []
create_list = []
for ax in fig.axes:
try:
creation_args = deepcopy(ax.creation_args)
# convert the normalise object (if present) into a dict so that it can be json serialised
for args_dict in creation_args:
if 'axis' in args_dict and type(args_dict['axis']) is MantidAxType:
args_dict['axis'] = args_dict['axis'].value
if 'norm' in args_dict.keys() and isinstance(args_dict['norm'], Normalize):
norm_dict = self._convert_normalise_obj_to_dict(args_dict['norm'])
args_dict['norm'] = norm_dict
create_list.append(creation_args)
self.figure_creation_args = creation_args
except AttributeError:
                logger.debug("Figure contained an axis without creation_args - common with a colorfill plot")
continue
axes_list.append(self.get_dict_for_axes(ax))
if create_list and axes_list:
self._add_normalisation_kwargs(create_list, axes_list)
fig_dict = {"creationArguments": create_list,
"axes": axes_list,
"label": fig._label,
"properties": self.get_dict_from_fig_properties(fig)}
return fig_dict
@staticmethod
def get_dict_for_axes_colorbar(ax):
cb_dict = {}
# If an image is present (from imshow)
if len(ax.images) > 0 and isinstance(ax.images[0], AxesImage):
image = ax.images[0]
# If an image is present from pcolor/pcolormesh
elif len(ax.collections) > 0 and isinstance(ax.collections[0], AxesImage):
image = ax.collections[0]
else:
cb_dict["exists"] = False
return cb_dict
cb_dict["exists"] = True
cb_dict["max"] = image.norm.vmax
cb_dict["min"] = image.norm.vmin
cb_dict["interpolation"] = image._interpolation
cb_dict["cmap"] = image.cmap.name
cb_dict["label"] = image._label
return cb_dict
def get_dict_for_axes(self, ax):
ax_dict = {"properties": self.get_dict_from_axes_properties(ax),
"title": ax.get_title(),
"xAxisTitle": ax.get_xlabel(),
"yAxisTitle": ax.get_ylabel(),
"colorbar": self.get_dict_for_axes_colorbar(ax)}
        # Get lines from the axes and store their data
lines_list = []
for index, line in enumerate(ax.lines):
lines_list.append(self.get_dict_from_line(line, index))
ax_dict["lines"] = lines_list
texts_list = []
for text in ax.texts:
texts_list.append(self.get_dict_from_text(text))
ax_dict["texts"] = texts_list
# Potentially need to handle artists that are Text
artist_text_dict = {}
for artist in ax.artists:
if isinstance(artist, matplotlib.text.Text):
artist_text_dict = self.get_dict_from_text(artist)
ax_dict["textFromArtists"] = artist_text_dict
legend_dict = {}
legend = ax.get_legend()
if legend is not None:
legend_dict["exists"] = True
legend_dict.update(LegendProperties.from_legend(legend))
else:
legend_dict["exists"] = False
ax_dict["legend"] = legend_dict
# add value to determine if ax has been normalised
ws_artists = [art for art in ax.tracked_workspaces.values()]
is_norm = all(art[0].is_normalized for art in ws_artists)
ax_dict["_is_norm"] = is_norm
return ax_dict
def get_dict_from_axes_properties(self, ax):
return {"bounds": ax.get_position().bounds,
"dynamic": ax.get_navigate(),
"axisOn": ax.axison,
"frameOn": ax.get_frame_on(),
"visible": ax.get_visible(),
"xAxisProperties": self.get_dict_from_axis_properties(ax.xaxis),
"yAxisProperties": self.get_dict_from_axis_properties(ax.yaxis),
"xAxisScale": ax.xaxis.get_scale(),
"xLim": ax.get_xlim(),
"xAutoScale": ax.get_autoscalex_on(),
"yAxisScale": ax.yaxis.get_scale(),
"yLim": ax.get_ylim(),
"yAutoScale": ax.get_autoscaley_on(),
"showMinorGrid": hasattr(ax, 'show_minor_gridlines') and ax.show_minor_gridlines,
"tickParams": self.get_dict_from_tick_properties(ax),
"spineWidths": self.get_dict_from_spine_widths(ax)}
def get_dict_from_axis_properties(self, ax):
prop_dict = {"majorTickLocator": type(ax.get_major_locator()).__name__,
"minorTickLocator": type(ax.get_minor_locator()).__name__,
"majorTickFormatter": type(ax.get_major_formatter()).__name__,
"minorTickFormatter": type(ax.get_minor_formatter()).__name__,
"gridStyle": self.get_dict_for_grid_style(ax),
"visible": ax.get_visible()}
if not (isinstance(ax, matplotlib.axis.YAxis) or isinstance(ax, matplotlib.axis.XAxis)):
raise ValueError("Value passed is not a valid axis")
if isinstance(ax.get_major_locator(), ticker.FixedLocator):
            prop_dict["majorTickLocatorValues"] = list(ax.get_major_locator().locs)
else:
prop_dict["majorTickLocatorValues"] = None
if isinstance(ax.get_minor_locator(), ticker.FixedLocator):
            prop_dict["minorTickLocatorValues"] = list(ax.get_minor_locator().locs)
else:
prop_dict["minorTickLocatorValues"] = None
formatter = ax.get_major_formatter()
if isinstance(formatter, ticker.FixedFormatter):
prop_dict["majorTickFormat"] = list(formatter.seq)
else:
prop_dict["majorTickFormat"] = None
formatter = ax.get_minor_formatter()
if isinstance(formatter, ticker.FixedFormatter):
prop_dict["minorTickFormat"] = list(formatter.seq)
else:
prop_dict["minorTickFormat"] = None
labels = ax.get_ticklabels()
if labels:
prop_dict["fontSize"] = labels[0].get_fontsize()
else:
prop_dict["fontSize"] = ""
return prop_dict
@staticmethod
def get_dict_for_grid_style(ax):
grid_style = {}
gridlines = ax.get_gridlines()
if ax._gridOnMajor and len(gridlines) > 0:
grid_style["color"] = to_hex(gridlines[0].get_color())
grid_style["alpha"] = gridlines[0].get_alpha()
grid_style["gridOn"] = True
grid_style["minorGridOn"] = ax._gridOnMinor
else:
grid_style["gridOn"] = False
return grid_style
def get_dict_from_line(self, line, index=0):
line_dict = {"lineIndex": index,
"label": line.get_label(),
"alpha": line.get_alpha(),
"color": to_hex(line.get_color()),
"lineWidth": line.get_linewidth(),
"lineStyle": line.get_linestyle(),
"markerStyle": self.get_dict_from_marker_style(line),
"errorbars": self.get_dict_for_errorbars(line)}
if line_dict["alpha"] is None:
line_dict["alpha"] = 1
return line_dict
def get_dict_for_errorbars(self, line):
if self.figure_creation_args[0]["function"] == "errorbar":
return {"exists": True,
"dashCapStyle": line.get_dash_capstyle(),
"dashJoinStyle": line.get_dash_joinstyle(),
"solidCapStyle": line.get_solid_capstyle(),
"solidJoinStyle": line.get_solid_joinstyle()}
else:
return {"exists": False}
@staticmethod
def get_dict_from_marker_style(line):
style_dict = {"faceColor": to_hex(line.get_markerfacecolor()),
"edgeColor": to_hex(line.get_markeredgecolor()),
"edgeWidth": line.get_markeredgewidth(),
"markerType": line.get_marker(),
"markerSize": line.get_markersize(),
"zOrder": line.get_zorder()}
return style_dict
def get_dict_from_text(self, text):
text_dict = {"text": text.get_text()}
if text_dict["text"]:
# text_dict["transform"] = text.get_transform()
text_dict["position"] = text.get_position()
text_dict["useTeX"] = text.get_usetex()
text_dict["style"] = self.get_dict_from_text_style(text)
return text_dict
@staticmethod
def get_dict_from_text_style(text):
style_dict = {"alpha": text.get_alpha(),
"textSize": text.get_size(),
"color": to_hex(text.get_color()),
"hAlign": text.get_horizontalalignment(),
"vAlign": text.get_verticalalignment(),
"rotation": text.get_rotation(),
"zOrder": text.get_zorder()}
if style_dict["alpha"] is None:
style_dict["alpha"] = 1
return style_dict
@staticmethod
def get_dict_from_fig_properties(fig):
return {"figWidth": fig.get_figwidth(), "figHeight": fig.get_figheight(), "dpi": fig.dpi}
@staticmethod
def get_dict_from_tick_properties(ax):
xaxis_major_kw = ax.xaxis._major_tick_kw
xaxis_minor_kw = ax.xaxis._minor_tick_kw
yaxis_major_kw = ax.yaxis._major_tick_kw
yaxis_minor_kw = ax.yaxis._minor_tick_kw
tick_dict = {
"xaxis": {
"major": {
"bottom": xaxis_major_kw['tick1On'],
"top": xaxis_major_kw['tick2On'],
"labelbottom": xaxis_major_kw['label1On'],
"labeltop": xaxis_major_kw['label2On']
},
"minor": {
"bottom": xaxis_minor_kw['tick1On'],
"top": xaxis_minor_kw['tick2On'],
"labelbottom": xaxis_minor_kw['label1On'],
"labeltop": xaxis_minor_kw['label2On']
}
},
"yaxis": {
"major": {
"left": yaxis_major_kw['tick1On'],
"right": yaxis_major_kw['tick2On'],
"labelleft": yaxis_major_kw['label1On'],
"labelright": yaxis_major_kw['label2On']
},
"minor": {
"left": yaxis_minor_kw['tick1On'],
"right": yaxis_minor_kw['tick2On'],
"labelleft": yaxis_minor_kw['label1On'],
"labelright": yaxis_minor_kw['label2On']
}
}
}
        # Set non-guaranteed keys in tick_dict
for axis in tick_dict:
for size in tick_dict[axis]:
# Setup keyword dict for given axis and size (major/minor)
keyword_dict_variable_name = f"{axis}_{size}_kw"
keyword_dict = locals()[keyword_dict_variable_name]
if "tickdir" in keyword_dict:
tick_dict[axis][size]["direction"] = keyword_dict["tickdir"]
if "size" in keyword_dict:
tick_dict[axis][size]["size"] = keyword_dict["size"]
if "width" in keyword_dict:
tick_dict[axis][size]["width"] = keyword_dict["width"]
return tick_dict
@staticmethod
def get_dict_from_spine_widths(ax):
return {
'left': ax.spines['left']._linewidth,
'right': ax.spines['right']._linewidth,
'bottom': ax.spines['bottom']._linewidth,
'top': ax.spines['top']._linewidth}
| gpl-3.0 |
benschmaus/catapult | third_party/google-endpoints/future/utils/__init__.py | 36 | 20238 | """
A selection of cross-compatible functions for Python 2 and 3.
This module exports useful functions for 2/3 compatible code:
* bind_method: binds functions to classes
* ``native_str_to_bytes`` and ``bytes_to_native_str``
* ``native_str``: always equal to the native platform string object (because
this may be shadowed by imports from future.builtins)
* lists: lrange(), lmap(), lzip(), lfilter()
* iterable method compatibility:
- iteritems, iterkeys, itervalues
- viewitems, viewkeys, viewvalues
These use the original method if available, otherwise they use items,
keys, values.
* types:
* text_type: unicode in Python 2, str in Python 3
    * binary_type: str in Python 2, bytes in Python 3
* string_types: basestring in Python 2, str in Python 3
* bchr(c):
Take an integer and make a 1-character byte string
* bord(c)
Take the result of indexing on a byte string and make an integer
* tobytes(s)
Take a text string, a byte string, or a sequence of characters taken
from a byte string, and make a byte string.
* raise_from()
* raise_with_traceback()
This module also defines these decorators:
* ``python_2_unicode_compatible``
* ``with_metaclass``
* ``implements_iterator``
Some of the functions in this module come from the following sources:
* Jinja2 (BSD licensed: see
https://github.com/mitsuhiko/jinja2/blob/master/LICENSE)
* Pandas compatibility module pandas.compat
* six.py by Benjamin Peterson
* Django
"""
import types
import sys
import numbers
import functools
import copy
import inspect
PY3 = sys.version_info[0] == 3
PY2 = sys.version_info[0] == 2
PY26 = sys.version_info[0:2] == (2, 6)
PY27 = sys.version_info[0:2] == (2, 7)
PYPY = hasattr(sys, 'pypy_translation_info')
def python_2_unicode_compatible(cls):
"""
A decorator that defines __unicode__ and __str__ methods under Python
2. Under Python 3, this decorator is a no-op.
To support Python 2 and 3 with a single code base, define a __str__
method returning unicode text and apply this decorator to the class, like
this::
>>> from future.utils import python_2_unicode_compatible
>>> @python_2_unicode_compatible
... class MyClass(object):
... def __str__(self):
... return u'Unicode string: \u5b54\u5b50'
>>> a = MyClass()
Then, after this import:
>>> from future.builtins import str
the following is ``True`` on both Python 3 and 2::
>>> str(a) == a.encode('utf-8').decode('utf-8')
True
and, on a Unicode-enabled terminal with the right fonts, these both print the
Chinese characters for Confucius::
>>> print(a)
>>> print(str(a))
The implementation comes from django.utils.encoding.
"""
if not PY3:
cls.__unicode__ = cls.__str__
cls.__str__ = lambda self: self.__unicode__().encode('utf-8')
return cls
def with_metaclass(meta, *bases):
"""
Function from jinja2/_compat.py. License: BSD.
Use it like this::
class BaseForm(object):
pass
class FormType(type):
pass
class Form(with_metaclass(FormType, BaseForm)):
pass
This requires a bit of explanation: the basic idea is to make a
dummy metaclass for one level of class instantiation that replaces
itself with the actual metaclass. Because of internal type checks
we also need to make sure that we downgrade the custom metaclass
for one level to something closer to type (that's why __call__ and
__init__ comes back from type etc.).
This has the advantage over six.with_metaclass of not introducing
dummy classes into the final MRO.
"""
class metaclass(meta):
__call__ = type.__call__
__init__ = type.__init__
def __new__(cls, name, this_bases, d):
if this_bases is None:
return type.__new__(cls, name, (), d)
return meta(name, bases, d)
return metaclass('temporary_class', None, {})
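Exercising the docstring's `BaseForm`/`FormType`/`Form` example end to end; the helper is copied here so the sketch is self-contained:

```python
def with_metaclass(meta, *bases):
    # Self-contained copy of the helper above (from jinja2/_compat.py).
    class metaclass(meta):
        __call__ = type.__call__
        __init__ = type.__init__

        def __new__(cls, name, this_bases, d):
            if this_bases is None:
                return type.__new__(cls, name, (), d)
            return meta(name, bases, d)
    return metaclass('temporary_class', None, {})


class BaseForm(object):
    pass


class FormType(type):
    pass


class Form(with_metaclass(FormType, BaseForm)):
    pass
```

The dummy metaclass intercepts the creation of `Form` and re-dispatches to `FormType('Form', (BaseForm,), ...)`, so the temporary class never appears in the final MRO.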
# Definitions from pandas.compat and six.py follow:
if PY3:
def bchr(s):
return bytes([s])
def bstr(s):
if isinstance(s, str):
return bytes(s, 'latin-1')
else:
return bytes(s)
def bord(s):
return s
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
else:
# Python 2
def bchr(s):
return chr(s)
def bstr(s):
return str(s)
def bord(s):
return ord(s)
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
###
if PY3:
def tobytes(s):
if isinstance(s, bytes):
return s
else:
if isinstance(s, str):
return s.encode('latin-1')
else:
return bytes(s)
else:
# Python 2
def tobytes(s):
if isinstance(s, unicode):
return s.encode('latin-1')
else:
return ''.join(s)
tobytes.__doc__ = """
Encodes to latin-1 (where the first 256 chars are the same as
ASCII.)
"""
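The Python 3 branch of `tobytes` can be exercised directly; this is a self-contained copy of the logic above:

```python
def tobytes(s):
    # Py3 branch of the helper above: bytes pass through, text is
    # encoded as latin-1, and any other sequence of ints becomes bytes.
    if isinstance(s, bytes):
        return s
    elif isinstance(s, str):
        return s.encode('latin-1')
    else:
        return bytes(s)
```

Latin-1 maps each of the first 256 code points to a single byte, which is why it is used as the catch-all encoding here.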
if PY3:
def native_str_to_bytes(s, encoding='utf-8'):
return s.encode(encoding)
def bytes_to_native_str(b, encoding='utf-8'):
return b.decode(encoding)
def text_to_native_str(t, encoding=None):
return t
else:
# Python 2
def native_str_to_bytes(s, encoding=None):
from future.types import newbytes # to avoid a circular import
return newbytes(s)
def bytes_to_native_str(b, encoding=None):
return native(b)
def text_to_native_str(t, encoding='ascii'):
"""
Use this to create a Py2 native string when "from __future__ import
unicode_literals" is in effect.
"""
return unicode(t).encode(encoding)
native_str_to_bytes.__doc__ = """
On Py3, returns an encoded string.
On Py2, returns a newbytes type, ignoring the ``encoding`` argument.
"""
if PY3:
# list-producing versions of the major Python iterating functions
def lrange(*args, **kwargs):
return list(range(*args, **kwargs))
def lzip(*args, **kwargs):
return list(zip(*args, **kwargs))
def lmap(*args, **kwargs):
return list(map(*args, **kwargs))
def lfilter(*args, **kwargs):
return list(filter(*args, **kwargs))
else:
import __builtin__
# Python 2-builtin ranges produce lists
lrange = __builtin__.range
lzip = __builtin__.zip
lmap = __builtin__.map
lfilter = __builtin__.filter
def isidentifier(s, dotted=False):
'''
A function equivalent to the str.isidentifier method on Py3
'''
if dotted:
return all(isidentifier(a) for a in s.split('.'))
if PY3:
return s.isidentifier()
else:
import re
_name_re = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*$")
return bool(_name_re.match(s))
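A quick check of the dotted mode, using a self-contained copy of the Python 3 path of the helper above:

```python
def isidentifier(s, dotted=False):
    """Equivalent of str.isidentifier with optional dotted-name support
    (Python 3 path of the helper above)."""
    if dotted:
        # Every dot-separated component must itself be an identifier.
        return all(isidentifier(a) for a in s.split('.'))
    return s.isidentifier()
```

An empty component (as in `'os..path'`) fails because `''.isidentifier()` is False.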
def viewitems(obj, **kwargs):
"""
Function for iterating over dictionary items with the same set-like
behaviour on Py2.7 as on Py3.
Passes kwargs to method."""
func = getattr(obj, "viewitems", None)
if not func:
func = obj.items
return func(**kwargs)
def viewkeys(obj, **kwargs):
"""
Function for iterating over dictionary keys with the same set-like
behaviour on Py2.7 as on Py3.
Passes kwargs to method."""
func = getattr(obj, "viewkeys", None)
if not func:
func = obj.keys
return func(**kwargs)
def viewvalues(obj, **kwargs):
"""
Function for iterating over dictionary values with the same set-like
behaviour on Py2.7 as on Py3.
Passes kwargs to method."""
func = getattr(obj, "viewvalues", None)
if not func:
func = obj.values
return func(**kwargs)
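The `getattr`-with-fallback dispatch used by all three view helpers can be demonstrated with a plain dict and a hypothetical Py2-style mapping; `LegacyMapping` is invented purely for illustration:

```python
def viewitems(obj, **kwargs):
    # Same dispatch as above: prefer a .viewitems method, fall back to .items.
    func = getattr(obj, "viewitems", None)
    if not func:
        func = obj.items
    return func(**kwargs)


class LegacyMapping(object):
    """Hypothetical Py2-style mapping that exposes only viewitems()."""
    def __init__(self, data):
        self._data = dict(data)

    def viewitems(self):
        return self._data.items()
```

On Python 3 a plain dict has no `viewitems`, so the fallback returns `dict.items()`, whose result is already set-like.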
def iteritems(obj, **kwargs):
"""Use this only if compatibility with Python versions before 2.7 is
required. Otherwise, prefer viewitems().
"""
func = getattr(obj, "iteritems", None)
if not func:
func = obj.items
return func(**kwargs)
def iterkeys(obj, **kwargs):
"""Use this only if compatibility with Python versions before 2.7 is
required. Otherwise, prefer viewkeys().
"""
func = getattr(obj, "iterkeys", None)
if not func:
func = obj.keys
return func(**kwargs)
def itervalues(obj, **kwargs):
"""Use this only if compatibility with Python versions before 2.7 is
required. Otherwise, prefer viewvalues().
"""
func = getattr(obj, "itervalues", None)
if not func:
func = obj.values
return func(**kwargs)
def bind_method(cls, name, func):
"""Bind a method to class, python 2 and python 3 compatible.
Parameters
----------
cls : type
class to receive bound method
name : basestring
name of method on class instance
func : function
function to be bound as method
Returns
-------
None
"""
# only python 2 has an issue with bound/unbound methods
if not PY3:
setattr(cls, name, types.MethodType(func, None, cls))
else:
setattr(cls, name, func)
def getexception():
return sys.exc_info()[1]
def _get_caller_globals_and_locals():
"""
Returns the globals and locals of the calling frame.
Is there an alternative to frame hacking here?
"""
caller_frame = inspect.stack()[2]
myglobals = caller_frame[0].f_globals
mylocals = caller_frame[0].f_locals
return myglobals, mylocals
def _repr_strip(mystring):
"""
Returns the string without any initial or final quotes.
"""
r = repr(mystring)
if r.startswith("'") and r.endswith("'"):
return r[1:-1]
else:
return r
if PY3:
def raise_from(exc, cause):
"""
Equivalent to:
raise EXCEPTION from CAUSE
on Python 3. (See PEP 3134).
"""
myglobals, mylocals = _get_caller_globals_and_locals()
# We pass the exception and cause along with other globals
# when we exec():
myglobals = myglobals.copy()
myglobals['__python_future_raise_from_exc'] = exc
myglobals['__python_future_raise_from_cause'] = cause
execstr = "raise __python_future_raise_from_exc from __python_future_raise_from_cause"
exec(execstr, myglobals, mylocals)
def raise_(tp, value=None, tb=None):
"""
A function that matches the Python 2.x ``raise`` statement. This
        allows re-raising exceptions with the class, value and traceback on
Python 2 and 3.
"""
if value is not None and isinstance(tp, Exception):
raise TypeError("instance exception may not have a separate value")
if value is not None:
exc = tp(value)
else:
exc = tp
if exc.__traceback__ is not tb:
raise exc.with_traceback(tb)
raise exc
def raise_with_traceback(exc, traceback=Ellipsis):
if traceback == Ellipsis:
_, _, traceback = sys.exc_info()
raise exc.with_traceback(traceback)
else:
def raise_from(exc, cause):
"""
Equivalent to:
raise EXCEPTION from CAUSE
on Python 3. (See PEP 3134).
"""
# Is either arg an exception class (e.g. IndexError) rather than
# instance (e.g. IndexError('my message here')? If so, pass the
# name of the class undisturbed through to "raise ... from ...".
if isinstance(exc, type) and issubclass(exc, Exception):
e = exc()
# exc = exc.__name__
# execstr = "e = " + _repr_strip(exc) + "()"
# myglobals, mylocals = _get_caller_globals_and_locals()
# exec(execstr, myglobals, mylocals)
else:
e = exc
e.__suppress_context__ = False
if isinstance(cause, type) and issubclass(cause, Exception):
e.__cause__ = cause()
e.__suppress_context__ = True
elif cause is None:
e.__cause__ = None
e.__suppress_context__ = True
elif isinstance(cause, BaseException):
e.__cause__ = cause
e.__suppress_context__ = True
else:
raise TypeError("exception causes must derive from BaseException")
e.__context__ = sys.exc_info()[1]
raise e
exec('''
def raise_(tp, value=None, tb=None):
raise tp, value, tb
def raise_with_traceback(exc, traceback=Ellipsis):
if traceback == Ellipsis:
_, _, traceback = sys.exc_info()
raise exc, None, traceback
'''.strip())
raise_with_traceback.__doc__ = (
"""Raise exception with existing traceback.
If traceback is not passed, uses sys.exc_info() to get traceback."""
)
# Deprecated alias for backward compatibility with ``future`` versions < 0.11:
reraise = raise_
def implements_iterator(cls):
'''
From jinja2/_compat.py. License: BSD.
Use as a decorator like this::
@implements_iterator
class UppercasingIterator(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __iter__(self):
return self
def __next__(self):
return next(self._iter).upper()
'''
if PY3:
return cls
else:
cls.next = cls.__next__
del cls.__next__
return cls
if PY3:
get_next = lambda x: x.next
else:
get_next = lambda x: x.__next__
def encode_filename(filename):
if PY3:
return filename
else:
if isinstance(filename, unicode):
return filename.encode('utf-8')
return filename
def is_new_style(cls):
"""
Python 2.7 has both new-style and old-style classes. Old-style classes can
be pesky in some circumstances, such as when using inheritance. Use this
function to test for whether a class is new-style. (Python 3 only has
new-style classes.)
"""
return hasattr(cls, '__class__') and ('__dict__' in dir(cls)
or hasattr(cls, '__slots__'))
# The native platform string and bytes types. Useful because ``str`` and
# ``bytes`` are redefined on Py2 by ``from future.builtins import *``.
native_str = str
native_bytes = bytes
def istext(obj):
"""
Deprecated. Use::
>>> isinstance(obj, str)
after this import:
>>> from future.builtins import str
"""
return isinstance(obj, type(u''))
def isbytes(obj):
"""
Deprecated. Use::
>>> isinstance(obj, bytes)
after this import:
>>> from future.builtins import bytes
"""
return isinstance(obj, type(b''))
def isnewbytes(obj):
"""
Equivalent to the result of ``isinstance(obj, newbytes)`` were
``__instancecheck__`` not overridden on the newbytes subclass. In
    other words, is it REALLY a newbytes instance, not a Py2 native str
    object?
"""
# TODO: generalize this so that it works with subclasses of newbytes
# Import is here to avoid circular imports:
from future.types.newbytes import newbytes
return type(obj) == newbytes
def isint(obj):
"""
Deprecated. Tests whether an object is a Py3 ``int`` or either a Py2 ``int`` or
``long``.
Instead of using this function, you can use:
>>> from future.builtins import int
>>> isinstance(obj, int)
The following idiom is equivalent:
>>> from numbers import Integral
>>> isinstance(obj, Integral)
"""
return isinstance(obj, numbers.Integral)
def native(obj):
"""
On Py3, this is a no-op: native(obj) -> obj
On Py2, returns the corresponding native Py2 types that are
superclasses for backported objects from Py3:
>>> from builtins import str, bytes, int
>>> native(str(u'ABC'))
u'ABC'
>>> type(native(str(u'ABC')))
unicode
>>> native(bytes(b'ABC'))
b'ABC'
>>> type(native(bytes(b'ABC')))
bytes
>>> native(int(10**20))
100000000000000000000L
>>> type(native(int(10**20)))
long
Existing native types on Py2 will be returned unchanged:
>>> type(native(u'ABC'))
unicode
"""
if hasattr(obj, '__native__'):
return obj.__native__()
else:
return obj
# Implementation of exec_ is from ``six``:
if PY3:
import builtins
exec_ = getattr(builtins, "exec")
else:
def exec_(code, globs=None, locs=None):
"""Execute code in a namespace."""
if globs is None:
frame = sys._getframe(1)
globs = frame.f_globals
if locs is None:
locs = frame.f_locals
del frame
elif locs is None:
locs = globs
exec("""exec code in globs, locs""")
# Defined here for backward compatibility:
def old_div(a, b):
"""
DEPRECATED: import ``old_div`` from ``past.utils`` instead.
Equivalent to ``a / b`` on Python 2 without ``from __future__ import
division``.
TODO: generalize this to other objects (like arrays etc.)
"""
if isinstance(a, numbers.Integral) and isinstance(b, numbers.Integral):
return a // b
else:
return a / b
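`old_div`'s two branches are easy to pin down with a few cases; note that for two integers Py2 `/` floors toward negative infinity, which `//` reproduces:

```python
import numbers


def old_div(a, b):
    # Py2 ``/`` semantics (copy of the function above): floor division
    # when both operands are integral, true division otherwise.
    if isinstance(a, numbers.Integral) and isinstance(b, numbers.Integral):
        return a // b
    return a / b
```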
def as_native_str(encoding='utf-8'):
'''
A decorator to turn a function or method call that returns text, i.e.
unicode, into one that returns a native platform str.
Use it as a decorator like this::
from __future__ import unicode_literals
class MyClass(object):
@as_native_str(encoding='ascii')
def __repr__(self):
return next(self._iter).upper()
'''
if PY3:
return lambda f: f
else:
def encoder(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
return f(*args, **kwargs).encode(encoding=encoding)
return wrapper
return encoder
# listvalues and listitems definitions from Nick Coghlan's (withdrawn)
# PEP 496:
try:
dict.iteritems
except AttributeError:
# Python 3
def listvalues(d):
return list(d.values())
def listitems(d):
return list(d.items())
else:
# Python 2
def listvalues(d):
return d.values()
def listitems(d):
return d.items()
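The `dict.iteritems` probe above is feature detection rather than a version check; this self-contained copy behaves identically and, on Python 3, takes the `except AttributeError` branch:

```python
try:
    dict.iteritems          # attribute only exists on Python 2
except AttributeError:
    # Python 3: materialise the views into real lists.
    def listvalues(d):
        return list(d.values())

    def listitems(d):
        return list(d.items())
else:
    # Python 2: .values()/.items() already return lists.
    def listvalues(d):
        return d.values()

    def listitems(d):
        return d.items()
```

Probing for the capability directly avoids fragile `sys.version_info` comparisons.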
if PY3:
def ensure_new_type(obj):
return obj
else:
def ensure_new_type(obj):
from future.types.newbytes import newbytes
from future.types.newstr import newstr
from future.types.newint import newint
from future.types.newdict import newdict
native_type = type(native(obj))
# Upcast only if the type is already a native (non-future) type
if issubclass(native_type, type(obj)):
# Upcast
if native_type == str: # i.e. Py2 8-bit str
return newbytes(obj)
elif native_type == unicode:
return newstr(obj)
elif native_type == int:
return newint(obj)
elif native_type == long:
return newint(obj)
elif native_type == dict:
return newdict(obj)
else:
return obj
else:
# Already a new type
assert type(obj) in [newbytes, newstr]
return obj
__all__ = ['PY2', 'PY26', 'PY3', 'PYPY',
'as_native_str', 'bind_method', 'bord', 'bstr',
'bytes_to_native_str', 'encode_filename', 'ensure_new_type',
'exec_', 'get_next', 'getexception', 'implements_iterator',
'is_new_style', 'isbytes', 'isidentifier', 'isint',
'isnewbytes', 'istext', 'iteritems', 'iterkeys', 'itervalues',
'lfilter', 'listitems', 'listvalues', 'lmap', 'lrange',
'lzip', 'native', 'native_bytes', 'native_str',
'native_str_to_bytes', 'old_div',
'python_2_unicode_compatible', 'raise_',
'raise_with_traceback', 'reraise', 'text_to_native_str',
'tobytes', 'viewitems', 'viewkeys', 'viewvalues',
'with_metaclass'
]
| bsd-3-clause |
mattcaldwell/zipline | tests/utils/test_factory.py | 34 | 2175 | #
# Copyright 2013 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import TestCase
import pandas as pd
import pytz
import numpy as np
from zipline.utils.factory import (load_from_yahoo,
load_bars_from_yahoo)
class TestFactory(TestCase):
def test_load_from_yahoo(self):
stocks = ['AAPL', 'GE']
start = pd.datetime(1993, 1, 1, 0, 0, 0, 0, pytz.utc)
end = pd.datetime(2002, 1, 1, 0, 0, 0, 0, pytz.utc)
data = load_from_yahoo(stocks=stocks, start=start, end=end)
assert data.index[0] == pd.Timestamp('1993-01-04 00:00:00+0000')
assert data.index[-1] == pd.Timestamp('2001-12-31 00:00:00+0000')
for stock in stocks:
assert stock in data.columns
np.testing.assert_raises(
AssertionError, load_from_yahoo, stocks=stocks,
start=end, end=start
)
def test_load_bars_from_yahoo(self):
stocks = ['AAPL', 'GE']
start = pd.datetime(1993, 1, 1, 0, 0, 0, 0, pytz.utc)
end = pd.datetime(2002, 1, 1, 0, 0, 0, 0, pytz.utc)
data = load_bars_from_yahoo(stocks=stocks, start=start, end=end)
assert data.major_axis[0] == pd.Timestamp('1993-01-04 00:00:00+0000')
assert data.major_axis[-1] == pd.Timestamp('2001-12-31 00:00:00+0000')
for stock in stocks:
assert stock in data.items
for ohlc in ['open', 'high', 'low', 'close', 'volume', 'price']:
assert ohlc in data.minor_axis
np.testing.assert_raises(
AssertionError, load_bars_from_yahoo, stocks=stocks,
start=end, end=start
)
| apache-2.0 |
Aasmi/scikit-learn | sklearn/feature_selection/tests/test_rfe.py | 209 | 11733 | """
Testing Recursive feature elimination
"""
import warnings
import numpy as np
from numpy.testing import assert_array_almost_equal, assert_array_equal
from nose.tools import assert_equal, assert_true
from scipy import sparse
from sklearn.feature_selection.rfe import RFE, RFECV
from sklearn.datasets import load_iris, make_friedman1
from sklearn.metrics import zero_one_loss
from sklearn.svm import SVC, SVR
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score
from sklearn.utils import check_random_state
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_greater
from sklearn.metrics import make_scorer
from sklearn.metrics import get_scorer
class MockClassifier(object):
"""
    Dummy classifier to test recursive feature elimination
"""
def __init__(self, foo_param=0):
self.foo_param = foo_param
def fit(self, X, Y):
assert_true(len(X) == len(Y))
self.coef_ = np.ones(X.shape[1], dtype=np.float64)
return self
def predict(self, T):
return T.shape[0]
predict_proba = predict
decision_function = predict
transform = predict
def score(self, X=None, Y=None):
if self.foo_param > 1:
score = 1.
else:
score = 0.
return score
def get_params(self, deep=True):
return {'foo_param': self.foo_param}
def set_params(self, **params):
return self
def test_rfe_set_params():
generator = check_random_state(0)
iris = load_iris()
X = np.c_[iris.data, generator.normal(size=(len(iris.data), 6))]
y = iris.target
clf = SVC(kernel="linear")
rfe = RFE(estimator=clf, n_features_to_select=4, step=0.1)
y_pred = rfe.fit(X, y).predict(X)
clf = SVC()
with warnings.catch_warnings(record=True):
# estimator_params is deprecated
rfe = RFE(estimator=clf, n_features_to_select=4, step=0.1,
estimator_params={'kernel': 'linear'})
y_pred2 = rfe.fit(X, y).predict(X)
assert_array_equal(y_pred, y_pred2)
def test_rfe_features_importance():
generator = check_random_state(0)
iris = load_iris()
X = np.c_[iris.data, generator.normal(size=(len(iris.data), 6))]
y = iris.target
clf = RandomForestClassifier(n_estimators=20,
random_state=generator, max_depth=2)
rfe = RFE(estimator=clf, n_features_to_select=4, step=0.1)
rfe.fit(X, y)
assert_equal(len(rfe.ranking_), X.shape[1])
clf_svc = SVC(kernel="linear")
rfe_svc = RFE(estimator=clf_svc, n_features_to_select=4, step=0.1)
rfe_svc.fit(X, y)
# Check if the supports are equal
assert_array_equal(rfe.get_support(), rfe_svc.get_support())
def test_rfe_deprecation_estimator_params():
deprecation_message = ("The parameter 'estimator_params' is deprecated as "
"of version 0.16 and will be removed in 0.18. The "
"parameter is no longer necessary because the "
"value is set via the estimator initialisation or "
"set_params method.")
generator = check_random_state(0)
iris = load_iris()
X = np.c_[iris.data, generator.normal(size=(len(iris.data), 6))]
y = iris.target
assert_warns_message(DeprecationWarning, deprecation_message,
RFE(estimator=SVC(), n_features_to_select=4, step=0.1,
estimator_params={'kernel': 'linear'}).fit,
X=X,
y=y)
assert_warns_message(DeprecationWarning, deprecation_message,
RFECV(estimator=SVC(), step=1, cv=5,
estimator_params={'kernel': 'linear'}).fit,
X=X,
y=y)
def test_rfe():
generator = check_random_state(0)
iris = load_iris()
X = np.c_[iris.data, generator.normal(size=(len(iris.data), 6))]
X_sparse = sparse.csr_matrix(X)
y = iris.target
# dense model
clf = SVC(kernel="linear")
rfe = RFE(estimator=clf, n_features_to_select=4, step=0.1)
rfe.fit(X, y)
X_r = rfe.transform(X)
clf.fit(X_r, y)
assert_equal(len(rfe.ranking_), X.shape[1])
# sparse model
clf_sparse = SVC(kernel="linear")
rfe_sparse = RFE(estimator=clf_sparse, n_features_to_select=4, step=0.1)
rfe_sparse.fit(X_sparse, y)
X_r_sparse = rfe_sparse.transform(X_sparse)
assert_equal(X_r.shape, iris.data.shape)
assert_array_almost_equal(X_r[:10], iris.data[:10])
assert_array_almost_equal(rfe.predict(X), clf.predict(iris.data))
assert_equal(rfe.score(X, y), clf.score(iris.data, iris.target))
assert_array_almost_equal(X_r, X_r_sparse.toarray())
def test_rfe_mockclassifier():
generator = check_random_state(0)
iris = load_iris()
X = np.c_[iris.data, generator.normal(size=(len(iris.data), 6))]
y = iris.target
# dense model
clf = MockClassifier()
rfe = RFE(estimator=clf, n_features_to_select=4, step=0.1)
rfe.fit(X, y)
X_r = rfe.transform(X)
clf.fit(X_r, y)
assert_equal(len(rfe.ranking_), X.shape[1])
assert_equal(X_r.shape, iris.data.shape)
def test_rfecv():
generator = check_random_state(0)
iris = load_iris()
X = np.c_[iris.data, generator.normal(size=(len(iris.data), 6))]
y = list(iris.target) # regression test: list should be supported
# Test using the score function
rfecv = RFECV(estimator=SVC(kernel="linear"), step=1, cv=5)
rfecv.fit(X, y)
# non-regression test for missing worst feature:
assert_equal(len(rfecv.grid_scores_), X.shape[1])
assert_equal(len(rfecv.ranking_), X.shape[1])
X_r = rfecv.transform(X)
# All the noisy variables were filtered out
assert_array_equal(X_r, iris.data)
# same in sparse
rfecv_sparse = RFECV(estimator=SVC(kernel="linear"), step=1, cv=5)
X_sparse = sparse.csr_matrix(X)
rfecv_sparse.fit(X_sparse, y)
X_r_sparse = rfecv_sparse.transform(X_sparse)
assert_array_equal(X_r_sparse.toarray(), iris.data)
# Test using a customized loss function
scoring = make_scorer(zero_one_loss, greater_is_better=False)
rfecv = RFECV(estimator=SVC(kernel="linear"), step=1, cv=5,
scoring=scoring)
ignore_warnings(rfecv.fit)(X, y)
X_r = rfecv.transform(X)
assert_array_equal(X_r, iris.data)
# Test using a scorer
scorer = get_scorer('accuracy')
rfecv = RFECV(estimator=SVC(kernel="linear"), step=1, cv=5,
scoring=scorer)
rfecv.fit(X, y)
X_r = rfecv.transform(X)
assert_array_equal(X_r, iris.data)
# Test fix on grid_scores
def test_scorer(estimator, X, y):
return 1.0
rfecv = RFECV(estimator=SVC(kernel="linear"), step=1, cv=5,
scoring=test_scorer)
rfecv.fit(X, y)
assert_array_equal(rfecv.grid_scores_, np.ones(len(rfecv.grid_scores_)))
# Same as the first two tests, but with step=2
rfecv = RFECV(estimator=SVC(kernel="linear"), step=2, cv=5)
rfecv.fit(X, y)
assert_equal(len(rfecv.grid_scores_), 6)
assert_equal(len(rfecv.ranking_), X.shape[1])
X_r = rfecv.transform(X)
assert_array_equal(X_r, iris.data)
rfecv_sparse = RFECV(estimator=SVC(kernel="linear"), step=2, cv=5)
X_sparse = sparse.csr_matrix(X)
rfecv_sparse.fit(X_sparse, y)
X_r_sparse = rfecv_sparse.transform(X_sparse)
assert_array_equal(X_r_sparse.toarray(), iris.data)
def test_rfecv_mockclassifier():
generator = check_random_state(0)
iris = load_iris()
X = np.c_[iris.data, generator.normal(size=(len(iris.data), 6))]
y = list(iris.target) # regression test: list should be supported
# Test using the score function
rfecv = RFECV(estimator=MockClassifier(), step=1, cv=5)
rfecv.fit(X, y)
# non-regression test for missing worst feature:
assert_equal(len(rfecv.grid_scores_), X.shape[1])
assert_equal(len(rfecv.ranking_), X.shape[1])
def test_rfe_estimator_tags():
rfe = RFE(SVC(kernel='linear'))
assert_equal(rfe._estimator_type, "classifier")
# make sure that cross-validation is stratified
iris = load_iris()
score = cross_val_score(rfe, iris.data, iris.target)
assert_greater(score.min(), .7)
def test_rfe_min_step():
n_features = 10
X, y = make_friedman1(n_samples=50, n_features=n_features, random_state=0)
n_samples, n_features = X.shape
estimator = SVR(kernel="linear")
# Test when floor(step * n_features) <= 0
selector = RFE(estimator, step=0.01)
sel = selector.fit(X, y)
assert_equal(sel.support_.sum(), n_features // 2)
# Test when step is between (0,1) and floor(step * n_features) > 0
selector = RFE(estimator, step=0.20)
sel = selector.fit(X, y)
assert_equal(sel.support_.sum(), n_features // 2)
# Test when step is an integer
selector = RFE(estimator, step=5)
sel = selector.fit(X, y)
assert_equal(sel.support_.sum(), n_features // 2)
def test_number_of_subsets_of_features():
# In RFE, 'number_of_subsets_of_features'
# = the number of iterations in '_fit'
# = max(ranking_)
# = 1 + (n_features + step - n_features_to_select - 1) // step
# After optimization #4534, this number
# = 1 + np.ceil((n_features - n_features_to_select) / float(step))
# This test case is to test their equivalence, refer to #4534 and #3824
def formula1(n_features, n_features_to_select, step):
return 1 + ((n_features + step - n_features_to_select - 1) // step)
def formula2(n_features, n_features_to_select, step):
return 1 + np.ceil((n_features - n_features_to_select) / float(step))
# RFE
# Case 1, n_features - n_features_to_select is divisible by step
# Case 2, n_features - n_features_to_select is not divisible by step
n_features_list = [11, 11]
n_features_to_select_list = [3, 3]
step_list = [2, 3]
for n_features, n_features_to_select, step in zip(
n_features_list, n_features_to_select_list, step_list):
generator = check_random_state(43)
X = generator.normal(size=(100, n_features))
y = generator.rand(100).round()
rfe = RFE(estimator=SVC(kernel="linear"),
n_features_to_select=n_features_to_select, step=step)
rfe.fit(X, y)
# this number also equals the maximum of ranking_
assert_equal(np.max(rfe.ranking_),
formula1(n_features, n_features_to_select, step))
assert_equal(np.max(rfe.ranking_),
formula2(n_features, n_features_to_select, step))
# In RFECV, 'fit' calls 'RFE._fit'
# 'number_of_subsets_of_features' of RFE
# = the size of 'grid_scores' of RFECV
# = the number of iterations of the for loop before optimization #4534
# RFECV, n_features_to_select = 1
# Case 1, n_features - 1 is divisible by step
# Case 2, n_features - 1 is not divisible by step
n_features_to_select = 1
n_features_list = [11, 10]
step_list = [2, 2]
for n_features, step in zip(n_features_list, step_list):
generator = check_random_state(43)
X = generator.normal(size=(100, n_features))
y = generator.rand(100).round()
rfecv = RFECV(estimator=SVC(kernel="linear"), step=step, cv=5)
rfecv.fit(X, y)
assert_equal(rfecv.grid_scores_.shape[0],
formula1(n_features, n_features_to_select, step))
assert_equal(rfecv.grid_scores_.shape[0],
formula2(n_features, n_features_to_select, step))
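The two counting formulas exercised in test_number_of_subsets_of_features can be checked directly in plain Python (a standalone sketch, using math.ceil in place of np.ceil):

```python
from math import ceil

def formula1(n_features, n_features_to_select, step):
    # Iteration count via integer ceiling division (pre-#4534 form)
    return 1 + ((n_features + step - n_features_to_select - 1) // step)

def formula2(n_features, n_features_to_select, step):
    # Same count via an explicit ceiling (post-#4534 form)
    return 1 + ceil((n_features - n_features_to_select) / step)

# The divisible and non-divisible RFE cases from the test, plus the RFECV cases
for n, k, s in [(11, 3, 2), (11, 3, 3), (11, 1, 2), (10, 1, 2)]:
    assert formula1(n, k, s) == formula2(n, k, s)
```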
# EntilZha/PyFunctional, functional/test/test_functional.py (BSD-3-Clause)
# pylint: skip-file
import unittest
import array
from collections import namedtuple
from itertools import product
from functional.pipeline import Sequence, is_iterable, _wrap, extend
from functional.transformations import name
from functional import seq, pseq
Data = namedtuple("Data", "x y")
def pandas_is_installed():
try:
global pandas
import pandas
return True
except ImportError:
return False
class TestPipeline(unittest.TestCase):
def setUp(self):
self.seq = seq
def assert_type(self, s):
self.assertTrue(isinstance(s, Sequence))
def assert_not_type(self, s):
self.assertFalse(isinstance(s, Sequence))
def assertIteratorEqual(self, iter_0, iter_1):
seq_0 = list(iter_0)
seq_1 = list(iter_1)
self.assertListEqual(seq_0, seq_1)
def test_is_iterable(self):
self.assertFalse(is_iterable([]))
self.assertTrue(is_iterable(iter([1, 2])))
def test_constructor(self):
self.assertRaises(TypeError, lambda: Sequence(1))
def test_base_sequence(self):
l = []
self.assert_type(self.seq(l))
self.assert_not_type(self.seq(l).sequence)
self.assert_type(self.seq(self.seq(l)))
self.assert_not_type(self.seq(self.seq(l)).sequence)
self.assert_not_type(self.seq(l)._base_sequence)
def test_eq(self):
l = [1, 2, 3]
self.assertIteratorEqual(self.seq(l).map(lambda x: x), self.seq(l))
def test_ne(self):
a = [1, 2, 3]
b = [1]
self.assertNotEqual(self.seq(a), self.seq(b))
def test_repr(self):
l = [1, 2, 3]
self.assertEqual(repr(l), repr(self.seq(l)))
def test_lineage_name(self):
f = lambda x: x
self.assertEqual(f.__name__, name(f))
f = "test"
self.assertEqual("test", name(f))
def test_str(self):
l = [1, 2, 3]
self.assertEqual(str(l), str(self.seq(l)))
def test_hash(self):
self.assertRaises(TypeError, lambda: hash(self.seq([1])))
def test_len(self):
l = [1, 2, 3]
s = self.seq(l)
self.assertEqual(len(l), s.size())
self.assertEqual(len(l), s.len())
def test_count(self):
l = self.seq([-1, -1, 1, 1, 1])
self.assertEqual(l.count(lambda x: x > 0), 3)
self.assertEqual(l.count(lambda x: x < 0), 2)
def test_getitem(self):
l = [1, 2, [3, 4, 5]]
s = self.seq(l).map(lambda x: x)
self.assertEqual(s[1], 2)
self.assertEqual(s[2], [3, 4, 5])
self.assert_type(s[2])
self.assertEqual(s[1:], [2, [3, 4, 5]])
self.assert_type(s[1:])
l = [{1, 2}, {2, 3}, {4, 5}]
s = self.seq(l)
self.assertIsInstance(s[0], set)
self.assertEqual(s[0], l[0])
def test_iter(self):
l = list(enumerate(self.seq([1, 2, 3])))
e = list(enumerate([1, 2, 3]))
self.assertEqual(l, e)
l = self.seq([1, 2, 3])
e = [1, 2, 3]
result = []
for n in l:
result.append(n)
self.assertEqual(result, e)
self.assert_type(l)
def test_contains(self):
string = "abcdef"
s = self.seq(iter(string)).map(lambda x: x)
self.assertTrue("c" in s)
def test_add(self):
l0 = self.seq([1, 2, 3]).map(lambda x: x)
l1 = self.seq([4, 5, 6])
l2 = [4, 5, 6]
expect = [1, 2, 3, 4, 5, 6]
self.assertEqual(l0 + l1, expect)
self.assertEqual(l0 + l2, expect)
def test_head(self):
l = self.seq([1, 2, 3]).map(lambda x: x)
self.assertEqual(l.head(), 1)
l = self.seq([[1, 2], 3, 4])
self.assertEqual(l.head(), [1, 2])
self.assert_type(l.head())
l = self.seq([])
with self.assertRaises(IndexError):
l.head()
def test_first(self):
l = self.seq([1, 2, 3]).map(lambda x: x)
self.assertEqual(l.first(), 1)
l = self.seq([[1, 2], 3, 4]).map(lambda x: x)
self.assertEqual(l.first(), [1, 2])
self.assert_type(l.first())
l = self.seq([])
with self.assertRaises(IndexError):
l.head()
def test_head_option(self):
l = self.seq([1, 2, 3]).map(lambda x: x)
self.assertEqual(l.head_option(), 1)
l = self.seq([[1, 2], 3, 4]).map(lambda x: x)
self.assertEqual(l.head_option(), [1, 2])
self.assert_type(l.head_option())
l = self.seq([])
self.assertIsNone(l.head_option())
def test_last(self):
l = self.seq([1, 2, 3]).map(lambda x: x)
self.assertEqual(l.last(), 3)
l = self.seq([1, 2, [3, 4]]).map(lambda x: x)
self.assertEqual(l.last(), [3, 4])
self.assert_type(l.last())
def test_last_option(self):
l = self.seq([1, 2, 3]).map(lambda x: x)
self.assertEqual(l.last_option(), 3)
l = self.seq([1, 2, [3, 4]]).map(lambda x: x)
self.assertEqual(l.last_option(), [3, 4])
self.assert_type(l.last_option())
l = self.seq([])
self.assertIsNone(l.last_option())
def test_init(self):
result = self.seq([1, 2, 3, 4]).map(lambda x: x).init()
expect = [1, 2, 3]
self.assertIteratorEqual(result, expect)
def test_tail(self):
l = self.seq([1, 2, 3, 4]).map(lambda x: x)
expect = [2, 3, 4]
self.assertIteratorEqual(l.tail(), expect)
def test_inits(self):
l = self.seq([1, 2, 3]).map(lambda x: x)
expect = [[1, 2, 3], [1, 2], [1], []]
self.assertIteratorEqual(l.inits(), expect)
self.assertIteratorEqual(l.inits().map(lambda s: s.sum()), [6, 3, 1, 0])
def test_tails(self):
l = self.seq([1, 2, 3]).map(lambda x: x)
expect = [[1, 2, 3], [2, 3], [3], []]
self.assertIteratorEqual(l.tails(), expect)
self.assertIteratorEqual(l.tails().map(lambda s: s.sum()), [6, 5, 3, 0])
def test_drop(self):
s = self.seq([1, 2, 3, 4, 5, 6])
expect = [5, 6]
result = s.drop(4)
self.assertIteratorEqual(result, expect)
self.assert_type(result)
self.assertIteratorEqual(s.drop(0), s)
self.assertIteratorEqual(s.drop(-1), s)
def test_drop_right(self):
s = self.seq([1, 2, 3, 4, 5]).map(lambda x: x)
expect = [1, 2, 3]
result = s.drop_right(2)
self.assert_type(result)
self.assertIteratorEqual(result, expect)
self.assertIteratorEqual(s.drop_right(0), s)
self.assertIteratorEqual(s.drop_right(-1), s)
s = seq(1, 2, 3, 4, 5).filter(lambda x: x < 4)
expect = [1, 2]
result = s.drop_right(1)
self.assert_type(result)
self.assertIteratorEqual(result, expect)
s = seq(5, 4, 3, 2, 1).sorted()
expect = [1, 2, 3]
result = s.drop_right(2)
self.assert_type(result)
self.assertIteratorEqual(result, expect)
def test_drop_while(self):
l = [1, 2, 3, 4, 5, 6, 7, 8]
f = lambda x: x < 4
expect = [4, 5, 6, 7, 8]
result = self.seq(l).drop_while(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_take(self):
s = self.seq([1, 2, 3, 4, 5, 6])
expect = [1, 2, 3, 4]
result = s.take(4)
self.assertIteratorEqual(result, expect)
self.assert_type(result)
self.assertIteratorEqual(s.take(0), self.seq([]))
self.assertIteratorEqual(s.take(-1), self.seq([]))
def test_take_while(self):
l = [1, 2, 3, 4, 5, 6, 7, 8]
f = lambda x: x < 4
expect = [1, 2, 3]
result = self.seq(l).take_while(f)
self.assertIteratorEqual(result, expect)
self.assert_type(result)
def test_union(self):
result = self.seq([1, 1, 2, 3, 3]).union([1, 4, 5])
expect = [1, 2, 3, 4, 5]
self.assert_type(result)
self.assertSetEqual(result.set(), set(expect))
def test_intersection(self):
result = self.seq([1, 2, 2, 3]).intersection([2, 3, 4, 5])
expect = [2, 3]
self.assert_type(result)
self.assertSetEqual(result.set(), set(expect))
def test_difference(self):
result = self.seq([1, 2, 3]).difference([2, 3, 4])
expect = [1]
self.assert_type(result)
self.assertSetEqual(result.set(), set(expect))
def test_symmetric_difference(self):
result = self.seq([1, 2, 3, 3]).symmetric_difference([2, 4, 5])
expect = [1, 3, 4, 5]
self.assert_type(result)
self.assertSetEqual(result.set(), set(expect))
def test_map(self):
f = lambda x: x * 2
l = [1, 2, 0, 5]
expect = [2, 4, 0, 10]
result = self.seq(l).map(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_select(self):
f = lambda x: x * 2
l = [1, 2, 0, 5]
expect = [2, 4, 0, 10]
result = self.seq(l).select(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_starmap(self):
f = lambda x, y: x * y
l = [(1, 1), (0, 3), (-3, 3), (4, 2)]
expect = [1, 0, -9, 8]
result = self.seq(l).starmap(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
result = self.seq(l).smap(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_filter(self):
f = lambda x: x > 0
l = [0, -1, 5, 10]
expect = [5, 10]
s = self.seq(l)
result = s.filter(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_where(self):
f = lambda x: x > 0
l = [0, -1, 5, 10]
expect = [5, 10]
s = self.seq(l)
result = s.where(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_filter_not(self):
f = lambda x: x > 0
l = [0, -1, 5, 10]
expect = [0, -1]
result = self.seq(l).filter_not(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_map_filter(self):
f = lambda x: x > 0
g = lambda x: x * 2
l = [0, -1, 5]
s = self.seq(l)
expect = [10]
result = s.filter(f).map(g)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_reduce(self):
f = lambda x, y: x + y
l = ["a", "b", "c"]
expect = "abc"
s = self.seq(l)
self.assertEqual(expect, s.reduce(f))
with self.assertRaises(TypeError):
seq([]).reduce(f)
with self.assertRaises(ValueError):
seq([]).reduce(f, 0, 0)
self.assertEqual(seq([]).reduce(f, 1), 1)
self.assertEqual(seq([0, 2]).reduce(f, 1), 3)
def test_accumulate(self):
f = lambda x, y: x + y
l_char = ["a", "b", "c"]
expect_char = ["a", "ab", "abc"]
l_num = [1, 2, 3]
expect_num = [1, 3, 6]
self.assertEqual(seq(l_char).accumulate(), expect_char)
self.assertEqual(seq(l_num).accumulate(), expect_num)
def test_aggregate(self):
f = lambda current, next_element: current + next_element
l = self.seq([1, 2, 3, 4])
self.assertEqual(l.aggregate(f), 10)
self.assertEqual(l.aggregate(0, f), 10)
self.assertEqual(l.aggregate(0, f, lambda x: 2 * x), 20)
l = self.seq(["a", "b", "c"])
self.assertEqual(l.aggregate(f), "abc")
self.assertEqual(l.aggregate("", f), "abc")
self.assertEqual(l.aggregate("", f, lambda x: x.upper()), "ABC")
self.assertEqual(l.aggregate(f), "abc")
self.assertEqual(l.aggregate("z", f), "zabc")
self.assertEqual(l.aggregate("z", f, lambda x: x.upper()), "ZABC")
with self.assertRaises(ValueError):
l.aggregate()
with self.assertRaises(ValueError):
l.aggregate(None, None, None, None)
def test_fold_left(self):
f = lambda current, next_element: current + next_element
l = self.seq([1, 2, 3, 4])
self.assertEqual(l.fold_left(0, f), 10)
self.assertEqual(l.fold_left(-10, f), 0)
l = self.seq(["a", "b", "c"])
self.assertEqual(l.fold_left("", f), "abc")
self.assertEqual(l.fold_left("z", f), "zabc")
f = lambda x, y: x + [y]
self.assertEqual(l.fold_left([], f), ["a", "b", "c"])
self.assertEqual(l.fold_left(["start"], f), ["start", "a", "b", "c"])
def test_fold_right(self):
f = lambda next_element, current: current + next_element
l = self.seq([1, 2, 3, 4])
self.assertEqual(l.fold_right(0, f), 10)
self.assertEqual(l.fold_right(-10, f), 0)
l = self.seq(["a", "b", "c"])
self.assertEqual(l.fold_right("", f), "cba")
self.assertEqual(l.fold_right("z", f), "zcba")
f = lambda next_element, current: current + [next_element]
self.assertEqual(l.fold_right([], f), ["c", "b", "a"])
self.assertEqual(l.fold_right(["start"], f), ["start", "c", "b", "a"])
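The left and right fold semantics asserted in test_fold_left and test_fold_right can be sketched independently of PyFunctional with functools.reduce:

```python
from functools import reduce

def fold_left(items, zero, f):
    # f(f(f(zero, a), b), c): the accumulator is the first argument
    return reduce(f, items, zero)

def fold_right(items, zero, f):
    # f(a, f(b, f(c, zero))): the element is the first argument
    return reduce(lambda acc, x: f(x, acc), reversed(items), zero)

assert fold_left([1, 2, 3, 4], 0, lambda acc, x: acc + x) == 10
assert fold_right(["a", "b", "c"], "z", lambda x, acc: acc + x) == "zcba"
```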
def test_sorted(self):
s = self.seq([1, 3, 2, 5, 4])
r = s.sorted()
self.assertIteratorEqual([1, 2, 3, 4, 5], r)
self.assert_type(r)
def test_order_by(self):
s = self.seq([(2, "a"), (1, "b"), (4, "c"), (3, "d")])
r = s.order_by(lambda x: x[0])
self.assertIteratorEqual([(1, "b"), (2, "a"), (3, "d"), (4, "c")], r)
self.assert_type(r)
def test_reverse(self):
l = [1, 2, 3]
expect = [4, 3, 2]
s = self.seq(l).map(lambda x: x + 1)
result = s.reverse()
self.assertIteratorEqual(expect, result)
self.assert_type(result)
result = s.__reversed__()
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_distinct(self):
l = [1, 3, 1, 2, 2, 3]
expect = [1, 3, 2]
s = self.seq(l)
result = s.distinct()
self.assertEqual(result.size(), len(expect))
for er in zip(expect, result):
self.assertEqual(
er[0], er[1], "Order was not preserved after running distinct!"
)
for e in result:
self.assertTrue(e in expect)
self.assert_type(result)
def test_distinct_by(self):
s = self.seq(Data(1, 2), Data(1, 3), Data(2, 0), Data(3, -1), Data(1, 5))
expect = {Data(1, 2), Data(2, 0), Data(3, -1)}
result = s.distinct_by(lambda data: data.x)
self.assertSetEqual(set(result), expect)
self.assert_type(result)
def test_slice(self):
s = self.seq([1, 2, 3, 4])
result = s.slice(1, 2)
self.assertIteratorEqual(result, [2])
self.assert_type(result)
result = s.slice(1, 3)
self.assertIteratorEqual(result, [2, 3])
self.assert_type(result)
def test_any(self):
l = [True, False]
self.assertTrue(self.seq(l).any())
def test_all(self):
l = [True, False]
self.assertFalse(self.seq(l).all())
l = [True, True]
self.assertTrue(self.seq(l).all())
def test_enumerate(self):
l = [2, 3, 4]
e = [(0, 2), (1, 3), (2, 4)]
result = self.seq(l).enumerate()
self.assertIteratorEqual(result, e)
self.assert_type(result)
def test_inner_join(self):
l0 = [("a", 1), ("b", 2), ("c", 3)]
l1 = [("a", 2), ("c", 4), ("d", 5)]
result0 = self.seq(l0).inner_join(l1)
result1 = self.seq(l0).join(l1, "inner")
e = [("a", (1, 2)), ("c", (3, 4))]
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(e))
self.assertDictEqual(dict(result1), dict(e))
result0 = self.seq(l0).inner_join(self.seq(l1))
result1 = self.seq(l0).join(self.seq(l1), "inner")
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(e))
self.assertDictEqual(dict(result1), dict(e))
def test_left_join(self):
left = [("a", 1), ("b", 2)]
right = [("a", 2), ("c", 3)]
result0 = self.seq(left).left_join(right)
result1 = self.seq(left).join(right, "left")
expect = [("a", (1, 2)), ("b", (2, None))]
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(expect))
self.assertDictEqual(dict(result1), dict(expect))
result0 = self.seq(left).left_join(self.seq(right))
result1 = self.seq(left).join(self.seq(right), "left")
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(expect))
self.assertDictEqual(dict(result1), dict(expect))
def test_right_join(self):
left = [("a", 1), ("b", 2)]
right = [("a", 2), ("c", 3)]
result0 = self.seq(left).right_join(right)
result1 = self.seq(left).join(right, "right")
expect = [("a", (1, 2)), ("c", (None, 3))]
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(expect))
self.assertDictEqual(dict(result1), dict(expect))
result0 = self.seq(left).right_join(self.seq(right))
result1 = self.seq(left).join(self.seq(right), "right")
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(expect))
self.assertDictEqual(dict(result1), dict(expect))
def test_outer_join(self):
left = [("a", 1), ("b", 2)]
right = [("a", 2), ("c", 3)]
result0 = self.seq(left).outer_join(right)
result1 = self.seq(left).join(right, "outer")
expect = [("a", (1, 2)), ("b", (2, None)), ("c", (None, 3))]
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(expect))
self.assertDictEqual(dict(result1), dict(expect))
result0 = self.seq(left).outer_join(self.seq(right))
result1 = self.seq(left).join(self.seq(right), "outer")
self.assert_type(result0)
self.assert_type(result1)
self.assertDictEqual(dict(result0), dict(expect))
self.assertDictEqual(dict(result1), dict(expect))
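The None-filling behaviour asserted across the four join tests can be sketched as a plain key-based join over (key, value) pairs (a sketch, not PyFunctional's implementation; insertion-ordered dicts, i.e. Python 3.7+, are assumed):

```python
def outer_join(left, right):
    # Missing sides are filled with None, as the expectations above assert
    l, r = dict(left), dict(right)
    keys = dict.fromkeys(list(l) + list(r))  # ordered union of keys
    return [(k, (l.get(k), r.get(k))) for k in keys]

expected = [("a", (1, 2)), ("b", (2, None)), ("c", (None, 3))]
assert outer_join([("a", 1), ("b", 2)], [("a", 2), ("c", 3)]) == expected
```

Restricting the key union to keys present on both, only the left, or only the right side yields the inner, left, and right variants tested above.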
def test_join(self):
with self.assertRaises(TypeError):
self.seq([(1, 2)]).join([(2, 3)], "").to_list()
def test_max(self):
l = [1, 2, 3]
self.assertEqual(3, self.seq(l).max())
def test_min(self):
l = [1, 2, 3]
self.assertEqual(1, self.seq(l).min())
def test_max_by(self):
l = ["aa", "bbbb", "c", "dd"]
self.assertEqual("bbbb", self.seq(l).max_by(len))
def test_min_by(self):
l = ["aa", "bbbb", "c", "dd"]
self.assertEqual("c", self.seq(l).min_by(len))
def test_find(self):
l = [1, 2, 3]
f = lambda x: x == 3
g = lambda x: False
self.assertEqual(3, self.seq(l).find(f))
self.assertIsNone(self.seq(l).find(g))
def test_flatten(self):
l = [[1, 1, 1], [2, 2, 2], [[3, 3], [4, 4]]]
expect = [1, 1, 1, 2, 2, 2, [3, 3], [4, 4]]
result = self.seq(l).flatten()
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_flat_map(self):
l = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
f = lambda x: x
expect = [1, 1, 1, 2, 2, 2, 3, 3, 3]
result = self.seq(l).flat_map(f)
self.assertIteratorEqual(expect, result)
self.assert_type(result)
def test_group_by(self):
l = [(1, 1), (1, 2), (1, 3), (2, 2)]
f = lambda x: x[0]
expect = {1: [(1, 1), (1, 2), (1, 3)], 2: [(2, 2)]}
result = self.seq(l).group_by(f)
result_comparison = {}
for kv in result:
result_comparison[kv[0]] = kv[1]
self.assertIteratorEqual(expect, result_comparison)
self.assert_type(result)
def test_group_by_key(self):
l = [("a", 1), ("a", 2), ("a", 3), ("b", -1), ("b", 1), ("c", 10), ("c", 5)]
e = {"a": [1, 2, 3], "b": [-1, 1], "c": [10, 5]}.items()
result = self.seq(l).group_by_key()
self.assertEqual(result.len(), len(e))
for e0, e1 in zip(result, e):
self.assertIteratorEqual(e0, e1)
self.assert_type(result)
def test_grouped(self):
l = self.seq([1, 2, 3, 4, 5, 6, 7, 8])
expect = [[1, 2], [3, 4], [5, 6], [7, 8]]
self.assertIteratorEqual(map(list, l.grouped(2)), expect)
expect = [[1, 2, 3], [4, 5, 6], [7, 8]]
self.assertIteratorEqual(map(list, l.grouped(3)), expect)
def test_grouped_returns_list(self):
l = self.seq([1, 2, 3, 4, 5, 6, 7, 8])
self.assertTrue(is_iterable(l.grouped(2)))
self.assertTrue(is_iterable(l.grouped(3)))
def test_grouped_returns_list_of_lists(self):
test_inputs = [
[i for i in "abcdefghijklmnop"],
[None for i in range(10)],
[i for i in range(10)],
[[i] for i in range(10)],
[{i} for i in range(10)],
[{i, i + 1} for i in range(10)],
[[i, i + 1] for i in range(10)],
]
def gen_test(collection, group_size):
expected_type = type(collection[0])
types_after_grouping = (
seq(collection)
.grouped(group_size)
.flatten()
.map(lambda item: type(item))
)
err_msg = f"Typing was not maintained after grouping. An input of {collection} yielded output types of {set(types_after_grouping)} and not {expected_type} as expected."
return types_after_grouping.for_all(lambda t: t == expected_type), err_msg
for test_input in test_inputs:
for group_size in [1, 2, 4, 7]:
all_sub_collections_are_lists, err_msg = gen_test(
test_input, group_size
)
self.assertTrue(all_sub_collections_are_lists, msg=err_msg)
def test_sliding(self):
l = self.seq([1, 2, 3, 4, 5, 6, 7])
expect = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]]
self.assertIteratorEqual(l.sliding(2), expect)
l = self.seq([1, 2, 3])
expect = [[1, 2], [3]]
self.assertIteratorEqual(l.sliding(2, 2), expect)
expect = [[1, 2]]
self.assertIteratorEqual(l.sliding(2, 3), expect)
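The window shapes asserted in test_sliding follow from one simple rule, sketched here in plain Python (an approximation of the behaviour the assertions pin down, not the library's code):

```python
def sliding(lst, size, step=1):
    # Emit lst[i:i+size] windows, advancing i by step; stop once a
    # window reaches the end of the list.
    windows, i = [], 0
    while i < len(lst):
        windows.append(lst[i:i + size])
        if i + size >= len(lst):
            break
        i += step
    return windows

assert sliding([1, 2, 3], 2, 2) == [[1, 2], [3]]
assert sliding([1, 2, 3], 2, 3) == [[1, 2]]
```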
def test_empty(self):
self.assertTrue(self.seq([]).empty())
self.assertEqual(self.seq(), self.seq([]))
def test_non_empty(self):
self.assertTrue(self.seq([1]).non_empty())
def test_non_zero_bool(self):
self.assertTrue(bool(self.seq([1])))
self.assertFalse(bool(self.seq([])))
def test_make_string(self):
l = [1, 2, 3]
expect1 = "123"
expect2 = "1:2:3"
s = self.seq(l)
self.assertEqual(expect1, s.make_string(""))
self.assertEqual(expect2, s.make_string(":"))
s = self.seq([])
self.assertEqual("", s.make_string(""))
self.assertEqual("", s.make_string(":"))
def test_partition(self):
l = [-1, -2, -3, 1, 2, 3]
e2 = [-1, -2, -3]
e1 = [1, 2, 3]
f = lambda x: x > 0
s = self.seq(l)
p1, p2 = s.partition(f)
self.assertIteratorEqual(e1, list(p1))
self.assertIteratorEqual(e2, list(p2))
self.assert_type(p1)
self.assert_type(p2)
result = self.seq([[1, 2, 3], [4, 5, 6]]).flatten().partition(lambda x: x > 2)
expect = [[3, 4, 5, 6], [1, 2]]
self.assertIteratorEqual(expect, list(result))
self.assert_type(result)
def test_cartesian(self):
result = seq.range(3).cartesian(range(3)).list()
self.assertListEqual(result, list(product(range(3), range(3))))
result = seq.range(3).cartesian(range(3), range(2)).list()
self.assertListEqual(result, list(product(range(3), range(3), range(2))))
result = seq.range(3).cartesian(range(3), range(2), repeat=2).list()
self.assertListEqual(
result, list(product(range(3), range(3), range(2), repeat=2))
)
def test_product(self):
l = [2, 2, 3]
self.assertEqual(12, self.seq(l).product())
self.assertEqual(96, self.seq(l).product(lambda x: x * 2))
s = self.seq([])
self.assertEqual(1, s.product())
self.assertEqual(2, s.product(lambda x: x * 2))
s = self.seq([5])
self.assertEqual(5, s.product())
self.assertEqual(10, s.product(lambda x: x * 2))
def test_sum(self):
l = [1, 2, 3]
self.assertEqual(6, self.seq(l).sum())
self.assertEqual(12, self.seq(l).sum(lambda x: x * 2))
def test_average(self):
l = [1, 2]
self.assertEqual(1.5, self.seq(l).average())
self.assertEqual(4.5, self.seq(l).average(lambda x: x * 3))
def test_set(self):
l = [1, 1, 2, 2, 3]
ls = set(l)
self.assertIteratorEqual(ls, self.seq(l).set())
def test_zip(self):
l1 = [1, 2, 3]
l2 = [-1, -2, -3]
e = [(1, -1), (2, -2), (3, -3)]
result = self.seq(l1).zip(l2)
self.assertIteratorEqual(e, result)
self.assert_type(result)
def test_zip_with_index(self):
l = [2, 3, 4]
e = [(2, 0), (3, 1), (4, 2)]
result = self.seq(l).zip_with_index()
self.assertIteratorEqual(result, e)
self.assert_type(result)
e = [(2, 5), (3, 6), (4, 7)]
result = self.seq(l).zip_with_index(5)
self.assertIteratorEqual(result, e)
self.assert_type(result)
def test_to_list(self):
l = [1, 2, 3, "abc", {1: 2}, {1, 2, 3}]
result = self.seq(l).to_list()
self.assertIteratorEqual(result, l)
self.assertTrue(isinstance(result, list))
result = self.seq(iter([0, 1, 2])).to_list()
self.assertIsInstance(result, list)
result = self.seq(l).list(n=2)
self.assertEqual(result, [1, 2])
def test_list(self):
l = [1, 2, 3, "abc", {1: 2}, {1, 2, 3}]
result = self.seq(l).list()
self.assertEqual(result, l)
self.assertTrue(isinstance(result, list))
result = self.seq(iter([0, 1, 2])).to_list()
self.assertIsInstance(result, list)
result = self.seq(l).list(n=2)
self.assertEqual(result, [1, 2])
def test_for_each(self):
l = [1, 2, 3, "abc", {1: 2}, {1, 2, 3}]
result = []
def f(e):
result.append(e)
self.seq(l).for_each(f)
self.assertEqual(result, l)
def test_exists(self):
l = ["aaa", "BBB", "ccc"]
self.assertTrue(self.seq(l).exists(str.islower))
self.assertTrue(self.seq(l).exists(str.isupper))
self.assertFalse(self.seq(l).exists(lambda s: "d" in s))
def test_for_all(self):
l = ["aaa", "bbb", "ccc"]
self.assertTrue(self.seq(l).for_all(str.islower))
self.assertFalse(self.seq(l).for_all(str.isupper))
def test_to_dict(self):
l = [(1, 2), (2, 10), (7, 2)]
d = {1: 2, 2: 10, 7: 2}
result = self.seq(l).to_dict()
self.assertDictEqual(result, d)
self.assertTrue(isinstance(result, dict))
result = self.seq(l).to_dict(default=lambda: 100)
self.assertTrue(1 in result)
self.assertFalse(3 in result)
self.assertEqual(result[4], 100)
result = self.seq(l).dict(default=100)
self.assertTrue(1 in result)
self.assertFalse(3 in result)
self.assertEqual(result[4], 100)
def test_dict(self):
l = [(1, 2), (2, 10), (7, 2)]
d = {1: 2, 2: 10, 7: 2}
result = self.seq(l).dict()
self.assertDictEqual(result, d)
self.assertTrue(isinstance(result, dict))
result = self.seq(l).dict(default=lambda: 100)
self.assertTrue(1 in result)
self.assertFalse(3 in result)
self.assertEqual(result[4], 100)
result = self.seq(l).dict(default=100)
self.assertTrue(1 in result)
self.assertFalse(3 in result)
self.assertEqual(result[4], 100)
def test_reduce_by_key(self):
l = [("a", 1), ("a", 2), ("a", 3), ("b", -1), ("b", 1), ("c", 10), ("c", 5)]
e = {"a": 6, "b": 0, "c": 15}.items()
result = self.seq(l).reduce_by_key(lambda x, y: x + y)
self.assertEqual(result.len(), len(e))
for e0, e1 in zip(result, e):
self.assertEqual(e0, e1)
self.assert_type(result)
def test_count_by_key(self):
l = [
("a", 1),
("a", 2),
("a", 3),
("b", -1),
("b", 1),
("c", 10),
("c", 5),
("d", 1),
]
e = {"a": 3, "b": 2, "c": 2, "d": 1}.items()
result = self.seq(l).count_by_key()
self.assertEqual(result.len(), len(e))
for e0, e1 in zip(result, e):
self.assertEqual(e0, e1)
self.assert_type(result)
def test_count_by_value(self):
l = ["a", "a", "a", "b", "b", "c", "d"]
e = {"a": 3, "b": 2, "c": 1, "d": 1}.items()
result = self.seq(l).count_by_value()
self.assertEqual(result.len(), len(e))
for e0, e1 in zip(result, e):
self.assertEqual(e0, e1)
self.assert_type(result)
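reduce_by_key, count_by_key, and count_by_value all follow the same grouping pattern; here is a minimal sketch of the first (independent of PyFunctional, preserving first-seen key order via a plain dict):

```python
def reduce_by_key(pairs, f):
    # Fold values sharing a key with f; the first value seeds the accumulator
    acc = {}
    for k, v in pairs:
        acc[k] = f(acc[k], v) if k in acc else v
    return acc

pairs = [("a", 1), ("a", 2), ("a", 3), ("b", -1), ("b", 1), ("c", 10), ("c", 5)]
assert reduce_by_key(pairs, lambda x, y: x + y) == {"a": 6, "b": 0, "c": 15}
```

count_by_key is the same fold with every value replaced by 1, and count_by_value is that fold applied to (value, 1) pairs.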
def test_wrap(self):
self.assert_type(_wrap([1, 2]))
self.assert_type(_wrap((1, 2)))
self.assert_not_type(_wrap(1))
self.assert_not_type(_wrap(1.0))
self.assert_not_type(_wrap("test"))
self.assert_not_type(_wrap(True))
self.assert_not_type(_wrap(Data(1, 2)))
def test_wrap_objects(self):
class A(object):
a = 1
l = [A(), A(), A()]
self.assertIsInstance(_wrap(A()), A)
self.assert_type(self.seq(l))
@unittest.skipUnless(
pandas_is_installed(), "Skip pandas tests if pandas is not installed"
)
def test_wrap_pandas(self):
df1 = pandas.DataFrame({"name": ["name1", "name2"], "value": [1, 2]})
df2 = pandas.DataFrame({"name": ["name1", "name2"], "value": [3, 4]})
result = seq([df1, df2]).reduce(lambda x, y: x.append(y))
self.assertEqual(result.len(), 4)
self.assertEqual(result[0].to_list(), ["name1", 1])
self.assertEqual(result[1].to_list(), ["name2", 2])
self.assertEqual(result[2].to_list(), ["name1", 3])
self.assertEqual(result[3].to_list(), ["name2", 4])
def test_iterator_consumption(self):
sequence = self.seq([1, 2, 3])
first_transform = sequence.map(lambda x: x)
second_transform = first_transform.map(lambda x: x)
first_list = list(first_transform)
second_list = list(second_transform)
expect = [1, 2, 3]
self.assertIteratorEqual(first_list, expect)
self.assertIteratorEqual(second_list, expect)
def test_single_call(self):
if self.seq is pseq:
raise self.skipTest("pseq doesn't support functions with side-effects")
counter = []
def counter_func(x):
counter.append(1)
return x
list(self.seq([1, 2, 3, 4]).map(counter_func))
self.assertEqual(len(counter), 4)
def test_seq(self):
self.assertIteratorEqual(self.seq([1, 2, 3]), [1, 2, 3])
self.assertIteratorEqual(self.seq(1, 2, 3), [1, 2, 3])
self.assertIteratorEqual(self.seq(1), [1])
self.assertIteratorEqual(self.seq(iter([1, 2, 3])), [1, 2, 3])
self.assertIteratorEqual(self.seq(), [])
def test_lineage_repr(self):
s = self.seq(1).map(lambda x: x).filter(lambda x: True)
self.assertEqual(
repr(s._lineage), "Lineage: sequence -> map(<lambda>) -> filter(<lambda>)"
)
def test_cache(self):
if self.seq is pseq:
raise self.skipTest("pseq doesn't support functions with side-effects")
calls = []
func = lambda x: calls.append(x)
result = self.seq(1, 2, 3).map(func).cache().map(lambda x: x).to_list()
self.assertEqual(len(calls), 3)
self.assertEqual(result, [None, None, None])
result = self.seq(1, 2, 3).map(lambda x: x).cache()
self.assertEqual(
repr(result._lineage), "Lineage: sequence -> map(<lambda>) -> cache"
)
result = self.seq(1, 2, 3).map(lambda x: x).cache(delete_lineage=True)
self.assertEqual(repr(result._lineage), "Lineage: sequence")
def test_tabulate(self):
sequence = seq([[1, 2, 3], [4, 5, 6]])
self.assertEqual(sequence.show(), None)
self.assertNotEqual(sequence._repr_html_(), None)
result = sequence.tabulate()
self.assertEqual(result, "- - -\n1 2 3\n4 5 6\n- - -")
sequence = seq(1, 2, 3)
self.assertEqual(sequence.tabulate(), None)
class NotTabulatable(object):
pass
sequence = seq(NotTabulatable(), NotTabulatable(), NotTabulatable())
self.assertEqual(sequence.tabulate(), None)
long_data = seq([(i, i + 1) for i in range(30)])
self.assertTrue("Showing 10 of 30 rows" in long_data.tabulate(n=10))
self.assertTrue("Showing 10 of 30 rows" in long_data._repr_html_())
self.assertTrue(
"Showing 10 of 30 rows" not in long_data.tabulate(n=10, tablefmt="plain")
)
def test_tabulate_namedtuple(self):
sequence_tabulated = seq([Data(1, 2), Data(6, 7)]).tabulate()
self.assertEqual(sequence_tabulated, " x y\n--- ---\n 1 2\n 6 7")
def test_repr_max_lines(self):
sequence = seq.range(200)
self.assertEqual(len(repr(sequence)), 395)
sequence._max_repr_items = None
self.assertEqual(len(repr(sequence)), 890)
class TestExtend(unittest.TestCase):
def test_custom_functions(self):
@extend(aslist=True)
def my_zip(it):
return zip(it, it)
result = seq.range(3).my_zip().list()
expected = list(zip(range(3), range(3)))
self.assertEqual(result, expected)
result = seq.range(3).my_zip().my_zip().list()
expected = list(zip(expected, expected))
self.assertEqual(result, expected)
@extend
def square(it):
return [i ** 2 for i in it]
result = seq.range(100).square().list()
expected = [i ** 2 for i in range(100)]
self.assertEqual(result, expected)
name = "PARALLEL_SQUARE"
@extend(parallel=True, name=name)
def square_parallel(it):
return [i ** 2 for i in it]
result = seq.range(100).square_parallel()
self.assertEqual(result.sum(), sum(expected))
self.assertEqual(
repr(result._lineage), "Lineage: sequence -> extended[%s]" % name
)
@extend
def my_filter(it, n=10):
return (i for i in it if i > n)
# test keyword args
result = seq.range(20).my_filter(n=10).list()
expected = list(filter(lambda x: x > 10, range(20)))
self.assertEqual(result, expected)
# test args
result = seq.range(20).my_filter(10).list()
self.assertEqual(result, expected)
# test final
@extend(final=True)
def toarray(it):
return array.array("f", it)
result = seq.range(10).toarray()
expected = array.array("f", range(10))
self.assertEqual(result, expected)
result = seq.range(10).map(lambda x: x ** 2).toarray()
expected = array.array("f", [i ** 2 for i in range(10)])
self.assertEqual(result, expected)
# a more complex example combining all above
@extend()
def sum_pair(it):
return (i[0] + i[1] for i in it)
result = (
seq.range(100).my_filter(85).my_zip().sum_pair().square_parallel().toarray()
)
expected = array.array(
"f",
list(
map(
lambda x: (x[0] + x[1]) ** 2,
map(lambda x: (x, x), filter(lambda x: x > 85, range(100))),
)
),
)
self.assertEqual(result, expected)
class TestParallelPipeline(TestPipeline):
def setUp(self):
self.seq = pseq
| mit |
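The key-grouping behavior exercised by `test_reduce_by_key` and `test_count_by_key` above can be sketched with the standard library alone; `seq` itself comes from the PyFunctional library and is not used here, so this is only an illustrative model of the expected semantics, not the library's implementation:

```python
from collections import OrderedDict

def reduce_by_key(pairs, f):
    # Group (key, value) pairs, combining values that share a key with f;
    # OrderedDict preserves first-seen key order, matching the tests above.
    acc = OrderedDict()
    for k, v in pairs:
        acc[k] = f(acc[k], v) if k in acc else v
    return list(acc.items())

def count_by_key(pairs):
    # Counting is just reduce_by_key over (key, 1) pairs.
    return reduce_by_key(((k, 1) for k, _ in pairs), lambda a, b: a + b)

l = [("a", 1), ("a", 2), ("a", 3), ("b", -1), ("b", 1), ("c", 10), ("c", 5)]
print(reduce_by_key(l, lambda x, y: x + y))  # [('a', 6), ('b', 0), ('c', 15)]
print(count_by_key(l))                       # [('a', 3), ('b', 2), ('c', 2)]
```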
JohanComparat/nbody-npt-functions | bin/bin_SMHMr/plot_slice_simulation.py | 1 | 4452 |
import StellarMass
import XrayLuminosity
import numpy as n
from scipy.stats import norm
from scipy.integrate import quad
from scipy.interpolate import interp1d
import matplotlib
matplotlib.use('pdf')
import matplotlib.pyplot as p
import glob
import astropy.io.fits as fits
import os
import time
import sys
print " set up box, and redshift "
def get_slice(env='MD04', file_type="out", aexp='0.74230'):
# parameters of the slice
xmin, ymin, zmin = 0., 0., 0.
xmax, ymax, zmax = 60., 60., 30.
fileList = n.array(glob.glob(os.path.join(os.environ[env], "catalogs", file_type+"_"+aexp+"_*Xray.fits" )))
fileList.sort()
print fileList
def get_plot_data(fileN):
hd = fits.open(fileN)
xd = hd[1].data['x']
yd = hd[1].data['y']
zd = hd[1].data['z']
stellar_mass = hd[1].data['stellar_mass_Mo13_mvir']
selection = hd[1].data['stellar_mass_reliable']
LX_AGN = hd[1].data['lambda_sar_Bo16'] + hd[1].data['stellar_mass_Mo13_mvir']
active = hd[1].data['activity']
#hd[1].data['Mgas_cluster']
#hd[1].data['kT_cluster']
LX_cluster = hd[1].data['Lx_bol_cluster']
#hd[1].data['Lx_ce_cluster']
selection_spatial = (selection) & (xd > xmin) & (xd < xmax) & (yd > ymin) & (yd < ymax) & (zd > zmin) & (zd < zmax)
#agn_selection = (active)&(LX_AGN>42)
#cluster_selection = (LX_cluster>42)
zone = (selection_spatial) # & ( agn_selection | cluster_selection )
print "N points = ", len(xd[zone])
return xd[zone], yd[zone], zd[zone], stellar_mass[zone], LX_AGN[zone], LX_cluster[zone], active[zone]
y, x, z, mass, LX_AGN, LX_cluster, active = get_plot_data(fileList[0])
"""
for fileN in fileList[1:]:
print fileN
y_i, x_i, z_i, mass_i, LX_AGN_i, LX_cluster_i, active_i = get_plot_data(fileN)
x = n.hstack((x, x_i))
y = n.hstack((y, y_i))
z = n.hstack((z, z_i))
mass = n.hstack((mass, mass_i))
LX_AGN = n.hstack((LX_AGN, LX_AGN_i))
LX_cluster = n.hstack((LX_cluster, LX_cluster_i))
active = n.hstack((active, active_i))
"""
print "N points total", len(x)
def plot_slice(y, x, z, mass, LX_AGN, LX_cluster):
LX_cut = 41.5
agn_selection = (active)&(LX_AGN>LX_cut)
cluster_selection = (LX_cluster>LX_cut)
mass_selection = (mass>9)
print "N AGN in plot=", len(y[agn_selection])
print "N cluster in plot=", len(y[cluster_selection])
p.figure(1, (11,9))
p.plot(x[mass_selection], y[mass_selection], 'ko', rasterized=True, alpha=0.1, label='log(M)>9', markeredgecolor='k')
p.scatter(x[cluster_selection], y[cluster_selection], c=mass[cluster_selection] , s=mass[cluster_selection] , label="Cluster LX>"+str(LX_cut), rasterized=True, edgecolor='face', marker='s')
p.scatter(x[agn_selection], y[agn_selection], c=mass[agn_selection] , s=mass[agn_selection], label="AGN LX>"+str(LX_cut), rasterized=True, edgecolor='face', marker='*')
cb = p.colorbar(shrink=0.7)
cb.set_label('stellar mass')
p.xlabel(r'$x [Mpc/h]$')
p.ylabel(r'$y [Mpc/h]$')
p.grid()
p.xlim((xmin, xmax))
p.ylim((ymin, ymax))
p.legend()#frameon=False)
#p.title('Duty cycle 1%')
p.savefig(os.path.join(os.environ[env], "results", file_type+"_"+aexp+'_xy_sim_slice.pdf'))
p.clf()
p.figure(1, (11,9))
p.plot(x[mass_selection], y[mass_selection], 'ko', rasterized=True, alpha=0.1, label='log(M)>9', markeredgecolor='k')
p.scatter(x[cluster_selection], y[cluster_selection], c=LX_cluster[cluster_selection] , s=LX_cluster[cluster_selection]/2. , label="Cluster LX>"+str(LX_cut), rasterized=True, edgecolor='face', marker='s')
p.scatter(x[agn_selection], y[agn_selection], c=LX_AGN[agn_selection] , s=LX_AGN[agn_selection]/2., label="AGN LX>"+str(LX_cut), rasterized=True, edgecolor='face', marker='*')
cb = p.colorbar(shrink=0.7)
cb.set_label('Xray luminosity')
p.xlabel(r'$x [Mpc/h]$')
p.ylabel(r'$y [Mpc/h]$')
p.grid()
p.xlim((xmin, xmax))
p.ylim((ymin, ymax))
p.legend()#frameon=False)
#p.title('Duty cycle 1%')
p.savefig(os.path.join(os.environ[env], "results", file_type+"_"+aexp+'_xy_sim_slice_LX.pdf'))
p.clf()
plot_slice(y, x, z, mass, LX_AGN, LX_cluster)
get_slice(env='MD04', file_type="out", aexp='0.74230')
os.system("cp $MD04/results/*.pdf ~/wwwDir/eRoMok/plots/MD_0.4Gpc/")
get_slice(env='MD10', file_type="out", aexp='0.74980')
os.system("cp $MD10/results/*.pdf ~/wwwDir/eRoMok/plots/MD_1.0Gpc/")
#get_slice(env='MD25', file_type="out", aexp='0.75440')
#os.system("cp $MD25/results/*.pdf ~/wwwDir/eRoMok/plots/MD_2.5Gpc/")
| cc0-1.0 |
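The `selection_spatial` cut in `get_slice` above relies on NumPy boolean masks combined with `&`. A minimal sketch with made-up coordinates (the array values below are illustrative, not taken from any catalog; the box bounds match the ones in the file):

```python
import numpy as np

# Hypothetical coordinates standing in for the catalog columns x, y, z.
x = np.array([10., 70., 30., 55.])
y = np.array([5., 20., 61., 40.])
z = np.array([1., 10., 15., 29.])

xmin, ymin, zmin = 0., 0., 0.
xmax, ymax, zmax = 60., 60., 30.

# Each comparison yields a boolean array; & combines them elementwise,
# so parentheses around each comparison are required.
inside = (x > xmin) & (x < xmax) & (y > ymin) & (y < ymax) & (z > zmin) & (z < zmax)
print(x[inside])  # only points falling inside the box survive
```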
russel1237/scikit-learn | sklearn/utils/tests/test_validation.py | 79 | 18547 |
"""Tests for input validation functions"""
import warnings
from tempfile import NamedTemporaryFile
from itertools import product
import numpy as np
from numpy.testing import assert_array_equal
import scipy.sparse as sp
from nose.tools import assert_raises, assert_true, assert_false, assert_equal
from sklearn.utils.testing import assert_raises_regexp
from sklearn.utils.testing import assert_no_warnings
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import ignore_warnings
from sklearn.utils import as_float_array, check_array, check_symmetric
from sklearn.utils import check_X_y
from sklearn.utils.mocking import MockDataFrame
from sklearn.utils.estimator_checks import NotAnArray
from sklearn.random_projection import sparse_random_matrix
from sklearn.linear_model import ARDRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.datasets import make_blobs
from sklearn.utils.validation import (
NotFittedError,
has_fit_parameter,
check_is_fitted,
check_consistent_length,
DataConversionWarning,
)
from sklearn.utils.testing import assert_raise_message
def test_as_float_array():
# Test function for as_float_array
X = np.ones((3, 10), dtype=np.int32)
X = X + np.arange(10, dtype=np.int32)
# Checks that the return type is ok
X2 = as_float_array(X, copy=False)
np.testing.assert_equal(X2.dtype, np.float32)
# Another test
X = X.astype(np.int64)
X2 = as_float_array(X, copy=True)
# Checking that the array wasn't overwritten
assert_true(as_float_array(X, False) is not X)
# Checking that the new type is ok
np.testing.assert_equal(X2.dtype, np.float64)
# Here, X is of the right type, it shouldn't be modified
X = np.ones((3, 2), dtype=np.float32)
assert_true(as_float_array(X, copy=False) is X)
# Test that if X is fortran ordered it stays
X = np.asfortranarray(X)
assert_true(np.isfortran(as_float_array(X, copy=True)))
# Test the copy parameter with some matrices
matrices = [
np.matrix(np.arange(5)),
sp.csc_matrix(np.arange(5)).toarray(),
sparse_random_matrix(10, 10, density=0.10).toarray()
]
for M in matrices:
N = as_float_array(M, copy=True)
N[0, 0] = np.nan
assert_false(np.isnan(M).any())
def test_np_matrix():
# Confirm that input validation code does not return np.matrix
X = np.arange(12).reshape(3, 4)
assert_false(isinstance(as_float_array(X), np.matrix))
assert_false(isinstance(as_float_array(np.matrix(X)), np.matrix))
assert_false(isinstance(as_float_array(sp.csc_matrix(X)), np.matrix))
def test_memmap():
# Confirm that input validation code doesn't copy memory mapped arrays
asflt = lambda x: as_float_array(x, copy=False)
with NamedTemporaryFile(prefix='sklearn-test') as tmp:
M = np.memmap(tmp, shape=(10, 10), dtype=np.float32)
M[:] = 0
for f in (check_array, np.asarray, asflt):
X = f(M)
X[:] = 1
assert_array_equal(X.ravel(), M.ravel())
X[:] = 0
def test_ordering():
# Check that ordering is enforced correctly by validation utilities.
# We need to check each validation utility, because a 'copy' without
# 'order=K' will kill the ordering.
X = np.ones((10, 5))
for A in X, X.T:
for copy in (True, False):
B = check_array(A, order='C', copy=copy)
assert_true(B.flags['C_CONTIGUOUS'])
B = check_array(A, order='F', copy=copy)
assert_true(B.flags['F_CONTIGUOUS'])
if copy:
assert_false(A is B)
X = sp.csr_matrix(X)
X.data = X.data[::-1]
assert_false(X.data.flags['C_CONTIGUOUS'])
@ignore_warnings
def test_check_array():
# accept_sparse == None
# raise error on sparse inputs
X = [[1, 2], [3, 4]]
X_csr = sp.csr_matrix(X)
assert_raises(TypeError, check_array, X_csr)
# ensure_2d
assert_warns(DeprecationWarning, check_array, [0, 1, 2])
X_array = check_array([0, 1, 2])
assert_equal(X_array.ndim, 2)
X_array = check_array([0, 1, 2], ensure_2d=False)
assert_equal(X_array.ndim, 1)
# don't allow ndim > 3
X_ndim = np.arange(8).reshape(2, 2, 2)
assert_raises(ValueError, check_array, X_ndim)
check_array(X_ndim, allow_nd=True) # doesn't raise
# force_all_finite
X_inf = np.arange(4).reshape(2, 2).astype(np.float)
X_inf[0, 0] = np.inf
assert_raises(ValueError, check_array, X_inf)
check_array(X_inf, force_all_finite=False) # no raise
# nan check
X_nan = np.arange(4).reshape(2, 2).astype(np.float)
X_nan[0, 0] = np.nan
assert_raises(ValueError, check_array, X_nan)
check_array(X_inf, force_all_finite=False) # no raise
# dtype and order enforcement.
X_C = np.arange(4).reshape(2, 2).copy("C")
X_F = X_C.copy("F")
X_int = X_C.astype(np.int)
X_float = X_C.astype(np.float)
Xs = [X_C, X_F, X_int, X_float]
dtypes = [np.int32, np.int, np.float, np.float32, None, np.bool, object]
orders = ['C', 'F', None]
copys = [True, False]
for X, dtype, order, copy in product(Xs, dtypes, orders, copys):
X_checked = check_array(X, dtype=dtype, order=order, copy=copy)
if dtype is not None:
assert_equal(X_checked.dtype, dtype)
else:
assert_equal(X_checked.dtype, X.dtype)
if order == 'C':
assert_true(X_checked.flags['C_CONTIGUOUS'])
assert_false(X_checked.flags['F_CONTIGUOUS'])
elif order == 'F':
assert_true(X_checked.flags['F_CONTIGUOUS'])
assert_false(X_checked.flags['C_CONTIGUOUS'])
if copy:
assert_false(X is X_checked)
else:
# doesn't copy if it was already good
if (X.dtype == X_checked.dtype and
X_checked.flags['C_CONTIGUOUS'] == X.flags['C_CONTIGUOUS']
and X_checked.flags['F_CONTIGUOUS'] == X.flags['F_CONTIGUOUS']):
assert_true(X is X_checked)
# allowed sparse != None
X_csc = sp.csc_matrix(X_C)
X_coo = X_csc.tocoo()
X_dok = X_csc.todok()
X_int = X_csc.astype(np.int)
X_float = X_csc.astype(np.float)
Xs = [X_csc, X_coo, X_dok, X_int, X_float]
accept_sparses = [['csr', 'coo'], ['coo', 'dok']]
for X, dtype, accept_sparse, copy in product(Xs, dtypes, accept_sparses,
copys):
with warnings.catch_warnings(record=True) as w:
X_checked = check_array(X, dtype=dtype,
accept_sparse=accept_sparse, copy=copy)
if (dtype is object or sp.isspmatrix_dok(X)) and len(w):
message = str(w[0].message)
messages = ["object dtype is not supported by sparse matrices",
"Can't check dok sparse matrix for nan or inf."]
assert_true(message in messages)
else:
assert_equal(len(w), 0)
if dtype is not None:
assert_equal(X_checked.dtype, dtype)
else:
assert_equal(X_checked.dtype, X.dtype)
if X.format in accept_sparse:
# no change if allowed
assert_equal(X.format, X_checked.format)
else:
# got converted
assert_equal(X_checked.format, accept_sparse[0])
if copy:
assert_false(X is X_checked)
else:
# doesn't copy if it was already good
if (X.dtype == X_checked.dtype and X.format == X_checked.format):
assert_true(X is X_checked)
# other input formats
# convert lists to arrays
X_dense = check_array([[1, 2], [3, 4]])
assert_true(isinstance(X_dense, np.ndarray))
# raise on too deep lists
assert_raises(ValueError, check_array, X_ndim.tolist())
check_array(X_ndim.tolist(), allow_nd=True) # doesn't raise
# convert weird stuff to arrays
X_no_array = NotAnArray(X_dense)
result = check_array(X_no_array)
assert_true(isinstance(result, np.ndarray))
def test_check_array_pandas_dtype_object_conversion():
# test that data-frame like objects with dtype object
# get converted
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.object)
X_df = MockDataFrame(X)
assert_equal(check_array(X_df).dtype.kind, "f")
assert_equal(check_array(X_df, ensure_2d=False).dtype.kind, "f")
# smoke-test against dataframes with column named "dtype"
X_df.dtype = "Hans"
assert_equal(check_array(X_df, ensure_2d=False).dtype.kind, "f")
def test_check_array_dtype_stability():
# test that lists with ints don't get converted to floats
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert_equal(check_array(X).dtype.kind, "i")
assert_equal(check_array(X, ensure_2d=False).dtype.kind, "i")
def test_check_array_dtype_warning():
X_int_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
X_float64 = np.asarray(X_int_list, dtype=np.float64)
X_float32 = np.asarray(X_int_list, dtype=np.float32)
X_int64 = np.asarray(X_int_list, dtype=np.int64)
X_csr_float64 = sp.csr_matrix(X_float64)
X_csr_float32 = sp.csr_matrix(X_float32)
X_csc_float32 = sp.csc_matrix(X_float32)
X_csc_int32 = sp.csc_matrix(X_int64, dtype=np.int32)
y = [0, 0, 1]
integer_data = [X_int64, X_csc_int32]
float64_data = [X_float64, X_csr_float64]
float32_data = [X_float32, X_csr_float32, X_csc_float32]
for X in integer_data:
X_checked = assert_no_warnings(check_array, X, dtype=np.float64,
accept_sparse=True)
assert_equal(X_checked.dtype, np.float64)
X_checked = assert_warns(DataConversionWarning, check_array, X,
dtype=np.float64,
accept_sparse=True, warn_on_dtype=True)
assert_equal(X_checked.dtype, np.float64)
# Check that the warning message includes the name of the Estimator
X_checked = assert_warns_message(DataConversionWarning,
'SomeEstimator',
check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=True,
warn_on_dtype=True,
estimator='SomeEstimator')
assert_equal(X_checked.dtype, np.float64)
X_checked, y_checked = assert_warns_message(
DataConversionWarning, 'KNeighborsClassifier',
check_X_y, X, y, dtype=np.float64, accept_sparse=True,
warn_on_dtype=True, estimator=KNeighborsClassifier())
assert_equal(X_checked.dtype, np.float64)
for X in float64_data:
X_checked = assert_no_warnings(check_array, X, dtype=np.float64,
accept_sparse=True, warn_on_dtype=True)
assert_equal(X_checked.dtype, np.float64)
X_checked = assert_no_warnings(check_array, X, dtype=np.float64,
accept_sparse=True, warn_on_dtype=False)
assert_equal(X_checked.dtype, np.float64)
for X in float32_data:
X_checked = assert_no_warnings(check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=True)
assert_equal(X_checked.dtype, np.float32)
assert_true(X_checked is X)
X_checked = assert_no_warnings(check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=['csr', 'dok'],
copy=True)
assert_equal(X_checked.dtype, np.float32)
assert_false(X_checked is X)
X_checked = assert_no_warnings(check_array, X_csc_float32,
dtype=[np.float64, np.float32],
accept_sparse=['csr', 'dok'],
copy=False)
assert_equal(X_checked.dtype, np.float32)
assert_false(X_checked is X_csc_float32)
assert_equal(X_checked.format, 'csr')
def test_check_array_min_samples_and_features_messages():
# empty list is considered 2D by default:
msg = "0 feature(s) (shape=(1, 0)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_array, [[]])
# If considered a 1D collection when ensure_2d=False, then the minimum
# number of samples will break:
msg = "0 sample(s) (shape=(0,)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_array, [], ensure_2d=False)
# Invalid edge case when checking the default minimum sample of a scalar
msg = "Singleton array array(42) cannot be considered a valid collection."
assert_raise_message(TypeError, msg, check_array, 42, ensure_2d=False)
# But this works if the input data is forced to look like a 2D array with
# one sample and one feature:
X_checked = assert_warns(DeprecationWarning, check_array, [42],
ensure_2d=True)
assert_array_equal(np.array([[42]]), X_checked)
# Simulate a model that would need at least 2 samples to be well defined
X = np.ones((1, 10))
y = np.ones(1)
msg = "1 sample(s) (shape=(1, 10)) while a minimum of 2 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_samples=2)
# The same message is raised if the data has 2 dimensions even if this is
# not mandatory
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_samples=2, ensure_2d=False)
# Simulate a model that would require at least 3 features (e.g. SelectKBest
# with k=3)
X = np.ones((10, 2))
y = np.ones(2)
msg = "2 feature(s) (shape=(10, 2)) while a minimum of 3 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_features=3)
# Only the feature check is enabled whenever the number of dimensions is 2
# even if allow_nd is enabled:
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_features=3, allow_nd=True)
# Simulate a case where a pipeline stage has trimmed all the features of a
# 2D dataset.
X = np.empty(0).reshape(10, 0)
y = np.ones(10)
msg = "0 feature(s) (shape=(10, 0)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y)
# nd-data is not checked for any minimum number of features by default:
X = np.ones((10, 0, 28, 28))
y = np.ones(10)
X_checked, y_checked = check_X_y(X, y, allow_nd=True)
assert_array_equal(X, X_checked)
assert_array_equal(y, y_checked)
def test_has_fit_parameter():
assert_false(has_fit_parameter(KNeighborsClassifier, "sample_weight"))
assert_true(has_fit_parameter(RandomForestRegressor, "sample_weight"))
assert_true(has_fit_parameter(SVR, "sample_weight"))
assert_true(has_fit_parameter(SVR(), "sample_weight"))
def test_check_symmetric():
arr_sym = np.array([[0, 1], [1, 2]])
arr_bad = np.ones(2)
arr_asym = np.array([[0, 2], [0, 2]])
test_arrays = {'dense': arr_asym,
'dok': sp.dok_matrix(arr_asym),
'csr': sp.csr_matrix(arr_asym),
'csc': sp.csc_matrix(arr_asym),
'coo': sp.coo_matrix(arr_asym),
'lil': sp.lil_matrix(arr_asym),
'bsr': sp.bsr_matrix(arr_asym)}
# check error for bad inputs
assert_raises(ValueError, check_symmetric, arr_bad)
# check that asymmetric arrays are properly symmetrized
for arr_format, arr in test_arrays.items():
# Check for warnings and errors
assert_warns(UserWarning, check_symmetric, arr)
assert_raises(ValueError, check_symmetric, arr, raise_exception=True)
output = check_symmetric(arr, raise_warning=False)
if sp.issparse(output):
assert_equal(output.format, arr_format)
assert_array_equal(output.toarray(), arr_sym)
else:
assert_array_equal(output, arr_sym)
def test_check_is_fitted():
# Check is ValueError raised when non estimator instance passed
assert_raises(ValueError, check_is_fitted, ARDRegression, "coef_")
assert_raises(TypeError, check_is_fitted, "SVR", "support_")
ard = ARDRegression()
svr = SVR()
try:
assert_raises(NotFittedError, check_is_fitted, ard, "coef_")
assert_raises(NotFittedError, check_is_fitted, svr, "support_")
except ValueError:
assert False, "check_is_fitted failed with ValueError"
# NotFittedError is a subclass of both ValueError and AttributeError
try:
check_is_fitted(ard, "coef_", "Random message %(name)s, %(name)s")
except ValueError as e:
assert_equal(str(e), "Random message ARDRegression, ARDRegression")
try:
check_is_fitted(svr, "support_", "Another message %(name)s, %(name)s")
except AttributeError as e:
assert_equal(str(e), "Another message SVR, SVR")
ard.fit(*make_blobs())
svr.fit(*make_blobs())
assert_equal(None, check_is_fitted(ard, "coef_"))
assert_equal(None, check_is_fitted(svr, "support_"))
def test_check_consistent_length():
check_consistent_length([1], [2], [3], [4], [5])
check_consistent_length([[1, 2], [[1, 2]]], [1, 2], ['a', 'b'])
check_consistent_length([1], (2,), np.array([3]), sp.csr_matrix((1, 2)))
assert_raises_regexp(ValueError, 'inconsistent numbers of samples',
check_consistent_length, [1, 2], [1])
assert_raises_regexp(TypeError, r'got <\w+ \'int\'>',
check_consistent_length, [1, 2], 1)
assert_raises_regexp(TypeError, r'got <\w+ \'object\'>',
check_consistent_length, [1, 2], object())
assert_raises(TypeError, check_consistent_length, [1, 2], np.array(1))
# Despite ensembles having __len__ they must raise TypeError
assert_raises_regexp(TypeError, 'estimator', check_consistent_length,
[1, 2], RandomForestRegressor())
# XXX: We should have a test with a string, but what is correct behaviour?
| bsd-3-clause |
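The `check_consistent_length` behavior exercised at the end of the test file above can be sketched in a few lines; the real scikit-learn utility also rejects scalars and estimators via a `_num_samples` helper, so this stripped-down version (an assumption of mine, not the library code) only captures the length-agreement check:

```python
def check_consistent_length(*arrays):
    # Minimal sketch: every argument must expose len() and all lengths must agree.
    lengths = [len(a) for a in arrays]
    if len(set(lengths)) > 1:
        raise ValueError(
            "Found arrays with inconsistent numbers of samples: %r" % lengths)

check_consistent_length([1], (2,), [3])  # same length everywhere: passes silently
try:
    check_consistent_length([1, 2], [1])
except ValueError as e:
    print(e)  # reports the inconsistent lengths [2, 1]
```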
yaukwankiu/armor | tests/modifiedMexicanHatTest15_march2014_sigmaPreprocessing16.py | 1 | 7833 |
# modified mexican hat wavelet test.py
# spectral analysis for RADAR and WRF patterns
# NO plotting - just saving the results: LOG-response spectra for each sigma and max-LOG response numerical spectra
# pre-convolved with a gaussian filter whose sigma is set by sigmaPreprocessing below (16 in this run)
import os, shutil
import time, datetime
import pickle
import numpy as np
from scipy import signal, ndimage
import matplotlib.pyplot as plt
from armor import defaultParameters as dp
from armor import pattern
from armor import objects4 as ob
#from armor import misc as ms
dbz = pattern.DBZ
kongreywrf = ob.kongreywrf
kongreywrf.fix()
kongrey = ob.kongrey
monsoon = ob.monsoon
monsoon.list= [v for v in monsoon.list if '20120612' in v.dataTime] #fix
march2014 = ob.march2014
march2014wrf11 = ob.march2014wrf11
march2014wrf12 = ob.march2014wrf12
march2014wrf = ob.march2014wrf
march2014wrf.fix()
################################################################################
# hack
#kongrey.list = [v for v in kongrey.list if v.dataTime>="20130828.2320"]
################################################################################
# parameters
sigmaPreprocessing = 16 # sigma for preprocessing, 2014-05-15
testName = "modifiedMexicanHatTest15_march2014_sigmaPreprocessing" + str(sigmaPreprocessing)
sigmas = [1, 2, 4, 5, 8 ,10 ,16, 20, 32, 40, 64, 80, 128, 160, 256,]
dbzstreams = [march2014]
sigmaPower=0
scaleSpacePower=0 #2014-05-14
testScriptsFolder = dp.root + 'python/armor/tests/'
timeString = str(int(time.time()))
outputFolder = dp.root + 'labLogs/%d-%d-%d-%s/' % \
(time.localtime().tm_year, time.localtime().tm_mon, time.localtime().tm_mday, testName)
if not os.path.exists(outputFolder):
os.makedirs(outputFolder)
shutil.copyfile(testScriptsFolder+testName+".py", outputFolder+ timeString + testName+".py")
# end parameters
################################################################################
summaryFile = open(outputFolder + timeString + "summary.txt", 'a')
for ds in dbzstreams:
summaryFile.write("\n===============================================================\n\n\n")
streamMean = 0.
dbzCount = 0
#hack
#streamMean = np.array([135992.57472004235, 47133.59049120619, 16685.039217734946, 11814.043851969862, 5621.567482638702, 3943.2774923729303, 1920.246102887001, 1399.7855335686243, 760.055614122099, 575.3654495432361, 322.26668666562375, 243.49842951291757, 120.54647935045809, 79.05741086463254, 26.38971066782135])
#dbzCount = 140
for a in ds:
print "-------------------------------------------------"
print testName
print
print a.name
a.load()
a.setThreshold(0)
a.saveImage(imagePath=outputFolder+a.name+".png")
L = []
a.responseImages = [] #2014-05-02
#for sigma in [1, 2, 4, 8 ,16, 32, 64, 128, 256, 512]:
for sigma in sigmas:
print "sigma:", sigma
a.load()
a.setThreshold(0)
arr0 = a.matrix
#####################################################################
arr0 = ndimage.filters.gaussian_filter(arr0, sigma=sigmaPreprocessing) # <-- 2014-05-15
#####################################################################
#arr1 = signal.convolve2d(arr0, mask_i, mode='same', boundary='fill')
#arr1 = ndimage.filters.gaussian_laplace(arr0, sigma=sigma, mode="constant", cval=0.0) #2014-05-07
#arr1 = ndimage.filters.gaussian_laplace(arr0, sigma=sigma, mode="constant", cval=0.0) * sigma**2 #2014-04-29
arr1 = ndimage.filters.gaussian_laplace(arr0, sigma=sigma, mode="constant", cval=0.0) * sigma**scaleSpacePower #2014-05-14
a1 = dbz(matrix=arr1.real, name=a.name + "_" + testName + "_sigma" + str(sigma))
L.append({ 'sigma' : sigma,
'a1' : a1,
'abssum1': abs(a1.matrix).sum(),
'sum1' : a1.matrix.sum(),
})
print "abs sum", abs(a1.matrix).sum()
#a1.show()
#a2.show()
plt.close()
#a1.histogram(display=False, outputPath=outputFolder+a1.name+"_histogram.png")
###############################################################################
# computing the spectrum, i.e. sigma for which the LOG has max response
# 2014-05-02
a.responseImages.append({'sigma' : sigma,
'matrix' : arr1 * sigma**2,
})
pickle.dump(a.responseImages, open(outputFolder+a.name+"responseImagesList.pydump",'w'))
a_LOGspec = dbz(name= a.name + "Laplacian-of-Gaussian_numerical_spectrum",
imagePath=outputFolder+a1.name+"_LOGspec.png",
outputPath = outputFolder+a1.name+"_LOGspec.dat",
cmap = 'jet',
)
a.responseImages = np.dstack([v['matrix'] for v in a.responseImages])
#print 'shape:', a.responseImages.shape #debug
a.responseMax = a.responseImages.max(axis=2) # the deepest dimension
a_LOGspec.matrix = np.zeros(a.matrix.shape)
for count, sigma in enumerate(sigmas):
a_LOGspec.matrix += sigma * (a.responseMax == a.responseImages[:,:,count])
a_LOGspec.vmin = a_LOGspec.matrix.min()
a_LOGspec.vmax = a_LOGspec.matrix.max()
print "saving to:", a_LOGspec.imagePath
#a_LOGspec.saveImage()
print a_LOGspec.outputPath
#a_LOGspec.saveMatrix()
#a_LOGspec.histogram(display=False, outputPath=outputFolder+a1.name+"_LOGspec_histogram.png")
pickle.dump(a_LOGspec, open(outputFolder+ a_LOGspec.name + ".pydump","w"))
# end computing the sigma for which the LOG has max response
# 2014-05-02
##############################################################################
#pickle.dump(L, open(outputFolder+ a.name +'_test_results.pydump','w')) # no need to dump if test is easy
sigmas = np.array([v['sigma'] for v in L])
y1 = [v['abssum1'] for v in L]
plt.close()
plt.plot(sigmas,y1)
plt.title(a1.name+ '\n absolute values against sigma')
plt.savefig(outputFolder+a1.name+"-spectrum-histogram.png")
plt.close()
# now update the mean
streamMeanUpdate = np.array([v['abssum1'] for v in L])
dbzCount += 1
streamMean = 1.* ((streamMean*(dbzCount -1)) + streamMeanUpdate ) / dbzCount
print "Stream Count and Mean so far:", dbzCount, streamMean
# now save the mean and the plot
summaryText = '\n---------------------------------------\n'
summaryText += str(int(time.time())) + '\n'
summaryText += "dbzStream Name: " + ds.name + '\n'
summaryText += "dbzCount:\t" + str(dbzCount) + '\n'
summaryText += "sigma=\t\t" + str(sigmas.tolist()) + '\n'
summaryText += "streamMean=\t" + str(streamMean.tolist()) +'\n'
print summaryText
print "saving..."
# release the memory
a.matrix = np.array([0])
summaryFile.write(summaryText)
plt.close()
plt.plot(sigmas, streamMean* (sigmas**sigmaPower))
plt.title(ds.name + '- average laplacian-of-gaussian numerical spectrum\n' +\
'for ' +str(dbzCount) + ' DBZ patterns\n' +\
'suppressed by a factor of sigma^' + str(sigmaPower) )
plt.savefig(outputFolder + ds.name + "_average_LoG_numerical_spectrum.png")
plt.close()
summaryFile.close()
| cc0-1.0 |
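The `a_LOGspec` construction above — stacking the per-sigma response images with `np.dstack` and keeping, at each pixel, the sigma whose response is largest — is per-pixel scale selection. A stdlib-only sketch with made-up response values (the numbers and the three sigmas are purely illustrative):

```python
# Per-pixel response values at three scales (sigma = 1, 2, 4) on a 2x2 grid.
sigmas = [1, 2, 4]
responses = {
    1: [[0.9, 0.1], [0.2, 0.3]],
    2: [[0.5, 0.8], [0.1, 0.9]],
    4: [[0.1, 0.2], [0.7, 0.4]],
}

def best_sigma(responses, sigmas, i, j):
    # Scale selection: the sigma whose LoG response is largest at pixel (i, j).
    return max(sigmas, key=lambda s: responses[s][i][j])

spectrum = [[best_sigma(responses, sigmas, i, j) for j in range(2)]
            for i in range(2)]
print(spectrum)  # [[1, 2], [4, 2]]
```

Note this picks exactly one sigma per pixel; the test script's sum of boolean masks would instead add the sigmas of all tied maxima, a corner case ignored here.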
awalls-cx18/gnuradio | gr-filter/examples/interpolate.py | 7 | 8811 |
#!/usr/bin/env python
#
# Copyright 2009,2012,2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
from gnuradio import gr
from gnuradio import blocks
from gnuradio import filter
import sys, time
import numpy
try:
from gnuradio import analog
except ImportError:
sys.stderr.write("Error: Program requires gr-analog.\n")
sys.exit(1)
try:
import pylab
from pylab import mlab
except ImportError:
sys.stderr.write("Error: Program requires matplotlib (see: matplotlib.sourceforge.net).\n")
sys.exit(1)
class pfb_top_block(gr.top_block):
def __init__(self):
gr.top_block.__init__(self)
self._N = 100000 # number of samples to use
self._fs = 2000 # initial sampling rate
self._interp = 5 # Interpolation rate for PFB interpolator
self._ainterp = 5.5 # Resampling rate for the PFB arbitrary resampler
# Frequencies of the signals we construct
freq1 = 100
freq2 = 200
# Create a set of taps for the PFB interpolator
# This is based on the post-interpolation sample rate
self._taps = filter.firdes.low_pass_2(self._interp,
self._interp*self._fs,
freq2+50, 50,
attenuation_dB=120,
window=filter.firdes.WIN_BLACKMAN_hARRIS)
# Create a set of taps for the PFB arbitrary resampler
# The filter size is the number of filters in the filterbank; 32 will give very low side-lobes,
# and larger numbers will reduce these even farther
# The taps in this filter are based on a sampling rate of the filter size since it acts
# internally as an interpolator.
flt_size = 32
self._taps2 = filter.firdes.low_pass_2(flt_size,
flt_size*self._fs,
freq2+50, 150,
attenuation_dB=120,
window=filter.firdes.WIN_BLACKMAN_hARRIS)
# Calculate the number of taps per channel for our own information
tpc = numpy.ceil(float(len(self._taps)) / float(self._interp))
print("Number of taps: ", len(self._taps))
print("Number of filters: ", self._interp)
print("Taps per channel: ", tpc)
# Create a couple of signals at different frequencies
self.signal1 = analog.sig_source_c(self._fs, analog.GR_SIN_WAVE, freq1, 0.5)
self.signal2 = analog.sig_source_c(self._fs, analog.GR_SIN_WAVE, freq2, 0.5)
self.signal = blocks.add_cc()
self.head = blocks.head(gr.sizeof_gr_complex, self._N)
# Construct the PFB interpolator filter
self.pfb = filter.pfb.interpolator_ccf(self._interp, self._taps)
# Construct the PFB arbitrary resampler filter
self.pfb_ar = filter.pfb.arb_resampler_ccf(self._ainterp, self._taps2, flt_size)
self.snk_i = blocks.vector_sink_c()
#self.pfb_ar.pfb.print_taps()
#self.pfb.pfb.print_taps()
# Connect the blocks
self.connect(self.signal1, self.head, (self.signal,0))
self.connect(self.signal2, (self.signal,1))
self.connect(self.signal, self.pfb)
self.connect(self.signal, self.pfb_ar)
self.connect(self.signal, self.snk_i)
# Create the sink for the interpolated signals
self.snk1 = blocks.vector_sink_c()
self.snk2 = blocks.vector_sink_c()
self.connect(self.pfb, self.snk1)
self.connect(self.pfb_ar, self.snk2)
def main():
tb = pfb_top_block()
tstart = time.time()
tb.run()
tend = time.time()
print("Run time: %f" % (tend - tstart))
if 1:
fig1 = pylab.figure(1, figsize=(12,10), facecolor="w")
fig2 = pylab.figure(2, figsize=(12,10), facecolor="w")
fig3 = pylab.figure(3, figsize=(12,10), facecolor="w")
Ns = 10000
Ne = 10000
fftlen = 8192
winfunc = numpy.blackman
# Plot input signal
fs = tb._fs
d = tb.snk_i.data()[Ns:Ns+Ne]
sp1_f = fig1.add_subplot(2, 1, 1)
X,freq = mlab.psd(d, NFFT=fftlen, noverlap=fftlen / 4, Fs=fs,
window = lambda d: d*winfunc(fftlen),
scale_by_freq=True)
X_in = 10.0*numpy.log10(abs(numpy.fft.fftshift(X)))
f_in = numpy.arange(-fs / 2.0, fs / 2.0, fs / float(X_in.size))
p1_f = sp1_f.plot(f_in, X_in, "b")
sp1_f.set_xlim([min(f_in), max(f_in)+1])
sp1_f.set_ylim([-200.0, 50.0])
sp1_f.set_title("Input Signal", weight="bold")
sp1_f.set_xlabel("Frequency (Hz)")
sp1_f.set_ylabel("Power (dBW)")
Ts = 1.0 / fs
Tmax = len(d)*Ts
t_in = numpy.arange(0, Tmax, Ts)
x_in = numpy.array(d)
sp1_t = fig1.add_subplot(2, 1, 2)
p1_t = sp1_t.plot(t_in, x_in.real, "b-o")
#p1_t = sp1_t.plot(t_in, x_in.imag, "r-o")
sp1_t.set_ylim([-2.5, 2.5])
sp1_t.set_title("Input Signal", weight="bold")
sp1_t.set_xlabel("Time (s)")
sp1_t.set_ylabel("Amplitude")
# Plot output of PFB interpolator
fs_int = tb._fs*tb._interp
sp2_f = fig2.add_subplot(2, 1, 1)
d = tb.snk1.data()[Ns:Ns+(tb._interp*Ne)]
        X,freq = mlab.psd(d, NFFT=fftlen, noverlap=fftlen / 4, Fs=fs_int,
window = lambda d: d*winfunc(fftlen),
scale_by_freq=True)
X_o = 10.0*numpy.log10(abs(numpy.fft.fftshift(X)))
f_o = numpy.arange(-fs_int / 2.0, fs_int / 2.0, fs_int / float(X_o.size))
p2_f = sp2_f.plot(f_o, X_o, "b")
sp2_f.set_xlim([min(f_o), max(f_o)+1])
sp2_f.set_ylim([-200.0, 50.0])
sp2_f.set_title("Output Signal from PFB Interpolator", weight="bold")
sp2_f.set_xlabel("Frequency (Hz)")
sp2_f.set_ylabel("Power (dBW)")
Ts_int = 1.0 / fs_int
Tmax = len(d)*Ts_int
t_o = numpy.arange(0, Tmax, Ts_int)
x_o1 = numpy.array(d)
sp2_t = fig2.add_subplot(2, 1, 2)
p2_t = sp2_t.plot(t_o, x_o1.real, "b-o")
#p2_t = sp2_t.plot(t_o, x_o.imag, "r-o")
sp2_t.set_ylim([-2.5, 2.5])
sp2_t.set_title("Output Signal from PFB Interpolator", weight="bold")
sp2_t.set_xlabel("Time (s)")
sp2_t.set_ylabel("Amplitude")
# Plot output of PFB arbitrary resampler
fs_aint = tb._fs * tb._ainterp
sp3_f = fig3.add_subplot(2, 1, 1)
d = tb.snk2.data()[Ns:Ns+(tb._interp*Ne)]
        X,freq = mlab.psd(d, NFFT=fftlen, noverlap=fftlen / 4, Fs=fs_aint,
window = lambda d: d*winfunc(fftlen),
scale_by_freq=True)
X_o = 10.0*numpy.log10(abs(numpy.fft.fftshift(X)))
f_o = numpy.arange(-fs_aint / 2.0, fs_aint / 2.0, fs_aint / float(X_o.size))
p3_f = sp3_f.plot(f_o, X_o, "b")
sp3_f.set_xlim([min(f_o), max(f_o)+1])
sp3_f.set_ylim([-200.0, 50.0])
sp3_f.set_title("Output Signal from PFB Arbitrary Resampler", weight="bold")
sp3_f.set_xlabel("Frequency (Hz)")
sp3_f.set_ylabel("Power (dBW)")
Ts_aint = 1.0 / fs_aint
Tmax = len(d)*Ts_aint
t_o = numpy.arange(0, Tmax, Ts_aint)
x_o2 = numpy.array(d)
sp3_f = fig3.add_subplot(2, 1, 2)
p3_f = sp3_f.plot(t_o, x_o2.real, "b-o")
p3_f = sp3_f.plot(t_o, x_o1.real, "m-o")
#p3_f = sp3_f.plot(t_o, x_o2.imag, "r-o")
sp3_f.set_ylim([-2.5, 2.5])
sp3_f.set_title("Output Signal from PFB Arbitrary Resampler", weight="bold")
sp3_f.set_xlabel("Time (s)")
sp3_f.set_ylabel("Amplitude")
pylab.show()
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass
| gpl-3.0 |
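The plotting code above turns an `mlab.psd` estimate into a dB spectrum with `10.0*numpy.log10(abs(numpy.fft.fftshift(X)))`. A rough pure-NumPy stand-in for that windowed-periodogram-plus-shift step (a simplified sketch, not mlab's exact estimator):

```python
import numpy as np

def periodogram_db(x, fs):
    """Blackman-windowed periodogram of x, shifted so DC sits in the
    middle of the axis, returned in dB (simplified mlab.psd stand-in)."""
    n = len(x)
    win = np.blackman(n)
    X = np.abs(np.fft.fftshift(np.fft.fft(x * win))) ** 2 / (fs * np.sum(win ** 2))
    f = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))
    return f, 10.0 * np.log10(X + 1e-20)  # offset avoids log(0)

fs = 2000.0                       # initial sample rate used in the example
t = np.arange(2048) / fs
x = np.cos(2 * np.pi * 100.0 * t)  # a 100 Hz tone, like signal1 above
f, Pdb = periodogram_db(x, fs)
peak_freq = abs(f[np.argmax(Pdb)])
```

The spectral peak lands within one FFT bin (fs/n ≈ 1 Hz here) of the tone frequency.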
vybstat/scikit-learn | examples/cluster/plot_adjusted_for_chance_measures.py | 286 | 4353 | """
==========================================================
Adjustment for chance in clustering performance evaluation
==========================================================
The following plots demonstrate the impact of the number of clusters and
number of samples on various clustering performance evaluation metrics.
Non-adjusted measures such as the V-Measure show a dependency between
the number of clusters and the number of samples: the mean V-Measure
of random labeling increases significantly as the number of clusters is
closer to the total number of samples used to compute the measure.
Adjusted-for-chance measures such as ARI display some random variations
centered around a mean score of 0.0 for any number of samples and
clusters.
Only adjusted measures can hence safely be used as a consensus index
to evaluate the average stability of clustering algorithms for a given
value of k on various overlapping sub-samples of the dataset.
"""
print(__doc__)
# Author: Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from time import time
from sklearn import metrics
def uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=None, n_runs=5, seed=42):
"""Compute score for 2 random uniform cluster labelings.
    Both random labelings have the same number of clusters for each possible
    value in ``n_clusters_range``.
When fixed_n_classes is not None the first labeling is considered a ground
truth class assignment with fixed number of classes.
"""
random_labels = np.random.RandomState(seed).random_integers
scores = np.zeros((len(n_clusters_range), n_runs))
if fixed_n_classes is not None:
labels_a = random_labels(low=0, high=fixed_n_classes - 1,
size=n_samples)
for i, k in enumerate(n_clusters_range):
for j in range(n_runs):
if fixed_n_classes is None:
labels_a = random_labels(low=0, high=k - 1, size=n_samples)
labels_b = random_labels(low=0, high=k - 1, size=n_samples)
scores[i, j] = score_func(labels_a, labels_b)
return scores
score_funcs = [
metrics.adjusted_rand_score,
metrics.v_measure_score,
metrics.adjusted_mutual_info_score,
metrics.mutual_info_score,
]
# 2 independent random clusterings with equal cluster number
n_samples = 100
n_clusters_range = np.linspace(2, n_samples, 10).astype(np.int)
plt.figure(1)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, np.median(scores, axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for 2 random uniform labelings\n"
"with equal number of clusters")
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.legend(plots, names)
plt.ylim(ymin=-0.05, ymax=1.05)
# Random labeling with varying n_clusters against ground class labels
# with fixed number of clusters
n_samples = 1000
n_clusters_range = np.linspace(2, 100, 10).astype(np.int)
n_classes = 10
plt.figure(2)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=n_classes)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, scores.mean(axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for random uniform labeling\n"
"against reference assignment with %d classes" % n_classes)
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.ylim(ymin=-0.05, ymax=1.05)
plt.legend(plots, names)
plt.show()
| bsd-3-clause |
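The adjusted Rand index plotted above subtracts the expected score of a chance labeling and rescales by the maximum, which is why it stays near 0.0 for random labelings. A self-contained sketch of that pair-counting formula (illustrative, not sklearn's implementation):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """ARI = (Index - Expected) / (Max - Expected), computed from the
    pairwise contingency table of the two labelings."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    row_sums = Counter(labels_a)
    col_sums = Counter(labels_b)
    index = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in row_sums.values())
    sum_b = sum(comb(c, 2) for c in col_sums.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2.0
    return (index - expected) / (max_index - expected)

# Relabeling the same partition is still a perfect match:
perfect = adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```

Unlike the raw Rand index, this score can go negative when two labelings agree less than chance would predict.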
rupakc/Kaggle-Compendium | San Francisco Salaries/salary-baseline.py | 1 | 2919 | import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import ElasticNet
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn import metrics
def get_mlp_regressor(num_hidden_units=51):
mlp = MLPRegressor(hidden_layer_sizes=num_hidden_units)
return [mlp],['Multi-Layer Perceptron']
def get_ensemble_models():
rf = RandomForestRegressor(n_estimators=51,min_samples_leaf=5,min_samples_split=3,random_state=42)
bag = BaggingRegressor(n_estimators=51,random_state=42)
extra = ExtraTreesRegressor(n_estimators=71,random_state=42)
ada = AdaBoostRegressor(random_state=42)
grad = GradientBoostingRegressor(n_estimators=101,random_state=42)
classifier_list = [rf,bag,extra,ada,grad]
classifier_name_list = ['Random Forests','Bagging','Extra Trees','AdaBoost','Gradient Boost']
return classifier_list, classifier_name_list
def get_linear_model():
elastic_net = ElasticNet()
return [elastic_net],['Elastic Net']
def print_evaluation_metrics(trained_model,trained_model_name,X_test,y_test):
print '--------- For Model : ', trained_model_name ,' ---------\n'
predicted_values = trained_model.predict(X_test)
print "Mean Absolute Error : ", metrics.mean_absolute_error(y_test,predicted_values)
print "Median Absolute Error : ", metrics.median_absolute_error(y_test,predicted_values)
print "Mean Squared Error : ", metrics.mean_squared_error(y_test,predicted_values)
print "R2 Score : ", metrics.r2_score(y_test,predicted_values)
print "---------------------------------------\n"
filename = 'Salaries.csv'
salary_frame = pd.read_csv(filename)
salary_frame = salary_frame.head(148500)
label_encoder = LabelEncoder()
del salary_frame['Id']
del salary_frame['EmployeeName']
del salary_frame['Benefits']
del salary_frame['Notes']
del salary_frame['Agency']
del salary_frame['Status']
salary_frame.dropna(inplace=True)
predicted_values = list(salary_frame['TotalPay'].values)
del salary_frame['TotalPay']
del salary_frame['TotalPayBenefits']
salary_frame['JobTitle'] = label_encoder.fit_transform(salary_frame['JobTitle'].values)
X_train,X_test,y_train,y_test = train_test_split(salary_frame.values,predicted_values,test_size=0.1,random_state=42)
classifier_list, classifier_name_list = get_ensemble_models()
for classifier,classifier_name in zip(classifier_list,classifier_name_list):
classifier.fit(X_train,y_train)
print_evaluation_metrics(classifier,classifier_name,X_test,y_test)
| mit |
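`print_evaluation_metrics` above delegates to sklearn's regression metrics; the underlying formulas are one-liners. A pure-Python sketch of the mean and median absolute errors (helper names are illustrative, not sklearn's code):

```python
def mean_absolute_error(y_true, y_pred):
    """Average of |y_true - y_pred| over all samples."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def median_absolute_error(y_true, y_pred):
    """Median of the absolute residuals; robust to outlier predictions."""
    resid = sorted(abs(t - p) for t, p in zip(y_true, y_pred))
    mid = len(resid) // 2
    if len(resid) % 2:
        return resid[mid]
    return 0.5 * (resid[mid - 1] + resid[mid])

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
mae = mean_absolute_error(y_true, y_pred)
medae = median_absolute_error(y_true, y_pred)
```

On salary data with a long upper tail, the median variant is the more stable headline number.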
MikeDT/CNN_2_BBN | Synthetic_Data_Creator.py | 1 | 12328 | # -*- coding: utf-8 -*-
"""
Created on Mon Sep 11 13:31:40 2017
@author: Mike
# create datasets
# tune complexity
# have a variety of methods/types
# then create in bulk
# save in a single df
# pickle the edges and the df
# adjust the trainer to split out from that style input (importPrepX from CNN_2_BBN.py)
#
# then retroactively run the analysis (that)
exponential
log normal
student's T
chi squared
gamma
beta
weibull
https://blog.cloudera.com/blog/2015/12/common-probability-distributions-the-data-scientists-crib-sheet/
http://www.math.wm.edu/~leemis/chart/UDR/UDR.html
more mixture types (e.g. mix all in one, mix one by one, mix random amount,
mix more at start of dist, less at end)
"""
import pandas as pd
import numpy as np
import inspect
import networkx as nx
import random
from itertools import islice, chain
from random import randint
class Synthetic_Data_Creator():
    ''' A synthetic data creator, modelled on propagating distributions over a random DAG'''
def __init__(self):
''' Initialises the class, with the assumptions detailed in the parameters below
Arguments:
n/a
Returns(to self) :
df -- empty dataframe
            edgesTrue -- empty edgelist (assumed to be nx/pgmpy compatible)
distDict -- the distribution ratios
distParam -- the parameters used to model the data distributions
colNames -- assumes the column names are A-Z (in the future this can be used to
override input dfs)
'''
self.df = pd.DataFrame()
self.edgesTrue = []
self.distDict = {"Normal":1,"Poisson":1,"Uniform":1,
"Gamma":1,"Beta":1,"Exponential":1,
"ChiSquare":1}
self.distParam = {"perNodeConMin":1,"perNodeConMax":4, "minCat":8, "maxCat":15,
"mean":10, "stDev" : 5, # normal
"mu" : 4}
self.colNames = ['A','B','C','D','E','F','G','H','I','J','K','L','M',
'N','O','P','Q','R','S','T','U','V','W','X','Y','Z']
def overrideDistDict(self,*,ratioNorm=None,ratioPois=None,ratioUni=None):
''' Allows overriding of the distribution mixes (and extension, in the future for more dist types)
Arguments:
n/a
Returns(to self) :
distDict -- the distribution ratios
'''
        # Apply overrides explicitly: inspect.getargvalues() does not report
        # keyword-only arguments, and mapping the keyword names onto the
        # distDict keys avoids adding spurious keys such as 'ratioNorm'
        for name, value in (('Normal', ratioNorm), ('Poisson', ratioPois),
                            ('Uniform', ratioUni)):
            if value is not None:
                self.distDict[name] = value
def overrideDistParam(self,*,noiseLevel=None,perNodeConMin=None,perNodeConMax=None, #common
minCat=None, maxCat=None, # common
mean=None, stDev = None, # normal
mu = None):
''' Allows overriding of the distribution parameters (and extension, in the future for more dist types)
Arguments:
n/a
Returns(to self) :
distParam -- the distribution ratios
'''
        # Apply overrides explicitly: inspect.getargvalues() does not report
        # keyword-only arguments, so the parameters are collected by hand
        overrides = {'noiseLevel': noiseLevel, 'perNodeConMin': perNodeConMin,
                     'perNodeConMax': perNodeConMax, 'minCat': minCat,
                     'maxCat': maxCat, 'mean': mean, 'stDev': stDev, 'mu': mu}
        for param, value in overrides.items():
            if value is not None:
                self.distParam[param] = value
def mixDist(self,*,mixType,mixValues):
''' To be completed, but will adjust the mixing approach
Arguments:
n/a
Returns(to self) :
n/a
'''
None
def createRandomDist(self,*,size, noise = True):
        ''' Returns a random distribution dictionary, detailing the native distribution of each node
Arguments:
size -- the size of the random distribution
noise -- toggles the inclusion of noise (default = True)
Returns :
randomDistDict -- a dictionary with one entry per possible distribution
'''
randomDistDict = {}
randomParamFactor = np.random.randint(low=-3,high=4)
#Normal distribution
scale = self.distParam["stDev"]+randomParamFactor
normDist = np.random.normal(loc=0.0, scale=scale, size=size)
normDist = normDist - min(normDist) # shifts to smallest being 0
normDist = normDist / (max(normDist/self.distParam["maxCat"])) # constrains the dist
randomDistDict["Normal"] = normDist
#Poisson distribution
randomDistDict["Poisson"] = np.random.poisson(self.distParam["mu"]+randomParamFactor, size)
#Uniform distribution
randomDistDict["Uniform"] = np.random.uniform(low=1,
high= np.random.randint(low=self.distParam["minCat"],
high=self.distParam["maxCat"])
+ randomParamFactor, size=size)
#Gamma distribution
randomDistDict["Gamma"] = np.random.gamma(shape=2,scale=self.distParam["stDev"]
+randomParamFactor,size=size)
#Beta distribution
randomDistDict["Beta"] = np.random.beta(a=2,b=self.distParam["stDev"]
+randomParamFactor,size=size)
        #Chi-square distribution
randomDistDict["ChiSquare"] = np.random.chisquare(2,size=size)
        #Exponential distribution
randomDistDict["Exponential"] = np.random.exponential(scale=self.distParam["stDev"]
+randomParamFactor,size=size)
return randomDistDict
def createData(self,*,dimensions,size=10000):
''' Creates a randomised distribution, i.e. the primary distribution
Arguments:
dimensions -- the number of df columns/dimensions/nodes in the PGM
size -- the size of the dataframe (no default)
Returns (to self):
            df -- a randomly populated dataframe, with a variety of different distribution types
'''
#instantiate a dataframe
self.df = pd.DataFrame(np.random.randint(low=self.distParam["minCat"],
high=self.distParam["maxCat"],
size=(size, dimensions)),
columns=self.colNames[:dimensions])
self.distTypes = {}
#select a primary distribution per column at (weighted) random
probDistArr = []
totalDenom = sum(self.distDict.values())
for key in self.distDict:
probDistArr.append(self.distDict[key]/totalDenom)
probDistArr = np.asarray(probDistArr)
for dimension in range(0,dimensions):
choice = np.random.choice(np.asarray(list(self.distDict.keys())), 1, p=probDistArr)
self.distTypes[self.colNames[dimension]] = choice[0]
for column in self.df.columns:
dataDists = self.createRandomDist(size=size)
self.df[column] = dataDists[self.distTypes[column]]
self.df = self.df.fillna(0)
self.df = self.df.replace(np.inf, 0)
self.df = self.df.astype(int)
def createPGM(self,model="Default"):
''' creates a random PGM from the possible approaches (currently only 1... :$)
Arguments:
model -- the PGM creation approach
Returns (to self):
n/a
'''
if model == "Default":
self.createDefPGM()
def randomChunk(self,lst, min_chunk=2, max_chunk=3):
''' Randomly splits a list
Arguments:
lst -- the list to be chunked up
            min_chunk -- minimum chunk size (default=2)
max_chunk -- maximum chunk size (default=3)
Returns:
n/a
'''
it = iter(lst)
while True:
nxt = list(islice(it,randint(min_chunk,max_chunk)))
if nxt:
yield nxt
else:
break
def run(self,*,dimensions=6,size=10000, sklearnMetric = False):
''' Creates a PGM and DF from scratch, using the default values
Arguments:
dimensions -- the number of dimensions
size -- the length of the dataframe
Returns:
df -- the underlying dataframe, post PGM creation
            G -- the networkX-compatible Graph object
'''
#self.Synthetic_Data_Creator()
self.createData(dimensions=dimensions,size=size)
self.createPGM()
if sklearnMetric == False:
return self.df, self.G
else:
return self.df, self.G, self.sklearnMet
def createDefPGM(self,*,min_chunk=2, max_chunk=4,minConnect=1, maxConnect=4,dominance = 0.5,
randomiseTarget = False,ringDecayRate=0.5):
''' Creates a PGM from otherwise random data by iteratively blending column values
Arguments:
df -- the dataframe that requires PGMing
            min_chunk -- minimum chunk size/concentric rings in the PGM (default=2)
            max_chunk -- maximum chunk size/concentric rings in the PGM (default=4)
            minConnect -- the minimum number of edges created per node during each sweep (default=1)
            maxConnect -- the maximum number of edges created per node during each sweep (default=4)
            dominance -- the degree of dominance of the native node distribution (default=0.5)
            randomiseTarget -- randomise the target to reduce the homogeneity of PGMs (default=False)
            ringDecayRate -- the degree to which subsequent rings lose the dominance (default=0.5)
Returns (to self):
df -- the prepped dataframe
G -- networkx Graph object
'''
modelEdges = []
# Create concentric rings, populated randomly by nodes from the dataframe
nodeLst = self.df.columns[1:]
self.df['A'] = sorted(list(self.df['A']))
dfTmp = self.df
ring1 = 0
while ring1 < 2:
ringLst = list(self.randomChunk(nodeLst,min_chunk=min_chunk, max_chunk=max_chunk))
ringLst = ['A'] + list((ringLst))
ring1 = len(ringLst[1])
isGoodPGM = False
        # Iterate over the concentric rings, blending each distribution with another
        # random set, sampled from all "higher" rings
while isGoodPGM == False:
for i, ring in enumerate(ringLst[:-1]):
availableNodes = list(chain.from_iterable(ringLst[i+1:]))
random.shuffle(availableNodes)
try:
connects = np.random.randint(low=minConnect,
high=round(maxConnect-i*ringDecayRate))
except:
connects = minConnect
randomConnections = list(set(availableNodes[0:connects]))
for node in ring:
for connection in randomConnections:
#df[connection] = df[connection]*dominance + df[node]*(1-dominance)
dfTmp[connection] = self.df[connection]*dominance + \
self.df[node]*(1-dominance)
modelEdges.append((node,connection))
# Convert edges and nodes to graphs and ensures random graph is a DAG
G = nx.DiGraph()
G.add_nodes_from(self.df.columns)
G.add_edges_from(modelEdges)
isGoodPGM = nx.is_directed_acyclic_graph(G)
dfTmp = dfTmp.astype(int)
        self.sklearnMet = None  # make a call to sklearn MNB to ensure networks are appropriately complex
self.df, self.G = dfTmp, G
#sklearn chunk/function
| apache-2.0 |
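`randomChunk` above slices an iterator into consecutive random-size pieces with `islice`. A deterministic sketch of the same pattern, seeded so the result is reproducible (names are illustrative):

```python
import random
from itertools import islice

def random_chunks(lst, min_chunk=2, max_chunk=3, seed=42):
    """Split lst into consecutive chunks of random length in
    [min_chunk, max_chunk], preserving order and covering every item
    (the final chunk may come up short)."""
    rng = random.Random(seed)
    it = iter(lst)
    while True:
        chunk = list(islice(it, rng.randint(min_chunk, max_chunk)))
        if not chunk:
            return
        yield chunk

chunks = list(random_chunks(list("ABCDEFGH")))
flat = [x for c in chunks for x in c]
```

Because `islice` consumes the shared iterator, each yielded chunk picks up exactly where the previous one stopped.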
wkfwkf/statsmodels | statsmodels/datasets/cpunish/data.py | 25 | 2597 | """US Capital Punishment dataset."""
__docformat__ = 'restructuredtext'
COPYRIGHT = """Used with express permission from the original author,
who retains all rights."""
TITLE = __doc__
SOURCE = """
Jeff Gill's `Generalized Linear Models: A Unified Approach`
http://jgill.wustl.edu/research/books.html
"""
DESCRSHORT = """Number of state executions in 1997"""
DESCRLONG = """This data describes the number of times capital punishment is implemented
at the state level for the year 1997. The outcome variable is the number of
executions. There were executions in 17 states.
Included in the data are explanatory variables for median per capita income
in dollars, the percent of the population classified as living in poverty,
the percent of Black citizens in the population, the rate of violent
crimes per 100,000 residents for 1996, a dummy variable indicating
whether the state is in the South, and (an estimate of) the proportion
of the population with a college degree of some kind.
"""
NOTE = """::
Number of Observations - 17
Number of Variables - 7
Variable name definitions::
EXECUTIONS - Executions in 1996
INCOME - Median per capita income in 1996 dollars
PERPOVERTY - Percent of the population classified as living in poverty
PERBLACK - Percent of black citizens in the population
VC100k96 - Rate of violent crimes per 100,00 residents for 1996
SOUTH - SOUTH == 1 indicates a state in the South
    DEGREE - An estimate of the proportion of the state population with a
college degree of some kind
State names are included in the data file, though not returned by load.
"""
from numpy import recfromtxt, column_stack, array
from statsmodels.datasets import utils as du
from os.path import dirname, abspath
def load():
"""
Load the cpunish data and return a Dataset class.
Returns
-------
Dataset instance:
See DATASET_PROPOSAL.txt for more information.
"""
data = _get_data()
return du.process_recarray(data, endog_idx=0, dtype=float)
def load_pandas():
"""
Load the cpunish data and return a Dataset class.
Returns
-------
Dataset instance:
See DATASET_PROPOSAL.txt for more information.
"""
data = _get_data()
return du.process_recarray_pandas(data, endog_idx=0, dtype=float)
def _get_data():
filepath = dirname(abspath(__file__))
data = recfromtxt(open(filepath + '/cpunish.csv', 'rb'), delimiter=",",
names=True, dtype=float, usecols=(1,2,3,4,5,6,7))
return data
| bsd-3-clause |
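`_get_data` above pulls selected numeric columns out of `cpunish.csv` with `recfromtxt(..., usecols=(1,2,3,4,5,6,7))`. A stdlib-only sketch of that select-and-convert step, run on a made-up two-row sample rather than the real file:

```python
import csv
import io

def read_float_columns(text, usecols):
    """Parse CSV text with a header row, keeping only the columns named
    by index in `usecols` and converting every value to float."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    names = [header[i] for i in usecols]
    rows = [[float(row[i]) for i in usecols] for row in reader]
    return names, rows

# Illustrative sample only; column layout mimics the dataset description.
sample = "STATE,EXECUTIONS,INCOME\nTexas,37,34453\nVirginia,9,41534\n"
names, rows = read_float_columns(sample, usecols=(1, 2))
```

Skipping column 0 here mirrors how the loader drops the state-name column, which is in the file but not returned by `load`.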
tawsifkhan/scikit-learn | sklearn/tree/export.py | 53 | 15772 | """
This module defines export functions for decision trees.
"""
# Authors: Gilles Louppe <g.louppe@gmail.com>
# Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Brian Holt <bdholt1@gmail.com>
# Noel Dawe <noel@dawe.me>
# Satrajit Gosh <satrajit.ghosh@gmail.com>
# Trevor Stephens <trev.stephens@gmail.com>
# Licence: BSD 3 clause
import numpy as np
from ..externals import six
from . import _tree
def _color_brew(n):
"""Generate n colors with equally spaced hues.
Parameters
----------
n : int
The number of colors required.
Returns
-------
color_list : list, length n
List of n tuples of form (R, G, B) being the components of each color.
"""
color_list = []
# Initialize saturation & value; calculate chroma & value shift
s, v = 0.75, 0.9
c = s * v
m = v - c
for h in np.arange(25, 385, 360. / n).astype(int):
# Calculate some intermediate values
h_bar = h / 60.
x = c * (1 - abs((h_bar % 2) - 1))
# Initialize RGB with same hue & chroma as our color
rgb = [(c, x, 0),
(x, c, 0),
(0, c, x),
(0, x, c),
(x, 0, c),
(c, 0, x),
(c, x, 0)]
r, g, b = rgb[int(h_bar)]
# Shift the initial RGB values to match value and store
rgb = [(int(255 * (r + m))),
(int(255 * (g + m))),
(int(255 * (b + m)))]
color_list.append(rgb)
return color_list
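`_color_brew` above walks equally spaced hues and converts each to RGB through the chroma relations `c = s*v` and `x = c*(1 - |(h/60 mod 2) - 1|)`. A condensed re-statement of the same routine, used here only to check its output properties (assuming the logic as written above):

```python
def color_brew(n):
    """n equally spaced hues -> list of (R, G, B) byte tuples, following
    the chroma / value-shift construction used above."""
    s, v = 0.75, 0.9
    c = s * v   # chroma
    m = v - c   # value shift
    colors = []
    for i in range(n):
        h = int(25 + i * 360.0 / n)  # hue in degrees, offset by 25
        h_bar = h / 60.0
        x = c * (1 - abs((h_bar % 2) - 1))
        rgb = [(c, x, 0), (x, c, 0), (0, c, x),
               (0, x, c), (x, 0, c), (c, 0, x), (c, x, 0)][int(h_bar)]
        colors.append(tuple(int(255 * (ch + m)) for ch in rgb))
    return colors

palette = color_brew(4)
```

Every channel stays inside the byte range because chroma plus shift never exceeds the value `v = 0.9`.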
def export_graphviz(decision_tree, out_file="tree.dot", max_depth=None,
feature_names=None, class_names=None, label='all',
filled=False, leaves_parallel=False, impurity=True,
node_ids=False, proportion=False, rotate=False,
rounded=False, special_characters=False):
"""Export a decision tree in DOT format.
This function generates a GraphViz representation of the decision tree,
which is then written into `out_file`. Once exported, graphical renderings
can be generated using, for example::
$ dot -Tps tree.dot -o tree.ps (PostScript format)
$ dot -Tpng tree.dot -o tree.png (PNG format)
The sample counts that are shown are weighted with any sample_weights that
might be present.
Read more in the :ref:`User Guide <tree>`.
Parameters
----------
decision_tree : decision tree classifier
The decision tree to be exported to GraphViz.
out_file : file object or string, optional (default="tree.dot")
Handle or name of the output file.
max_depth : int, optional (default=None)
The maximum depth of the representation. If None, the tree is fully
generated.
feature_names : list of strings, optional (default=None)
Names of each of the features.
class_names : list of strings, bool or None, optional (default=None)
Names of each of the target classes in ascending numerical order.
Only relevant for classification and not supported for multi-output.
If ``True``, shows a symbolic representation of the class name.
label : {'all', 'root', 'none'}, optional (default='all')
Whether to show informative labels for impurity, etc.
Options include 'all' to show at every node, 'root' to show only at
the top root node, or 'none' to not show at any node.
filled : bool, optional (default=False)
When set to ``True``, paint nodes to indicate majority class for
classification, extremity of values for regression, or purity of node
for multi-output.
leaves_parallel : bool, optional (default=False)
When set to ``True``, draw all leaf nodes at the bottom of the tree.
impurity : bool, optional (default=True)
When set to ``True``, show the impurity at each node.
node_ids : bool, optional (default=False)
When set to ``True``, show the ID number on each node.
proportion : bool, optional (default=False)
When set to ``True``, change the display of 'values' and/or 'samples'
to be proportions and percentages respectively.
rotate : bool, optional (default=False)
When set to ``True``, orient tree left to right rather than top-down.
rounded : bool, optional (default=False)
When set to ``True``, draw node boxes with rounded corners and use
Helvetica fonts instead of Times-Roman.
special_characters : bool, optional (default=False)
When set to ``False``, ignore special characters for PostScript
compatibility.
Examples
--------
>>> from sklearn.datasets import load_iris
>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier()
>>> iris = load_iris()
>>> clf = clf.fit(iris.data, iris.target)
>>> tree.export_graphviz(clf,
... out_file='tree.dot') # doctest: +SKIP
"""
def get_color(value):
# Find the appropriate color & intensity for a node
if colors['bounds'] is None:
# Classification tree
color = list(colors['rgb'][np.argmax(value)])
sorted_values = sorted(value, reverse=True)
alpha = int(255 * (sorted_values[0] - sorted_values[1]) /
(1 - sorted_values[1]))
else:
# Regression tree or multi-output
color = list(colors['rgb'][0])
alpha = int(255 * ((value - colors['bounds'][0]) /
(colors['bounds'][1] - colors['bounds'][0])))
# Return html color code in #RRGGBBAA format
color.append(alpha)
hex_codes = [str(i) for i in range(10)]
hex_codes.extend(['a', 'b', 'c', 'd', 'e', 'f'])
color = [hex_codes[c // 16] + hex_codes[c % 16] for c in color]
return '#' + ''.join(color)
def node_to_str(tree, node_id, criterion):
# Generate the node content string
if tree.n_outputs == 1:
value = tree.value[node_id][0, :]
else:
value = tree.value[node_id]
# Should labels be shown?
labels = (label == 'root' and node_id == 0) or label == 'all'
# PostScript compatibility for special characters
if special_characters:
            characters = ['&#35;', '<SUB>', '</SUB>', '&le;', '<br/>', '>']
node_string = '<'
else:
characters = ['#', '[', ']', '<=', '\\n', '"']
node_string = '"'
# Write node ID
if node_ids:
if labels:
node_string += 'node '
node_string += characters[0] + str(node_id) + characters[4]
# Write decision criteria
if tree.children_left[node_id] != _tree.TREE_LEAF:
# Always write node decision criteria, except for leaves
if feature_names is not None:
feature = feature_names[tree.feature[node_id]]
else:
feature = "X%s%s%s" % (characters[1],
tree.feature[node_id],
characters[2])
node_string += '%s %s %s%s' % (feature,
characters[3],
round(tree.threshold[node_id], 4),
characters[4])
# Write impurity
if impurity:
if isinstance(criterion, _tree.FriedmanMSE):
criterion = "friedman_mse"
elif not isinstance(criterion, six.string_types):
criterion = "impurity"
if labels:
node_string += '%s = ' % criterion
node_string += (str(round(tree.impurity[node_id], 4)) +
characters[4])
# Write node sample count
if labels:
node_string += 'samples = '
if proportion:
percent = (100. * tree.n_node_samples[node_id] /
float(tree.n_node_samples[0]))
node_string += (str(round(percent, 1)) + '%' +
characters[4])
else:
node_string += (str(tree.n_node_samples[node_id]) +
characters[4])
# Write node class distribution / regression value
if proportion and tree.n_classes[0] != 1:
# For classification this will show the proportion of samples
value = value / tree.weighted_n_node_samples[node_id]
if labels:
node_string += 'value = '
if tree.n_classes[0] == 1:
# Regression
value_text = np.around(value, 4)
elif proportion:
# Classification
value_text = np.around(value, 2)
elif np.all(np.equal(np.mod(value, 1), 0)):
# Classification without floating-point weights
value_text = value.astype(int)
else:
# Classification with floating-point weights
value_text = np.around(value, 4)
# Strip whitespace
value_text = str(value_text.astype('S32')).replace("b'", "'")
value_text = value_text.replace("' '", ", ").replace("'", "")
if tree.n_classes[0] == 1 and tree.n_outputs == 1:
value_text = value_text.replace("[", "").replace("]", "")
value_text = value_text.replace("\n ", characters[4])
node_string += value_text + characters[4]
# Write node majority class
if (class_names is not None and
tree.n_classes[0] != 1 and
tree.n_outputs == 1):
# Only done for single-output classification trees
if labels:
node_string += 'class = '
if class_names is not True:
class_name = class_names[np.argmax(value)]
else:
class_name = "y%s%s%s" % (characters[1],
np.argmax(value),
characters[2])
node_string += class_name
# Clean up any trailing newlines
if node_string[-2:] == '\\n':
node_string = node_string[:-2]
if node_string[-5:] == '<br/>':
node_string = node_string[:-5]
return node_string + characters[5]
def recurse(tree, node_id, criterion, parent=None, depth=0):
if node_id == _tree.TREE_LEAF:
raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)
left_child = tree.children_left[node_id]
right_child = tree.children_right[node_id]
# Add node with description
if max_depth is None or depth <= max_depth:
# Collect ranks for 'leaf' option in plot_options
if left_child == _tree.TREE_LEAF:
ranks['leaves'].append(str(node_id))
elif str(depth) not in ranks:
ranks[str(depth)] = [str(node_id)]
else:
ranks[str(depth)].append(str(node_id))
out_file.write('%d [label=%s'
% (node_id,
node_to_str(tree, node_id, criterion)))
if filled:
# Fetch appropriate color for node
if 'rgb' not in colors:
# Initialize colors and bounds if required
colors['rgb'] = _color_brew(tree.n_classes[0])
if tree.n_outputs != 1:
# Find max and min impurities for multi-output
colors['bounds'] = (np.min(-tree.impurity),
np.max(-tree.impurity))
elif tree.n_classes[0] == 1:
# Find max and min values in leaf nodes for regression
colors['bounds'] = (np.min(tree.value),
np.max(tree.value))
if tree.n_outputs == 1:
node_val = (tree.value[node_id][0, :] /
tree.weighted_n_node_samples[node_id])
if tree.n_classes[0] == 1:
# Regression
node_val = tree.value[node_id][0, :]
else:
# If multi-output color node by impurity
node_val = -tree.impurity[node_id]
out_file.write(', fillcolor="%s"' % get_color(node_val))
out_file.write('] ;\n')
if parent is not None:
# Add edge to parent
out_file.write('%d -> %d' % (parent, node_id))
if parent == 0:
# Draw True/False labels if parent is root node
angles = np.array([45, -45]) * ((rotate - .5) * -2)
out_file.write(' [labeldistance=2.5, labelangle=')
if node_id == 1:
out_file.write('%d, headlabel="True"]' % angles[0])
else:
out_file.write('%d, headlabel="False"]' % angles[1])
out_file.write(' ;\n')
if left_child != _tree.TREE_LEAF:
recurse(tree, left_child, criterion=criterion, parent=node_id,
depth=depth + 1)
recurse(tree, right_child, criterion=criterion, parent=node_id,
depth=depth + 1)
else:
ranks['leaves'].append(str(node_id))
out_file.write('%d [label="(...)"' % node_id)
if filled:
# color cropped nodes grey
out_file.write(', fillcolor="#C0C0C0"')
out_file.write('] ;\n')
if parent is not None:
# Add edge to parent
out_file.write('%d -> %d ;\n' % (parent, node_id))
own_file = False
try:
if isinstance(out_file, six.string_types):
if six.PY3:
out_file = open(out_file, "w", encoding="utf-8")
else:
out_file = open(out_file, "wb")
own_file = True
# The depth of each node for plotting with 'leaf' option
ranks = {'leaves': []}
# The colors to render each node with
colors = {'bounds': None}
out_file.write('digraph Tree {\n')
# Specify node aesthetics
out_file.write('node [shape=box')
rounded_filled = []
if filled:
rounded_filled.append('filled')
if rounded:
rounded_filled.append('rounded')
if len(rounded_filled) > 0:
out_file.write(', style="%s", color="black"'
% ", ".join(rounded_filled))
if rounded:
out_file.write(', fontname=helvetica')
out_file.write('] ;\n')
# Specify graph & edge aesthetics
if leaves_parallel:
out_file.write('graph [ranksep=equally, splines=polyline] ;\n')
if rounded:
out_file.write('edge [fontname=helvetica] ;\n')
if rotate:
out_file.write('rankdir=LR ;\n')
# Now recurse the tree and add node & edge attributes
if isinstance(decision_tree, _tree.Tree):
recurse(decision_tree, 0, criterion="impurity")
else:
recurse(decision_tree.tree_, 0, criterion=decision_tree.criterion)
# If required, draw leaf nodes at same depth as each other
if leaves_parallel:
for rank in sorted(ranks):
out_file.write("{rank=same ; " +
"; ".join(r for r in ranks[rank]) + "} ;\n")
out_file.write("}")
finally:
if own_file:
out_file.close()
| bsd-3-clause |
antoinebrl/practice-ML | rbf.py | 1 | 3759 | # Author : Antoine Broyelle
# Licence : MIT
# inspired by : KTH - DD2432 : Artificial Neural Networks and Other Learning Systems
# https://www.kth.se/student/kurser/kurs/DD2432?l=en
import numpy as np
from kmeans import Kmeans
from pcn import PCN
from utils.distances import euclidianDist
class RBF:
'''Radial Basis Function Network. Can be used for classification or function approximation'''
def __init__(self, inputs, targets, n=1, sigma=0, distance=euclidianDist,
weights=None, usage='class', normalization=False):
'''
:param inputs: set of data points as row vectors
:param targets: set of targets as row vectors
:param n: (int) number of weights.
:param sigma: (float) spread of receptive fields
:param distance: (function) compute metric between points
:param weights: set of weights. If None, weights are generated with K-means algorithm.
Otherwise provided weights are used no matter the value of n.
:param usage: (string) Should be equal to 'class' for classification and 'fctapprox' for
function approximation. Otherwise raise an error.
:param normalization: (bool) If true, perform a normalization of the hidden layer.
'''
if usage != 'class' and usage != 'fctapprox':  # compare by value, not identity
raise Exception('[RBF][__init__] the usage is unrecognized. Should be equal to '
'"class" for classification and "fctapprox" for function approximation')
self.targets = targets
self.inputs = inputs
self.dist = distance
self.n = n
self.weights = weights
self.usage = usage
self.normalization = normalization
if sigma == 0:
self.sigma = (inputs.max(axis=0)-inputs.min(axis=0)).max() / np.sqrt(2*n)
else:
self.sigma = sigma
def fieldActivation(self, inputs, weights, sigma, dist):
hidden = dist(inputs, weights)
hidden = np.exp(- hidden / sigma)
return hidden
def train(self, nbIte=100):
if self.weights is None:
km = Kmeans(self.inputs, k=self.n, distance=self.dist)
km.train(nbIte=1000)
self.weights = km.centers
hidden = self.fieldActivation(self.inputs, self.weights, self.sigma, self.dist)
if self.normalization:
hidden = hidden / np.sum(hidden, axis=1)[:, np.newaxis]
if self.usage == 'class':
self.pcn = PCN(inputs=hidden, targets=self.targets, delta=True)
return self.pcn.train(nbIte=nbIte)
else : # linear regression
self.weights2 = np.linalg.inv(np.dot(hidden.T, hidden))
self.weights2 = np.dot(self.weights2, np.dot(hidden.T, self.targets))
return np.dot(hidden, self.weights2)
def predict(self, data):
h = self.fieldActivation(data, self.weights, self.sigma, self.dist)
if self.usage == 'class':
return self.pcn.predict(h)
else:
return np.dot(h, self.weights2)
if __name__ == "__main__":
# Classification
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
XORtargets = np.array([[0], [1], [1], [0]])
rbf = RBF(inputs=inputs, targets=XORtargets, n=4)
print rbf.train(nbIte=300)
# Function approximation
import matplotlib.pyplot as plt
x = np.linspace(start=0, stop=2*np.pi, num=63)
y = np.sin(x)
w = np.linspace(start=0, stop=2 * np.pi, num=8)
x = x[:, np.newaxis]
y = y[:, np.newaxis]
w = w[:, np.newaxis]
rbf = RBF(inputs=x, targets=y, usage='fctapprox', weights=w, normalization=True)
out = rbf.train()
plt.plot(x,y, 'r')
plt.plot(x,out, 'b')
plt.show()
| mit |
gchrupala/reimaginet | imaginet/simple_data.py | 2 | 7498 | import numpy
import cPickle as pickle
import gzip
import os
import copy
import funktional.util as util
from funktional.util import autoassign
from sklearn.preprocessing import StandardScaler
import string
import random
# Types of tokenization
def words(sentence):
return sentence['tokens']
def characters(sentence):
return list(sentence['raw'])
def compressed(sentence):
return [ c.lower() for c in sentence['raw'] if c in string.letters ]
def phonemes(sentence):
return [ pho for pho in sentence['ipa'] if pho != "*" ]
class NoScaler():
def __init__(self):
pass
def fit_transform(self, x):
return x
def transform(self, x):
return x
def inverse_transform(self, x):
return x
class InputScaler():
def __init__(self):
self.scaler = StandardScaler()
def fit_transform(self, data):
flat = numpy.vstack(data)
self.scaler.fit(flat)
return [ self.scaler.transform(X) for X in data ]
def transform(self, data):
return [ self.scaler.transform(X) for X in data ]
def inverse_transform(self, data):
return [ self.scaler.inverse_transform(X) for X in data ]
def vector_padder(vecs):
"""Pads each vector in vecs with zeros at the beginning. Returns 3D tensor with dimensions:
(BATCH_SIZE, SAMPLE_LENGTH, NUMBER_FEATURES).
"""
max_len = max(map(len, vecs))
return numpy.array([ numpy.vstack([numpy.zeros((max_len-len(vec),vec.shape[1])) , vec])
for vec in vecs ], dtype='float32')
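The padding logic in `vector_padder` can be exercised in isolation; a self-contained sketch (renamed `pad_vectors` here to mark it as illustrative):

```python
import numpy

def pad_vectors(vecs):
    # Zero-pad every sample at the *front* of the time axis so all samples
    # share the longest length, then stack into (batch, length, features).
    max_len = max(map(len, vecs))
    return numpy.array([numpy.vstack([numpy.zeros((max_len - len(v), v.shape[1])), v])
                        for v in vecs], dtype='float32')

a = numpy.ones((2, 3))  # 2 timesteps, 3 features
b = numpy.ones((4, 3))  # 4 timesteps, 3 features
batch = pad_vectors([a, b])
print(batch.shape)  # (2, 4, 3); the short sample gets two rows of zeros in front
```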
class Batcher(object):
def __init__(self, mapper, pad_end=False):
autoassign(locals())
self.BEG = self.mapper.BEG_ID
self.END = self.mapper.END_ID
def pad(self, xss): # PAD AT BEGINNING
max_len = max((len(xs) for xs in xss))
def pad_one(xs):
if self.pad_end:
return xs + [ self.END for _ in range(0,(max_len-len(xs))) ]
return [ self.BEG for _ in range(0,(max_len-len(xs))) ] + xs
return [ pad_one(xs) for xs in xss ]
def batch_inp(self, sents):
mb = self.padder(sents)
return mb[:,1:]
def padder(self, sents):
return numpy.array(self.pad([[self.BEG]+sent+[self.END] for sent in sents]), dtype='int32')
def batch(self, gr):
"""Prepare minibatch.
Returns:
- input string
- visual target vector
- output string at t-1
- target string
"""
mb_inp = self.padder([x['tokens_in'] for x in gr])
mb_target_t = self.padder([x['tokens_out'] for x in gr])
inp = mb_inp[:,1:]
target_t = mb_target_t[:,1:]
target_prev_t = mb_target_t[:,0:-1]
target_v = numpy.array([ x['img'] for x in gr ], dtype='float32')
audio = vector_padder([ x['audio'] for x in gr ]) if gr[0]['audio'] is not None else None
return { 'input': inp,
'target_v':target_v,
'target_prev_t':target_prev_t,
'target_t':target_t,
'audio': audio }
class SimpleData(object):
"""Training / validation data prepared to feed to the model."""
def __init__(self, provider, tokenize=words, min_df=10, scale=True, scale_input=False, batch_size=64, shuffle=False, limit=None, curriculum=False, val_vocab=False):
autoassign(locals())
self.data = {}
self.mapper = util.IdMapper(min_df=self.min_df)
self.scaler = StandardScaler() if scale else NoScaler()
self.audio_scaler = InputScaler() if scale_input else NoScaler()
parts = insideout(self.shuffled(arrange(provider.iterImages(split='train'),
tokenize=self.tokenize,
limit=limit)))
parts_val = insideout(self.shuffled(arrange(provider.iterImages(split='val'), tokenize=self.tokenize)))
# TRAINING
if self.val_vocab:
_ = list(self.mapper.fit_transform(parts['tokens_in'] + parts_val['tokens_in']))
parts['tokens_in'] = self.mapper.transform(parts['tokens_in']) # FIXME UGLY HACK
else:
parts['tokens_in'] = self.mapper.fit_transform(parts['tokens_in'])
parts['tokens_out'] = self.mapper.transform(parts['tokens_out'])
parts['img'] = self.scaler.fit_transform(parts['img'])
parts['audio'] = self.audio_scaler.fit_transform(parts['audio'])
self.data['train'] = outsidein(parts)
# VALIDATION
parts_val['tokens_in'] = self.mapper.transform(parts_val['tokens_in'])
parts_val['tokens_out'] = self.mapper.transform(parts_val['tokens_out'])
parts_val['img'] = self.scaler.transform(parts_val['img'])
parts_val['audio'] = self.audio_scaler.transform(parts_val['audio'])
self.data['valid'] = outsidein(parts_val)
self.batcher = Batcher(self.mapper, pad_end=False)
def shuffled(self, xs):
if not self.shuffle:
return xs
else:
zs = copy.copy(list(xs))
random.shuffle(zs)
return zs
def iter_train_batches(self):
# sort data by length
if self.curriculum:
data = [self.data['train'][i] for i in numpy.argsort([len(x['tokens_in']) for x in self.data['train']])]
else:
data = self.data['train']
for bunch in util.grouper(data, self.batch_size*20):
bunch_sort = [ bunch[i] for i in numpy.argsort([len(x['tokens_in']) for x in bunch]) ]
for item in util.grouper(bunch_sort, self.batch_size):
yield self.batcher.batch(item)
def iter_valid_batches(self):
for bunch in util.grouper(self.data['valid'], self.batch_size*20):
bunch_sort = [ bunch[i] for i in numpy.argsort([len(x['tokens_in']) for x in bunch]) ]
for item in util.grouper(bunch_sort, self.batch_size):
yield self.batcher.batch(item)
def dump(self, model_path):
"""Write scaler and batcher to disc."""
pickle.dump(self.scaler, gzip.open(os.path.join(model_path, 'scaler.pkl.gz'), 'w'),
protocol=pickle.HIGHEST_PROTOCOL)
pickle.dump(self.batcher, gzip.open(os.path.join(model_path, 'batcher.pkl.gz'), 'w'),
protocol=pickle.HIGHEST_PROTOCOL)
def arrange(data, tokenize=words, limit=None):
for i,image in enumerate(data):
if limit is not None and i > limit:
break
for sent in image['sentences']:
toks = tokenize(sent)
yield {'tokens_in': toks,
'tokens_out': toks,
'audio': sent.get('audio'),
'img': image['feat']}
def insideout(ds):
"""Transform a list of dictionaries to a dictionary of lists."""
ds = list(ds)
result = dict([(k, []) for k in ds[0].keys()])
for d in ds:
for k,v in d.items():
result[k].append(v)
return result
def outsidein(d):
"""Transform a dictionary of lists to a list of dictionaries."""
ds = []
keys = d.keys()
for key in keys:
d[key] = list(d[key])
for i in range(len(d.values()[0])):
ds.append(dict([(k, d[k][i]) for k in keys]))
return ds
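`insideout` and `outsidein` are a transpose between row and column layouts; a Python 3 sketch of the same round trip (helper names changed to mark them as illustrative):

```python
def inside_out(ds):
    # List of dicts -> dict of lists (every dict must share the same keys).
    ds = list(ds)
    result = {k: [] for k in ds[0]}
    for d in ds:
        for k, v in d.items():
            result[k].append(v)
    return result

def outside_in(d):
    # Dict of lists -> list of dicts (all lists must share the same length).
    keys = list(d)
    n = len(d[keys[0]])
    return [{k: d[k][i] for k in keys} for i in range(n)]

rows = [{'x': 1, 'y': 'a'}, {'x': 2, 'y': 'b'}]
cols = inside_out(rows)
print(cols)                       # {'x': [1, 2], 'y': ['a', 'b']}
print(outside_in(cols) == rows)   # True -- the two transforms are inverses
```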
| mit |
cicwi/tomo_box | tomobox.py | 1 | 90395 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Feb 10 15:39:33 2017
@author: kostenko & der sarkissian
*********** Pilot for the new tomobox *************
"""
#%% Initialization
import matplotlib.pyplot as plt
from scipy import misc # Reading BMPs
import os
import numpy
import re
import time
import transforms3d
from transforms3d import euler
#import pkg_resources
#pkg_resources.require("dxchange==0.1.2")
#from mayavi import mlab
#from tvtk.util.ctf import ColorTransferFunction
# **************************************************************
# Parent class for all sinogram subclasses:
# **************************************************************
class subclass(object):
def __init__(self, parent):
self._parent = parent
# **************************************************************
# Geometry class
# **************************************************************
class history():
'''
Container for the history records of operations applied to the data.
'''
_records = []
@property
def records(self):
return self._records.copy()
def __init__(self):
self.add_record('Created')
def add_record(self, operation = '', properties = []):
# Add a new history record:
timestamp = time.ctime()
# Add new record:
self._records.append([operation, properties, timestamp, numpy.size(self._records) ])
@property
def keys(self):
'''
Return the list of operations.
'''
return [ii[0] for ii in self._records]
def find_record(self, operation):
# Find the last record by its operation name:
result = [ii for ii in self._records if operation == ii[0]]
return result[-1] if result else None
def time_last(self):
# Time between two latest records:
if len(self._records) > 1:
return self._records[-1][2] - self._records[-2][2]
def _delete_after(self, operation):
# Delete the history after the last backup record:
record = self.find_record(operation)
# Delete all records after the one that was found:
if not record is None:
self._records = self._records[0:record[3]]
# **************************************************************
# Geometry class
# **************************************************************
class geometry(subclass):
'''
For now, this class describes circular motion cone beam geometry.
It contains standard global parameters such as source to detector distance and magnification
but also allows to add modifiers to many degrees of freedom for each projection separately.
'''
# Private properties:
_src2obj = 0
_det2obj = 0
_det_pixel = [1, 1]
_thetas = [0, numpy.pi * 2]
# Additional misc public:
roi_fov = False
'''
Modifiers (dictionary of geometry modifiers that can be applied globaly or per projection)
VRT, HRZ and MAG are vertical, horizontal and perpendicular directions relative to the original detector orientation
'''
def __init__(self, parent):
subclass.__init__(self, parent)
self.modifiers = {'det_vrt': 0, 'det_hrz': 0, 'det_mag': 0, 'src_vrt': 0, 'src_hrz': 0, 'src_mag': 0, 'det_rot': 0, 'dtheta':0, 'vol_x_tra': 0, 'vol_y_tra':0, 'vol_z_tra':0, 'vol_x_rot':0, 'vol_y_rot':0, 'vol_z_rot':0}
def initialize(self, src2obj, det2obj, det_pixel, theta_range, theta_n):
'''
Make sure that all relevant properties are set to some value.
'''
self._src2obj = src2obj
self._det2obj = det2obj
self._det_pixel = det_pixel
self.init_thetas(theta_range, theta_n)
def modifiers_reset(self):
for key in self.modifiers.keys():
self.modifiers[key] = 0
def get_modifier(self, key, index = None):
'''
Get a geometry modifier for a prjection with index = index. Or take the first modifier that corresponds to the key.
'''
if (numpy.size(self.modifiers[key]) == 1) or (index is None):
return self.modifiers[key]
elif numpy.size(self.modifiers[key]) > index:
return self.modifiers[key][index]
else: print('Geometry modifier not found!')
#else: self._parent.error('Geometry modifier not found!')
return None
def translate_volume(self, vector, additive = False):
if additive:
self.modifiers['vol_x_tra'] += vector[0]
self.modifiers['vol_y_tra'] += vector[1]
self.modifiers['vol_z_tra'] += vector[2]
else:
self.modifiers['vol_x_tra'] = vector[0]
self.modifiers['vol_y_tra'] = vector[1]
self.modifiers['vol_z_tra'] = vector[2]
def rotate_volume(self, vector):
self.modifiers['vol_x_rot'] += vector[0]
self.modifiers['vol_y_rot'] += vector[1]
self.modifiers['vol_z_rot'] += vector[2]
def add_thermal_shifts(self, thermal_shifts, additive = True):
'''
Shift the source according to the thermal shift data
'''
if additive:
self.modifiers['src_hrz'] -= thermal_shifts[:,0]/(self.magnification - 1.0)
self.modifiers['src_vrt'] -= thermal_shifts[:,1]/(self.magnification - 1.0)
else:
self.modifiers['src_hrz'] = -thermal_shifts[:,0]/(self.magnification - 1.0)
self.modifiers['src_vrt'] = -thermal_shifts[:,1]/(self.magnification - 1.0)
def rotation_axis_shift(self, shift, additive = False):
if additive:
self.modifiers['det_hrz'] += shift / self.magnification
self.modifiers['src_hrz'] += shift / self.magnification
else:
self.modifiers['det_hrz'] = shift / self.magnification
self.modifiers['src_hrz'] = shift / self.magnification
def optical_axis_shift(self, shift, additive = False):
if additive:
self.modifiers['src_vrt'] += shift
else:
self.modifiers['src_vrt'] = shift
# Center the volume around the new axis:
M = self.magnification
self.translate_volume([0, 0, -shift*(M-1)], additive = additive)
# Set/Get methods (very boring part of code but, hopefully, it will make geometry look prettier from outside):
@property
def src2obj(self):
return self._src2obj
@src2obj.setter
def src2obj(self, src2obj):
self._src2obj = src2obj
@property
def det2obj(self):
return self._det2obj
@det2obj.setter
def det2obj(self, det2obj):
self._det2obj = det2obj
@property
def magnification(self):
return (self._det2obj + self._src2obj) / self._src2obj
@property
def src2det(self):
return self._src2obj + self._det2obj
@property
def det_pixel(self):
return self._det_pixel
@det_pixel.setter
def det_pixel(self, det_pixel):
self._det_pixel = det_pixel
@property
def img_pixel(self):
return self._det_pixel / self.magnification
@img_pixel.setter
def img_pixel(self, img_pixel):
self._det_pixel = img_pixel * self.magnification
@property
def det_size(self):
# We wont take into account the det_size from the log file. Only use actual data size.
if self._parent.data is None:
self._parent.warning('No raw data in the pipeline. The detector size is not known.')
else:
return self._det_pixel * self._parent.data.shape[::2]
@property
def thetas(self):
dt = 1#self._parent._data_sampling
return numpy.array(self._thetas[::dt])
@thetas.setter
def thetas(self, thetas):
dt = self._parent._data_sampling
self._thetas[::dt] = numpy.array(thetas)
@property
def theta_n(self):
return self.thetas.size
@property
def theta_range(self):
return (self.thetas[0], self.thetas[-1])
@theta_range.setter
def theta_range(self, theta_range):
# Change the theta range:
if self.thetas.size > 2:
self.thetas = numpy.linspace(theta_range[0], theta_range[1], self.thetas.size)
else:
self.thetas = [theta_range[0], theta_range[1]]
@property
def theta_step(self):
return numpy.mean(self._thetas[1:] - self._thetas[0:-1])
def init_thetas(self, theta_range = [], theta_n = 2):
# Initialize thetas array. You can first initialize with theta_range, and add theta_n later.
if theta_range == []:
self.thetas = numpy.linspace(self.thetas[0], self.thetas[-1], theta_n, True)
else:
self.thetas = numpy.linspace(theta_range[0], theta_range[1], theta_n, True)
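The derived quantities in the geometry class are simple ratios; with src2obj = 100 and det2obj = 300, the magnification is (100 + 300) / 100 = 4, so a 0.2 mm detector pixel maps to a 0.05 mm image pixel. A minimal standalone sketch of those two formulas (function names are illustrative):

```python
def magnification(src2obj, det2obj):
    # (source-to-object + object-to-detector) / source-to-object
    return (src2obj + det2obj) / src2obj

def img_pixel(det_pixel, src2obj, det2obj):
    # Detector pixel demagnified into the object plane.
    return det_pixel / magnification(src2obj, det2obj)

print(magnification(100, 300))   # 4.0
print(img_pixel(0.2, 100, 300))  # 0.05
```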
# **************************************************************
# META class and subclasses
# **************************************************************
class meta(subclass):
'''
This object contains various properties of the imaging system and the history of pre-processing.
'''
geometry = None
history = history()
def __init__(self, parent):
subclass.__init__(self, parent)
self.geometry = geometry(self._parent)
physics = {'voltage': 0, 'current':0, 'exposure': 0}
lyrics = ''
# **************************************************************
# DATA class
# **************************************************************
import scipy.interpolate as interp_sc
import sys
class data(subclass):
'''
Memory allocation, reading and writing the data. This version only supports data stored in CPU memory.
'''
# Raw data, flat field (reference), dark field and backup
_data = None
_ref = None
_dark = None
_backup = None
# Keep a second copy of the data each time the data is modified?
_backup_update = False
# Get/Set methods:
@property
def data(self):
dx = self._parent._data_sampling
dt = 1#self._parent._data_sampling
if dx + dt > 2:
return numpy.ascontiguousarray(self._data[::dx, ::dt, ::dx])
else:
return self._data
@data.setter
def data(self, data):
dx = self._parent._data_sampling
dt = 1#self._parent._data_sampling
if self._backup_update:
self._parent.io.save_backup()
self._data[::dx, ::dt, ::dx] = data
self._parent.meta.history.add_record('set data.data', [])
@property
def shape(self):
dx = self._parent._data_sampling
dt = 1#self._parent._data_sampling
return self._data[::dx, ::dt, ::dx].shape
@property
def size_mb(self):
'''
Get the size of the data object in MB.
'''
return self._data.nbytes / float(1 << 20)  # bytes -> MB
# Public methods:
def data_at_theta(self, target_theta):
'''
Use interpolation to get a projection at a given theta
'''
sz = self.shape
thetas = self._parent.meta.geometry.thetas
interp_grid = numpy.transpose(numpy.meshgrid(target_theta, numpy.arange(sz[0]), numpy.arange(sz[2])), (1,2,3,0))
original_grid = (numpy.arange(sz[0]), thetas, numpy.arange(sz[2]))
return interp_sc.interpn(original_grid, self.data, interp_grid)
# **************************************************************
# IO class and subroutines
# **************************************************************
from stat import ST_CTIME
import gc
import csv
def sort_by_date(files):
'''
Sort file entries by date
'''
# get all entries in the directory w/ stats
entries = [(os.stat(path)[ST_CTIME], path) for path in files]
return [path for date, path in sorted(entries)]
def sort_natural(files):
'''
Sort file entries using the natural (human) sorting
'''
# Keys
keys = [int(re.findall('\d+', f)[-1]) for f in files]
# Sort files using keys:
files = [f for (k, f) in sorted(zip(keys, files))]
return files
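The natural sort keys on the *last* run of digits in each file name, so 'img10' lands after 'img2' rather than before it. A self-contained sketch (renamed `natural_sort` to mark it as illustrative):

```python
import re

def natural_sort(files):
    # Extract the last group of digits in each name and sort by its integer value.
    keys = [int(re.findall(r'\d+', f)[-1]) for f in files]
    return [f for _, f in sorted(zip(keys, files))]

print(natural_sort(['scan_10.tif', 'scan_2.tif', 'scan_1.tif']))
# ['scan_1.tif', 'scan_2.tif', 'scan_10.tif']
```

A plain lexicographic `sorted()` would instead yield `['scan_1.tif', 'scan_10.tif', 'scan_2.tif']`.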
def read_image_stack(file):
'''
Read a stack of some image files
'''
# Remove the extension and the last few letters:
name = os.path.basename(file)
ext = os.path.splitext(name)[1]
name = os.path.splitext(name)[0]
digits = len(re.findall('\d+$', name)[0])
name_nonb = re.sub('\d+$', '', name)
path = os.path.dirname(file)
# Get the files of the same extension that finish by the same amount of numbers:
files = os.listdir(path)
files = [x for x in files if (re.findall('\d+$', os.path.splitext(x)[0]) and len(re.findall('\d+$', os.path.splitext(x)[0])[0]) == digits)]
# Get the files that are alike and sort:
files = [os.path.join(path,x) for x in files if ((name_nonb in x) and (os.path.splitext(x)[-1] == ext))]
#files = sorted(files)
files = sort_natural(files)
print('********************')
#print(files)
# Read the first file:
image = misc.imread(files[0], flatten= 0)
sz = numpy.shape(image)
data = numpy.zeros((len(files), sz[0], sz[1]), dtype = numpy.float32)
# Read all files:
ii = 0
for filename in files:
a = misc.imread(filename, flatten= 0)
if a.ndim > 2:
a = a.mean(2)
data[ii, :, :] = a
ii = ii + 1
print(ii, 'files were loaded.')
return data
def update_path(path, io):
'''
Memorize the path if it is provided; otherwise use the one remembered earlier.
'''
if path == '':
path = io.path
else:
io.path = path
if path == '':
io._parent.error('Path to the file was not specified.')
return path
def extract_2d_array(dimension, index, data):
'''
Extract a 2d array from 3d.
'''
if dimension == 0:
return data[index, :, :]
elif dimension == 1:
return data[:, index, :]
else:
return data[:, :, index]
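`extract_2d_array` is axis selection by hand; a small sketch showing the slicing on a toy volume (the helper name `slice_3d` is illustrative, and `numpy.take(data, index, axis=dimension)` does the same job):

```python
import numpy

def slice_3d(dimension, index, data):
    # Pull one 2-D slice out of a 3-D volume along the chosen axis.
    if dimension == 0:
        return data[index, :, :]
    elif dimension == 1:
        return data[:, index, :]
    return data[:, :, index]

vol = numpy.arange(24).reshape(2, 3, 4)
print(slice_3d(1, 0, vol).shape)  # (2, 4)
```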
class io(subclass):
'''
Static class for loading / saving the data
'''
path = ''
#settings = {'sort_by_date':False}
def manual_init(self, src2obj = 100, det2obj = 100, theta_n = 128, theta_range = [0, 2*numpy.pi], det_width = 128, det_height = 128, det_pixel = [0.1, 0.1]):
'''
Manual initialization can be used when log file with methadata can not be read or
if a sinthetic data needs to be created.
'''
prnt = self._parent
# Initialize the geometry data:
prnt.meta.geometry.initialize(src2obj, det2obj, det_pixel, theta_range, theta_n)
# Make an empty projections:
prnt.data._data = (numpy.zeros([det_height, theta_n, det_width]))
prnt.meta.history.add_record('io.manual_init')
def read_raw(self, path = '', index_range = [], y_range = [], x_range = []):
'''
Read projection files automatically.
This will look for files with numbers in the last 4 letters of their names.
'''
path = update_path(path, self)
# Free memory:
self._parent.data._data = None
gc.collect()
# if it's a file, read all alike, if a directory find a file to start from:
if os.path.isfile(path):
filename = path
path = os.path.dirname(path)
# if file name is provided, the range is needed:
first = index_range[0]
last = index_range[1]
else:
# Try to find how many files do we need to read:
# Get the files only:
files = [x for x in os.listdir(path) if os.path.isfile(os.path.join(path, x))]
# Get the last 4 letters:
index = [os.path.splitext(x)[0][-4:] for x in files]
# Filter out non-numbers:
index = [int(re.findall('\d+', x)[0]) for x in index if re.findall('\d+', x)]
# Extract a number from the first element of the list:
first = min(index)
# Extract a number from the first element of the list:
last = max(index)
print('We found projections in the range from ', first, 'to ', last, flush=True)
# Find the file with the maximum index value:
filename = [x for x in os.listdir(path) if str(last) in x][0]
# Find the file with the minimum index:
filename = sorted([x for x in os.listdir(path) if (filename[:-8] in x)&(filename[-3:] in x)])[0]
print('Reading a stack of images')
print('Seed file name is:', filename)
#srt = self.settings['sort_by_date']
if self._parent:
self._parent.data._data = (read_image_stack(os.path.join(path,filename)))
else:
return read_image_stack(os.path.join(path,filename))
# Trim the data with the provided inputs
if (index_range != []):
print(index_range)
self._parent.data._data = self._parent.data._data[index_range[0]:index_range[1], :, :]
if (y_range != []):
self._parent.data._data = self._parent.data._data[:, y_range[0]:y_range[1], :]
if (x_range != []):
self._parent.data._data = self._parent.data._data[:, :, x_range[0]:x_range[1]]
# Transpose to satisfy ASTRA dimensions:
self._parent.data._data = numpy.transpose(self._parent.data._data, (1, 0, 2))
self._parent.data._data = numpy.flipud(self._parent.data._data)
self._parent.data._data = numpy.ascontiguousarray(self._parent.data._data, dtype=numpy.float32)
# add record to the history:
self._parent.meta.history.add_record('io.read_raw', path)
def read_ref(self, path_file):
'''
Read reference flat field.
'''
ref = misc.imread(path_file, flatten= 0)
if self._parent:
self._parent.data._ref = ref
# Cast to float to avoid problems with divisions in the future:
self._parent.data._ref = numpy.float32(self._parent.data._ref)
# add record to the history:
self._parent.meta.history.add_record('io.read_ref', path_file)
self._parent.message('Flat field reference image loaded.')
def read_dark(self, path_file):
'''
Read the dark field image.
'''
dark = misc.imread(path_file, flatten= 0)
if self._parent:
self._parent.data._dark = dark
# Cast to float to avoid problems with divisions in the future:
self._parent.data._dark = numpy.float32(self._parent.data._dark)
# add record to the history:
self._parent.meta.history.add_record('io.read_dark', path_file)
self._parent.message('Dark field image loaded.')
def save_backup(self):
'''
Make a copy of data in memory, just in case.
'''
self._parent.data._backup = (self._parent.data._data.copy(), self._parent.meta.geometry.thetas.copy())
# add record to the history:
self._parent.meta.history.add_record('io.save_backup', 'backup saved')
self._parent.message('Backup saved.')
# In case the user wants to keep the backup...
return self._parent.data._backup
def load_backup(self, backup = None):
'''
Retrieve a copy of data from the backup.
'''
# If backup is provided:
if not backup is None:
self._parent.data._data = backup[0].copy()
self._parent.meta.geometry.thetas = backup[1].copy()
else:
if self._parent.data._backup == [] or self._parent.data._backup is None:
self._parent.error("I can't find a backup, master.")
self._parent.data._data = self._parent.data._backup[0].copy()
self._parent.meta.geometry.thetas = self._parent.data._backup[1].copy()
# Clean memory:
self._parent.data._backup = None
gc.collect()
# Add record to the history:
self._parent.meta.history.add_record('io.load_backup', 'backup loaded')
self._parent.message('Backup loaded.')
def read_meta(self, path = '', kind = 'flexray'):
'''
Parser for the metadata file that contains information about the acquisition system.
'''
path = update_path(path, self)
if (str.lower(kind) == 'skyscan'):
# Parse the SkyScan log file
self._parse_skyscan_meta(path)
elif (str.lower(kind) == 'flexray'):
# Parse the SkyScan log file
self._parse_flexray_meta(path)
elif (str.lower(kind) == 'asi'):
# Parse the ASI log file
self._parse_asi_meta(path)
# add record to the history:
self._parent.meta.history.add_record('io.read_meta', path)
self._parent.message('Meta data loaded.')
# **************************************************************
# Parsers for metadata files
# **************************************************************
def _parse_asi_meta(self, path = ''):
'''
Use this routine to parse a text file generated by Navrit
'''
path = update_path(path,self)
# Try to find the log file in the selected path
log_file = [x for x in os.listdir(path) if (os.path.isfile(os.path.join(path, x)) and 'txt' in os.path.join(path, x))]
if len(log_file) == 0:
raise FileNotFoundError('Log file not found in path: ' + path)
if len(log_file) > 1:
self._parent.warning('Found several log files. Currently using: ' + log_file[0])
log_file = os.path.join(path, log_file[0])
else:
log_file = os.path.join(path, log_file[0])
# Create an empty dictionary:
records = {}
# Create a dictionary of keywords (skyscan -> our geometry definition):
# Note: keyword dictionary for the ASI log (asi -> our geometry definition):
geom_dict = {'pixel pitch':'det_pixel', 'object to source':'src2obj', 'object to detector':'det2obj', 'tube voltage':'voltage', 'tube power':'power', 'tube current':'current'}
with open(log_file, 'r') as logfile:
for line in logfile:
name, var = line.partition("=")[::2]
name = name.strip().lower()
# If there is a unit after the value (initialize to '' so stale values from a previous line are not reused):
unit = ''
if len(var.split()) > 1:
unit = var.split()[1]
var = var.split()[0]
# If name contains one of the keys (names can contain other stuff like units):
geom_key = [geom_dict[key] for key in geom_dict.keys() if key in name]
if geom_key != []:
factor = self._parse_unit(unit)
records[geom_key[0]] = float(var)*factor
# Convert the geometry dictionary to geometry object:
self._parent.meta.geometry.src2obj = records['src2obj']
self._parent.meta.geometry.det2obj = records['det2obj']
self._parent.meta.geometry.det_pixel = numpy.array([records['det_pixel'], records['det_pixel']]) * self._parse_unit('um')
self._parent.meta.geometry.theta_range = [0, 2*numpy.pi]
# Set some physics properties:
self._parent.meta.physics['voltage'] = records['voltage']
self._parent.meta.physics['power'] = records['power']
self._parent.meta.physics['current'] = records['current']
def _parse_flexray_meta(self, path = ''):
'''
Use this routine to parse 'scan settings.txt' file generated by FlexRay machine
'''
path = update_path(path,self)
# Try to find the log file in the selected path
log_file = [x for x in os.listdir(path) if (os.path.isfile(os.path.join(path, x)) and 'settings.txt' in os.path.join(path, x))]
if len(log_file) == 0:
raise FileNotFoundError('Log file not found in path: ' + path)
if len(log_file) > 1:
self._parent.warning('Found several log files. Currently using: ' + log_file[0])
log_file = os.path.join(path, log_file[0])
else:
log_file = os.path.join(path, log_file[0])
# Create an empty dictionary:
records = {}
# Create a dictionary of keywords (flexray -> our geometry definition):
geom_dict = {'voxel size':'img_pixel', 'sod':'src2obj', 'sdd':'src2det', '# projections':'theta_n',
'last angle':'last_angle', 'start angle':'first_angle', 'tube voltage':'voltage', 'tube power':'power', 'exposure time (ms)':'exposure'}
with open(log_file, 'r') as logfile:
for line in logfile:
name, var = line.partition(":")[::2]
name = name.strip().lower()
# If name contains one of the keys (names can contain other stuff like units):
geom_key = [geom_dict[key] for key in geom_dict.keys() if key in name]
if geom_key != []:
factor = self._parse_unit(name)
records[geom_key[0]] = float(var)*factor
# Convert the geometry dictionary to geometry object:
self._parent.meta.geometry.src2obj = records['src2obj']
self._parent.meta.geometry.det2obj = records['det2obj']
self._parent.meta.geometry.img_pixel = [records['img_pixel'], records['img_pixel']] * self._parse_unit('um')
self._parent.meta.geometry.theta_range = [records['first_angle'], records['last_angle']] * self._parse_unit('deg')
# Set some physics properties:
self._parent.meta.physics['voltage'] = records['voltage']
self._parent.meta.physics['power'] = records['power']
self._parent.meta.physics['current'] = records['current']
self._parent.meta.physics['current'] = records['exposure']
def _parse_skyscan_meta(self, path = ''):
path = update_path(path,self)
# Try to find the log file in the selected path
log_file = [x for x in os.listdir(path) if (os.path.isfile(os.path.join(path, x)) and os.path.splitext(os.path.join(path, x))[1] == '.log')]
if len(log_file) == 0:
raise FileNotFoundError('Log file not found in path: ' + path)
if len(log_file) > 1:
self._parent.warning('Found several log files. Currently using: ' + log_file[0])
log_file = os.path.join(path, log_file[0])
else:
log_file = os.path.join(path, log_file[0])
#Once the file is found, parse it
records = {}
# Create a dictionary of keywords (skyscan -> our geometry definition):
geom_dict = {'camera pixel size': 'det_pixel', 'image pixel size': 'img_pixel', 'object to source':'src2obj', 'camera to source':'src2det',
'optical axis':'optical_axis', 'rotation step':'rot_step', 'exposure':'exposure', 'source voltage':'voltage', 'source current':'current',
'camera binning':'det_binning', 'number of rows':'det_rows', 'number of columns':'det_cols', 'postalignment':'det_offset', 'object bigger than fov':'roi_fov'}
# Removed 'image rotation':'det_tilt', can be added again but not sure of the purpose of this key
with open(log_file, 'r') as logfile:
for line in logfile:
name, val = line.partition("=")[::2]
name = name.strip().lower()
# If name contains one of the keys (names can contain other stuff like units):
geom_key = [geom_dict[key] for key in geom_dict.keys() if key in name]
if geom_key != [] and (geom_key[0] != 'det_binning') and (geom_key[0] != 'det_offset') and (geom_key[0] != 'roi_fov') :
factor = self._parse_unit(name)
records[geom_key[0]] = float(val)*factor
elif geom_key != [] and geom_key[0] == 'det_binning':
# Parse with the 'x' separator
bin_x, bin_y = val.partition("x")[::2]
records[geom_key[0]] = [float(bin_x), float(bin_y)]
elif geom_key != [] and geom_key[0] == 'det_offset':
records[geom_key[0]] = float(val)
elif geom_key != [] and geom_key[0] == 'roi_fov':
if val.strip().lower() == 'off':
records[geom_key[0]] = False
else:
records[geom_key[0]] = True
# Convert the geometry dictionary to geometry object:
self._parent.meta.geometry.src2obj = records['src2obj']
self._parent.meta.geometry.det2obj = records['src2det'] - records['src2obj']
self._parent.meta.geometry.roi_fov = records.get('roi_fov', False)
self._parent.meta.geometry.det_pixel = [records['det_pixel'] * b for b in records['det_binning']]
# Initialize the thetas
records['first_angle'] = 0
if 'nb_angle' not in records:
self._parent.warning('Number of angles was not found by the parser. Will use the raw data shape instead.')
records['nb_angle'] = self._parent.data.shape[1]
if not 'last_angle' in records.keys():
if (not 'rot_step' in records.keys()) or ('rot_step' in records.keys() and records['rot_step'] == 0.0):
self._parent.warning('Assuming that the last rotation angle is 360 degrees and rotation step is adjusted accordingly to the number of projections')
records['last_angle'] = 2 * numpy.pi
records['rot_step'] = (records['last_angle'] - records['first_angle']) / (records['nb_angle'] - 1)
else:
records['last_angle'] = records['first_angle'] + records['rot_step'] * (records['nb_angle'] - 1)
self._parent.meta.geometry.init_thetas([records['first_angle'], records['last_angle']], records['nb_angle'])
# Set some physics properties:
self._parent.meta.physics['voltage'] = records.get('voltage')
self._parent.meta.physics['power'] = records.get('power')
self._parent.meta.physics['current'] = records.get('current')
self._parent.meta.physics['exposure'] = records.get('exposure')
# Convert optical axis into detector offset (skyscan measures lines from the bottom)
self._parent.meta.geometry.rotation_axis_shift(records['det_offset'], additive = False)
self._parent.meta.geometry.optical_axis_shift(records['optical_axis'] - records['det_rows']/2.0, additive = False)
# Convert detector tilt into radian units (degrees assumed)
if 'det_tilt' in records.keys():
self._parent.meta.geometry.modifiers['det_rot'] = records['det_tilt'] * self._parse_unit('deg')
def _parse_unit(self, string):
# Look at the inside of trailing parenthesis
unit = ''
factor = 1.0
if string.endswith(')'):
unit = string[string.rfind('(')+1:-1].strip().lower()
else:
unit = string.strip().lower()
units_dictionary = {'um':0.001, 'mm':1, 'cm':10.0, 'm':1e3, 'rad':1, 'deg':numpy.pi / 180.0, 'ms':1, 's':1e3, 'us':0.001, 'kev':1, 'mev':1e3, 'ev':0.001,
'kv':1, 'mv':1e3, 'v':0.001, 'ua':1, 'ma':1e3, 'a':1e6, 'line':1}
if unit in units_dictionary.keys():
factor = units_dictionary[unit]
else:
factor = 1.0
self._parent.warning('Unknown unit: ' + unit + '. Skipping.')
return factor
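The unit lookup above can be exercised in isolation. Below is a hedged, standalone sketch (the names `UNITS` and `parse_unit` are illustrative, not part of the toolbox) of how `_parse_unit` extracts a trailing parenthesized unit and maps it to a factor in the toolbox's base units (mm, radians, ms):

```python
import numpy

# Illustrative subset of the unit table above; factors convert to base units.
UNITS = {'um': 0.001, 'mm': 1.0, 'cm': 10.0, 'm': 1e3,
         'rad': 1.0, 'deg': numpy.pi / 180.0,
         'ms': 1.0, 's': 1e3, 'us': 0.001}

def parse_unit(string):
    # Take the token inside trailing parentheses, e.g. 'voxel size (um)' -> 'um';
    # otherwise treat the whole string as the unit token.
    if string.endswith(')'):
        unit = string[string.rfind('(') + 1:-1].strip().lower()
    else:
        unit = string.strip().lower()
    # Unknown units fall back to a factor of 1 (mirroring the warning branch above).
    return UNITS.get(unit, 1.0)
```

For example, `parse_unit('voxel size (um)')` yields `0.001`, so a value recorded in micrometres ends up stored in millimetres.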
def read_skyscan_thermalshifts(self, path = ''):
path = update_path(path,self)
# Try to find the log file in the selected path
fname = [x for x in os.listdir(path) if (os.path.isfile(os.path.join(path, x)) and os.path.splitext(os.path.join(path, x))[1] == '.csv')]
if len(fname) == 0:
self._parent.warning('XY shifts csv file not found in path: ' + path)
return
if len(fname) > 1:
self._parent.warning('Found several csv files. Currently using: ' + fname[0])
fname = os.path.join(path, fname[0])
else:
fname = os.path.join(path, fname[0])
with open(fname) as csvfile:
reader = csv.DictReader(csvfile, fieldnames=['slice', 'x','y'])
for row in reader:
# Find the first useful row
if row['slice'].replace('.','',1).isdigit():
break
shifts = [[float(row['x']), float(row['y'])]]
for row in reader:
shifts.append([float(row['x']), float(row['y'])])
self._parent.meta.geometry.add_thermal_shifts(numpy.array(shifts))
def save(self, path = '', fname='data', fmt = 'tiff', slc_range = None, window = None, digits = 4, dtype = None):
'''
Saves the data to tiff files
'''
from PIL import Image
if self._parent.data._data is not None:
# First check if digit is large enough, otherwise add a digit
im_nb = self._parent.data._data.shape[0]
if digits <= numpy.log10(im_nb):
digits = int(numpy.log10(im_nb)) + 1
path = update_path(path, self)
fname = os.path.join(path, fname)
if slc_range is None:
slc_range = range(0,self._parent.data._data.shape[0])
if dtype is None:
dtype = self._parent.data._data.dtype
maxi = numpy.max(self._parent.data._data)
mini = numpy.min(self._parent.data._data)
if window is not None:
maxi = numpy.min([maxi, window[1]])
mini = numpy.max([mini, window[0]])
for i in slc_range:
# Fix the file name
fname_tmp = fname
fname_tmp += '_'
fname_tmp += str(i).zfill(digits)
fname_tmp += '.' + fmt
# Fix the windowing and output type
slc = numpy.array(self._parent.data._data[i,:,:], dtype = numpy.float32)
if window is not None:
numpy.clip(a = slc, a_min = window[0], a_max = window[1], out = slc)
# Rescale if integer type
if not (numpy.issubdtype(dtype, numpy.floating)):
slc -= mini
if (maxi != mini):
slc /= (maxi - mini)
slc *= numpy.iinfo(dtype).max
# Save the image
im = Image.fromarray(numpy.asarray(slc, dtype=dtype))
im.save(fname_tmp)
def save_tiff(self, path = '', fname='data', axis = 0, digits = 4):
'''
Saves the data to tiff files
'''
from PIL import Image
if self._parent.data._data is not None:
# First check if digit is large enough, otherwise add a digit
im_nb = self._parent.data._data.shape[axis]
if digits <= numpy.log10(im_nb):
digits = int(numpy.log10(im_nb)) + 1
path = update_path(path, self)
fname = os.path.join(path, fname)
for i in range(0,self._parent.data._data.shape[axis]):
fname_tmp = fname
fname_tmp += '_'
fname_tmp += str(i).zfill(digits)
fname_tmp += '.tiff'
im = Image.fromarray(self._parent.data._data[i,:,:])
im.save(fname_tmp)
#misc.imsave(name = os.path.join(path, fname_tmp), arr = self._parent.data._data[i,:,:])
#dxchange.writer.write_tiff_stack(self._parent.data.get_data(),fname=os.path.join(path, fname), axis=axis,overwrite=True)
# **************************************************************
# DISPLAY class and subclasses
# **************************************************************
class display(subclass):
'''
This is a collection of display tools for the raw and reconstructed data
'''
def __init__(self, parent = []):
subclass.__init__(self, parent)
self._cmap = 'gray'
def _figure_maker_(self, fig_num):
'''
Make a new figure or use old one.
'''
if fig_num:
plt.figure(fig_num)
else:
plt.figure()
def slice(self, slice_num = None, dim_num = 0, fig_num = [], mirror = False, upsidedown = False):
'''
Display a 2D slice of the 3D volume.
'''
self._figure_maker_(fig_num)
if slice_num is None:
slice_num = self._parent.data.shape[0] // 2
img = extract_2d_array(dim_num, slice_num, self._parent.data.data)
if mirror: img = numpy.fliplr(img)
if upsidedown: img = numpy.flipud(img)
plt.imshow(img, cmap = self._cmap, origin='lower')
plt.colorbar()
plt.show()
def slice_movie(self, dim_num, fig_num = []):
'''
Play through 2D slices of the 3D volume along the given dimension.
'''
self._figure_maker_(fig_num)
slice_num = 0
img = extract_2d_array(dim_num, slice_num, self._parent.data.data)
fig = plt.imshow(img, cmap = self._cmap)
plt.colorbar()
plt.show()
for slice_num in range(1, self._parent.data.shape[dim_num]):
img = extract_2d_array(dim_num, slice_num, self._parent.data.data)
fig.set_data(img)
plt.show()
plt.title(slice_num)
plt.pause(0.0001)
def projection(self, dim_num, fig_num = []):
'''
Get a projection image of the 3d data.
'''
self._figure_maker_(fig_num)
img = self._parent.data.data.sum(dim_num)
plt.imshow(img, cmap = self._cmap)
plt.colorbar()
plt.show()
def max_projection(self, dim_num, fig_num = []):
'''
Get maximum projection image of the 3d data.
'''
self._figure_maker_(fig_num)
img = self._parent.data.data.max(dim_num)
plt.imshow(img, cmap = self._cmap)
plt.colorbar()
plt.show()
def min_projection(self, dim_num, fig_num = []):
'''
Get minimum projection image of the 3d data.
'''
self._figure_maker_(fig_num)
img = self._parent.data.data.min(dim_num)
plt.imshow(img, cmap = self._cmap)
plt.colorbar()
plt.show()
def volume_viewer(self, orientation = 'x_axes', min_max = []):
'''
Use mayavi to view the volume slice by slice
'''
data = self._parent.data.data.copy()
# Clip intensities if needed:
if numpy.size(min_max) == 2:
data[data < min_max[0]] = min_max[0]
data[data > min_max[1]] = min_max[1]
mlab.pipeline.image_plane_widget(mlab.pipeline.scalar_field(data),
plane_orientation=orientation,
slice_index=0, colormap='gray')
mlab.colorbar()
mlab.outline()
def render(self, min_max = []):
'''
Render volume using mayavi routines
'''
data = self._parent.data.data.copy()
# Clip intensities if needed:
if numpy.size(min_max) == 2:
data[data < min_max[0]] = min_max[0]
data[data > min_max[1]] = min_max[1]
vol = mlab.pipeline.volume(mlab.pipeline.scalar_field(numpy.fliplr(data)), vmin = 0.001, vmax = 0.01)
mlab.colorbar()
# Adjust colors:
ctf = ColorTransferFunction()
for ii in numpy.linspace(0, 1, 10):
ctf.add_hsv_point(ii * 0.01, 0.99 - ii, 1, 1)
ctf.range= [0, 1]
vol._volume_property.set_color(ctf)
vol._ctf = ctf
vol.update_ctf = True
mlab.outline()
# **************************************************************
# ANALYSE class and subclasses
# **************************************************************
from scipy.ndimage import measurements
class analyse(subclass):
'''
This is an analysis toolbox for the raw and reconstructed data
'''
def l2_norm(self):
return numpy.sum(self._parent.data.data ** 2)
def l1_norm(self):
return numpy.sum(numpy.abs(self._parent.data.data))
def mean(self):
return numpy.mean(self._parent.data.data)
def min(self):
return numpy.min(self._parent.data.data)
def max(self):
return numpy.max(self._parent.data.data)
def center_of_mass(self):
return measurements.center_of_mass(self._parent.data.data.max(1))
def histogram(self, nbin = 256, plot = True, log = False):
mi = self.min()
ma = self.max()
a, b = numpy.histogram(self._parent.data.data, bins = nbin, range = [mi, ma])
# Set bin values to the middle of the bin:
b = (b[0:-1] + b[1:]) / 2
if plot:
plt.figure()
if log:
plt.semilogy(b, a)
else:
plt.plot(b, a)
plt.show()
return a, b
def threshold(self, threshold = None):
'''
Apply simple segmentation: binarize the data at the given threshold.
Defaults to the mean intensity (an assumed default; the original body was empty).
'''
if threshold is None:
threshold = self.mean()
self._parent.data.data = numpy.array(self._parent.data.data > threshold, dtype = 'float32')
# **************************************************************
# PROCESS class and subclasses
# **************************************************************
from scipy import ndimage
import scipy.ndimage.interpolation as interp
class process(subclass):
'''
Various preprocessing routines
'''
def arbitrary_function(self, func):
'''
Apply an arbitrary function:
'''
print(func)
self._parent.data.data = func(self._parent.data.data)
# add a record to the history:
self._parent.meta.history.add_record('process.arbitrary_function', func.__name__)
self._parent.message('Arbitrary function applied.')
def pixel_calibration(self, kernel=5):
'''
Apply correction to miscalibrated pixels.
'''
# Compute mean image of intensity variations that are < 5x5 pixels
res = self._parent.data.data - ndimage.filters.median_filter(self._parent.data.data, [kernel, 1, kernel])
res = res.mean(1)
self._parent.data.data -= res.reshape((res.shape[0], 1, res.shape[1]))
self._parent.meta.history.add_record('Pixel calibration', 1)
self._parent.message('Pixel calibration correction applied.')
def medipix_quadrant_shift(self):
'''
Expand the middle line
'''
self._parent.data.data[:,:, 0:self._parent.data.shape[2]//2 - 2] = self._parent.data.data[:,:, 2:self._parent.data.shape[2]//2]
self._parent.data.data[:,:, self._parent.data.shape[2]//2 + 2:] = self._parent.data.data[:,:, self._parent.data.shape[2]//2:-2]
# Fill in two extra pixels:
for ii in range(-2,2):
closest_offset = -3 if (numpy.abs(-3-ii) < numpy.abs(2-ii)) else 2
self._parent.data.data[:,:, self._parent.data.shape[2]//2 - ii] = self._parent.data.data[:,:, self._parent.data.shape[2]//2 + closest_offset]
# Then in columns
self._parent.data.data[0:self._parent.data.shape[0]//2 - 2,:,:] = self._parent.data.data[2:self._parent.data.shape[0]//2,:,:]
self._parent.data.data[self._parent.data.shape[0]//2 + 2:, :, :] = self._parent.data.data[self._parent.data.shape[0]//2:-2,:,:]
# Fill in two extra pixels:
for jj in range(-2,2):
closest_offset = -3 if (numpy.abs(-3-jj) < numpy.abs(2-jj)) else 2
self._parent.data.data[self._parent.data.shape[0]//2 - jj,:,:] = self._parent.data.data[self._parent.data.shape[0]//2 + closest_offset,:,:]
self._parent.meta.history.add_record('Quadrant shift', 1)
self._parent.message('Medipix quadrant shift applied.')
def flat_field(self, kind=''):
'''
Apply flat field correction.
'''
if (str.lower(kind) == 'skyscan'):
if self._parent.meta.geometry.roi_fov:
self._parent.message('Object is larger than the FOV!')
air_values = numpy.ones_like(self._parent.data.data[:,:,0]) * 2**16 - 1
else:
air_values = numpy.max(self._parent.data.data, axis = 2)
air_values = air_values.reshape((air_values.shape[0],air_values.shape[1],1))
self._parent.data.data = self._parent.data.data / air_values
# add a record to the history:
self._parent.meta.history.add_record('process.flat_field', 1)
self._parent.message('Skyscan flat field correction applied.')
else:
if numpy.min(self._parent.data._ref) <= 0:
self._parent.warning('Flat field reference image contains zero (or negative) values! Will replace those with little tiny numbers.')
tiny = self._parent.data._ref[self._parent.data._ref > 0].min()
self._parent.data._ref[self._parent.data._ref <= 0] = tiny
# How many projections:
n_proj = self._parent.data.shape[1]
if not self._parent.data._dark is None:
for ii in range(0, n_proj):
self._parent.data.data[:, ii, :] = (self._parent.data.data[:, ii, :] - self._parent.data._dark) / (self._parent.data._ref - self._parent.data._dark)
else:
for ii in range(0, n_proj):
self._parent.data.data[:, ii, :] = (self._parent.data.data[:, ii, :]) / (self._parent.data._ref)
# add a record to the history:
self._parent.meta.history.add_record('process.flat_field', 1)
self._parent.message('Flat field correction applied.')
def short_scan_weights(self, fan_angle):
'''
Apply parker weights correction.
'''
def _Parker_window(theta, gamma, fan):
weight = 0.0
if (0 <= theta < 2*(gamma+fan)):
weight = numpy.sin((numpy.pi/4)*(theta/(gamma+fan)))**2
elif (2*(gamma+fan) <= theta < numpy.pi + 2*gamma):
weight = 1.0
elif (numpy.pi + 2*gamma <= theta < numpy.pi + 2*fan):
weight = numpy.sin((numpy.pi/4)*((numpy.pi + 2*fan - theta)/(gamma+fan)))**2
else:
weight = 0.0
return weight
weights = numpy.zeros_like(self._parent.data.data, dtype=numpy.float32)
sdd = self._parent.meta.geometry.src2det
for u in range(0,weights.shape[2]):
weights[:,:,u] = u
weights = weights - weights.shape[2]/2
weights = self._parent.meta.geometry.det_pixel[1]*weights
weights = numpy.arctan(weights/sdd)
theta = self._parent.meta.geometry.thetas
for ang in range(0,theta.shape[0]):
tet = theta[ang]
for u in range(0, weights.shape[2]):
weights[:,ang,u] = _Parker_window(theta = tet, gamma = weights[0,ang,u], fan=fan_angle)
self._parent.data.data *= weights
# add a record to the history:
self._parent.meta.history.add_record('process.short_scan', 1)
self._parent.message('Short scan correction applied.')
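The nested `_Parker_window` above implements the classical Parker short-scan weighting. A self-contained sketch of the same window, for reference (the standalone name `parker_window` is illustrative):

```python
import numpy

def parker_window(theta, gamma, fan):
    # Parker short-scan weight for projection angle `theta`, ray angle `gamma`
    # and fan half-angle `fan` (all in radians), mirroring _Parker_window above.
    if 0 <= theta < 2 * (gamma + fan):
        return numpy.sin((numpy.pi / 4) * (theta / (gamma + fan))) ** 2
    elif 2 * (gamma + fan) <= theta < numpy.pi + 2 * gamma:
        return 1.0
    elif numpy.pi + 2 * gamma <= theta < numpy.pi + 2 * fan:
        return numpy.sin((numpy.pi / 4) * ((numpy.pi + 2 * fan - theta) / (gamma + fan))) ** 2
    return 0.0
```

The weight ramps up smoothly from 0, is exactly 1 over the fully sampled angular range, and ramps back down, so doubly measured rays are not counted twice.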
def log(self, air_intensity = 1.0, lower_bound = -10, upper_bound = numpy.log(256)):
'''
Apply -log(x) to the sinogram
'''
# Check if the log was already applied:
#self._parent._check_double_hist('process.log(upper_bound)')
# If not, apply!
if (air_intensity != 1.0):
self._parent.data.data /= air_intensity
# In-place negative logarithm
numpy.log(self._parent.data.data, out = self._parent.data.data)
numpy.negative(self._parent.data.data, out = self._parent.data.data)
self._parent.data.data = numpy.float32(self._parent.data.data)
# Apply a bound to large values:
numpy.clip(self._parent.data.data, a_min = lower_bound, a_max = upper_bound, out = self._parent.data.data)
self._parent.message('Logarithm is applied.')
self._parent.meta.history.add_record('process.log(upper_bound)', upper_bound)
def salt_pepper(self, kernel = 3):
'''
Gets rid of nasty speckles
'''
# Make a smooth version of the data and look for outliers:
smooth = ndimage.filters.median_filter(self._parent.data.data, [kernel, 1, kernel])
mask = self._parent.data.data / smooth
mask = (numpy.abs(mask) > 1.5) | (numpy.abs(mask) < 0.75)
self._parent.data.data[mask] = smooth[mask]
self._parent.message('Salt and pepper filter is applied.')
self._parent.meta.history.add_record('process.salt_pepper(kernel)', kernel)
def simple_tilt(self, tilt):
'''
Tilts the sinogram
'''
for ii in range(0, self._parent.data.shape[1]):
self._parent.data.data[:, ii, :] = interp.rotate(numpy.squeeze(self._parent.data.data[:, ii, :]), -tilt, reshape=False)
self._parent.message('Tilt is applied.')
def bin_data(self, bin_theta = True):
'''
Bin data with a factor of two
'''
self._parent.data._data = (self._parent.data._data[:, :, 0:-1:2] + self._parent.data._data[:, :, 1::2]) / 2
self._parent.data._data = (self._parent.data._data[0:-1:2, :, :] + self._parent.data._data[1::2, :, :]) / 2
self._parent.meta.geometry.det_pixel *= 2
# Bin angles with a factor of two:
if bin_theta:
self._parent.data._data = (self._parent.data._data[:,0:-1:2,:] + self._parent.data._data[:,1::2,:]) / 2
self._parent.meta.geometry.thetas = numpy.array(self._parent.meta.geometry.thetas[0:-1:2] + self._parent.meta.geometry.thetas[1::2]) / 2
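`bin_data` averages neighbouring detector columns and rows (and optionally angles) with a stride-2 slicing trick. A minimal sketch on a toy array (the helper name `bin2x` is illustrative, not part of the toolbox):

```python
import numpy

def bin2x(data):
    # Average neighbouring elements along the last axis, then the first axis,
    # exactly as bin_data does with the projection stack.
    data = (data[:, :, 0:-1:2] + data[:, :, 1::2]) / 2
    data = (data[0:-1:2, :, :] + data[1::2, :, :]) / 2
    return data

stack = numpy.arange(4 * 3 * 4, dtype=float).reshape(4, 3, 4)
binned = bin2x(stack)  # shape (2, 3, 2)
```

Note that the middle (angle) axis is untouched here; binning it is the optional `bin_theta` step.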
def crop(self, top_left, bottom_right):
'''
Crop the sinogram
'''
if bottom_right[1] > 0:
self._parent.data._data = self._parent.data._data[top_left[1]:-bottom_right[1], :, :]
else:
self._parent.data._data = self._parent.data._data[top_left[1]:, :, :]
if bottom_right[0] > 0:
self._parent.data._data = self._parent.data._data[:, :, top_left[0]:-bottom_right[0]]
else:
self._parent.data._data = self._parent.data._data[:, :, top_left[0]:]
self._parent.data._data = numpy.ascontiguousarray(self._parent.data._data, dtype=numpy.float32)
gc.collect()
self._parent.meta.history.add_record('process.crop(top_left, bottom_right)', [top_left, bottom_right])
self._parent.message('Sinogram cropped.')
def crop_centered(self, center, dimensions):
'''
Crop the sinogram
'''
self._parent.data._data = self._parent.data._data[center[0] - dimensions[0]//2:center[0] + dimensions[0]//2, :, center[1] - dimensions[1]//2:center[1] + dimensions[1]//2]
self._parent.data._data = numpy.ascontiguousarray(self._parent.data._data, dtype=numpy.float32)
gc.collect()
self._parent.meta.history.add_record('process.crop_centered(center, dimensions)', [center, dimensions])
self._parent.message('Sinogram cropped.')
# **************************************************************
# RECONSTRUCTION class and subclasses
# **************************************************************
import astra
from scipy import interpolate
import math
import odl
from scipy import optimize
from scipy.optimize import minimize_scalar
class reconstruct(subclass):
'''
Reconstruction algorithms: FDK, SIRT, KL, FISTA etc.
'''
# Some precalculated masks for ASTRA:
_projection_mask = None
_reconstruction_mask = None
_projection_filter = None
# Display while computing:
_display_callback = False
def __init__(self, proj = []):
subclass.__init__(self, proj)
self.vol_geom = None
self.proj_geom = None
def _modifier_l2cost(self, value, modifier = 'rotation_axis'):
# Compute an image from the shifted data:
if modifier == 'rotation_axis':
self._parent.meta.geometry.rotation_axis_shift(value)
elif modifier in self._parent.meta.geometry.modifiers.keys():
self._parent.meta.geometry.modifiers[modifier] = value
else:
self._parent.error('Modifier not found!')
vol = self.FDK()
return -vol.analyse.l2_norm()
def _parabolic_min(self, values, index, space):
'''
Use parabolic interpolation to find the minimum:
'''
if (index > 0) & (index < (values.size - 1)):
# Compute parabolae:
x = space[index-1:index+2]
y = values[index-1:index+2]
denom = (x[0]-x[1]) * (x[0]-x[2]) * (x[1]-x[2])
A = (x[2] * (y[1]-y[0]) + x[1] * (y[0]-y[2]) + x[0] * (y[2]-y[1])) / denom
B = (x[2]*x[2] * (y[0]-y[1]) + x[1]*x[1] * (y[2]-y[0]) + x[0]*x[0] * (y[1]-y[2])) / denom
x0 = -B / 2 / A
else:
x0 = space[index]
return x0
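`_parabolic_min` refines a grid minimum by fitting a parabola through the three samples around the minimum index and returning the abscissa of its vertex. A standalone sketch (the name `parabolic_min` is illustrative):

```python
import numpy

def parabolic_min(values, index, space):
    # Fit a parabola through the three (x, y) samples around `index` and
    # return the vertex abscissa; fall back to the grid point at the edges.
    if 0 < index < values.size - 1:
        x = space[index - 1:index + 2]
        y = values[index - 1:index + 2]
        denom = (x[0] - x[1]) * (x[0] - x[2]) * (x[1] - x[2])
        A = (x[2] * (y[1] - y[0]) + x[1] * (y[0] - y[2]) + x[0] * (y[2] - y[1])) / denom
        B = (x[2] ** 2 * (y[0] - y[1]) + x[1] ** 2 * (y[2] - y[0]) + x[0] ** 2 * (y[1] - y[2])) / denom
        return -B / (2 * A)
    return space[index]
```

For exactly quadratic data the true minimum is recovered regardless of the grid spacing, which is why a coarse 5-point full search (as used below) can still localize the optimum accurately.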
def _full_search(self, func, bounds, maxiter, args):
'''
Performs full search of the minimum inside the given bounds.
'''
func_values = numpy.zeros(maxiter)
space = numpy.linspace(bounds[0], bounds[1], maxiter)
ii = 0
for val in space:
func_values[ii] = func(val, modifier = args)
ii += 1
# print('***********')
# print(func_values)
min_index = func_values.argmin()
print('Found minimum', space[min_index])
x0 = self._parabolic_min(func_values, min_index, space)
print('Parabolic minimum', x0)
return x0
def optimize_geometry_modifier(self, modifier = 'det_rot', guess = 0, subscale = 8, full_search = False):
'''
Maximize the sharpness of the reconstruction by optimizing one of the geometry modifiers:
'''
self._parent.message('Optimization is started...')
self._parent.message('Initial guess is %01f' % guess)
# Downscale the data:
while subscale >= 1:
self._parent.message('Subscale factor %01d' % subscale)
self._parent._data_sampling = subscale
# Create a ramp filter so FDK will calculate a gradient:
self._initialize_ramp_filter(power = 2)
if full_search:
guess = self._full_search(self._modifier_l2cost, bounds = [guess / subscale - 2, guess / subscale + 2], maxiter = 5, args = modifier) * subscale
else:
opt = optimize.minimize(self._modifier_l2cost, x0 = guess / subscale, bounds = ((guess / subscale - 2, guess / subscale + 2),), method='COBYLA',
options = {'maxiter': 15, 'disp': False}, args = modifier)
guess = opt.x * subscale
self._parent.message('Current guess is %01f' % guess)
vol = self.FDK()
vol.display.slice()
subscale = subscale // 2
self._parent._data_sampling = 1
self._initialize_ramp_filter(power = 1)
return guess
def optimize_rotation_center(self, guess = 0, subscale = 8, center_of_mass = True, full_search = False):
'''
Find the rotation center using a subscaling factor.
If you don't use center_of_mass or provide an initial guess, make sure the subscale factor is large enough.
'''
if center_of_mass:
guess = self._parent.analyse.center_of_mass()[1] - self._parent.data.shape[2] // 2
self._parent.message('Searching for the center of rotation...')
self._parent.message('Initial guess is %01f' % guess)
# Downscale the data:
while subscale >= 1:
self._parent.message('Subscale factor %01d' % subscale)
self._parent._data_sampling = subscale
# Create a ramp filter so FDK will calculate a gradient:
self._initialize_ramp_filter(power = 2)
if full_search:
guess = self._full_search(self._modifier_l2cost, bounds = [guess / subscale - 2, guess / subscale + 2], maxiter = 5, args = 'rotation_axis') * subscale
else:
opt = optimize.minimize(self._modifier_l2cost, x0 = guess / subscale, bounds = ((guess / subscale - 2, guess / subscale + 2),), method='COBYLA',
options = {'maxiter': 15, 'disp': False}, args = 'rotation_axis')
guess = opt.x * subscale
self._parent.meta.geometry.rotation_axis_shift(guess / subscale)
self._parent.message('Current guess is %01f' % guess)
vol = self.FDK()
vol.display.slice()
subscale = subscale // 2
self._parent._data_sampling = 1
self._initialize_ramp_filter(power = 1)
return guess
def initialize_projection_mask(self, weight_poisson = False, weight_histogram = None, pixel_mask = None):
'''
Generate weights proportional to the square root of intensity that map onto the projection data
weight_poisson - weight rays according to the square root of the normalized intensity
weight_histogram - weight intensities according to a predefined histogram, defined as (x, y),
where x is the intensity value and y is the corresponding weight
pixel_mask - assign different weights depending on the pixel location
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
# Create a volume containing only ones for forward projection weights
sz = self._parent.data.shape
self._projection_mask = numpy.ones_like(prnt.data.data)
# if weight_poisson: introduce weights based on the value of the intensity image:
if weight_poisson:
self._projection_mask = self._projection_mask * numpy.sqrt(numpy.exp(-prnt.data.data))
# if weight_histogram is provided:
if not weight_histogram is None:
x = weight_histogram[0]
y = weight_histogram[1]
f = interpolate.interp1d(x, y, kind = 'linear', fill_value = 'extrapolate')
self._projection_mask = self._projection_mask * f(numpy.exp(-prnt.data.data))
# apply pixel mask to every projection if it is provided:
if pixel_mask is not None:
for ii in range(0, sz[1]):
self._projection_mask[:, ii, :] = self._projection_mask[:, ii, :] * pixel_mask
prnt.message('Projection mask is initialized')
def initialize_reconstruction_mask(self):
'''
Make volume mask to avoid projecting errors into the corners of the volume
'''
prnt = self._parent
sz = prnt.data.shape[2]
# compute radius of the defined cylinder
det_width = prnt.data.shape[2] / 2
src2obj = prnt.meta.geometry.src2obj
total = prnt.meta.geometry.src2det
pixel = prnt.meta.geometry.det_pixel
# Compute the smallest radius and cut the corners:
radius = 2 * det_width * src2obj / numpy.sqrt(total**2 + (det_width*pixel[0])**2) - 3
# Create 2D mask:
yy,xx = numpy.ogrid[-sz//2:sz//2, -sz//2:sz//2]
self._reconstruction_mask = numpy.array(xx**2 + yy**2 < radius**2, dtype = 'float32')
# Replicate to 3D:
self._reconstruction_mask = numpy.ascontiguousarray((numpy.tile(self._reconstruction_mask[None, :,:], [prnt.data.shape[0], 1, 1])))
prnt.message('Reconstruction mask is initialized')
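The reconstruction mask above is a 2D disc replicated along the slice axis. A minimal sketch of the same `ogrid`/`tile` construction (the helper name `cylinder_mask` is illustrative):

```python
import numpy

def cylinder_mask(n_slices, sz, radius):
    # Build a 2D disc of the given radius on an sz x sz grid, then tile it
    # along the slice axis -- the same pattern used above.
    yy, xx = numpy.ogrid[-sz // 2:sz // 2, -sz // 2:sz // 2]
    disc = numpy.array(xx ** 2 + yy ** 2 < radius ** 2, dtype='float32')
    return numpy.ascontiguousarray(numpy.tile(disc[None, :, :], [n_slices, 1, 1]))
```

Using `ogrid` keeps the distance computation broadcast over a pair of 1D axes instead of allocating a full 2D coordinate grid.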
def _initialize_odl(self):
'''
Initialize da RayTransform!
'''
prnt = self._parent
sz = self._parent.data.shape
geom = prnt.meta.geometry
# Discrete reconstruction space: discretized functions on the rectangle.
dim = numpy.array([sz[0], sz[2], sz[2]])
space = odl.uniform_discr(min_pt = -dim / 2 * geom['img_pixel'], max_pt = dim / 2 * geom['img_pixel'], shape=dim, dtype='float32')
# Angles: uniformly spaced, n = 1000, min = 0, max = pi
angle_partition = odl.uniform_partition(geom['theta_range'][0], geom['theta_range'][1], geom['theta_n'])
# Detector: uniformly sampled, n = 500, min = -30, max = 30
dim = numpy.array([sz[0], sz[2]])
detector_partition = odl.uniform_partition(-dim / 2 * geom['det_pixel'], dim / 2 * geom['det_pixel'], dim)
# Make a parallel beam geometry with flat detector
geometry = odl.tomo.CircularConeFlatGeometry(angle_partition, detector_partition, src_radius=geom['src2obj'], det_radius=geom['det2obj'])
# Ray transform (= forward projection). We use the ASTRA CUDA backend.
ray_trafo = odl.tomo.RayTransform(space, geometry, impl='astra_cuda')
return ray_trafo, space
def odl_TV(self, iterations = 10, lam = 0.01, min_l1_norm = False):
'''
Total-variation regularized reconstruction using the ODL Chambolle-Pock solver.
'''
ray_trafo, space = self._initialize_odl()
# Initialize gradient operator
gradient = odl.Gradient(space, method='forward')
# Column vector of two operators
op = odl.BroadcastOperator(ray_trafo, gradient)
# Do not use the g functional, set it to zero.
g = odl.solvers.ZeroFunctional(op.domain)
# Chambolle-Pock with TV
# Isotropic TV-regularization i.e. the l1-norm
# l2-squared data matching unless min_l1_norm == True
if min_l1_norm:
l2_norm = (odl.solvers.L1Norm(ray_trafo.range)).translated(numpy.transpose(self._parent.data.data, axes = [1, 2, 0]))
else:
l2_norm = (odl.solvers.L2NormSquared(ray_trafo.range)).translated(numpy.transpose(self._parent.data.data, axes = [1, 2, 0]))
if self._projection_mask is not None:
l2_norm = l2_norm * ray_trafo.range.element(self._projection_mask)
l1_norm = lam * odl.solvers.L1Norm(gradient.range)
# Combine functionals, order must correspond to the operator K
f = odl.solvers.SeparableSum(l2_norm, l1_norm)
# Estimated operator norm, add 10 percent to ensure ||K||_2^2 * sigma * tau < 1
op_norm = 1.1 * odl.power_method_opnorm(op)
tau = 1.0 / op_norm # Step size for the primal variable
sigma = 1.0 / op_norm # Step size for the dual variable
gamma = 0.2
# Optionally pass callback to the solver to display intermediate results
if self._display_callback:
callback = (odl.solvers.CallbackShow())
else:
callback = None
# Choose a starting point
x = op.domain.zero()
# Run the algorithm
odl.solvers.chambolle_pock_solver(
x, f, g, op, tau=tau, sigma=sigma, niter=iterations, gamma=gamma,
callback=callback)
return volume(numpy.transpose(x.asarray(), axes = [2, 0, 1])[:,::-1,:])
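The step sizes above come from the Chambolle-Pock convergence condition ||K||² σ τ < 1. A toy check, with a random matrix standing in for the forward operator `K` (power iteration for the spectral norm, then the same 10% margin and τ = σ = 1/||K|| choice as in the code):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10))   # stand-in for the forward operator

# Power iteration on K^T K converges to the dominant singular direction
x = rng.standard_normal(10)
for _ in range(100):
    x = K.T @ (K @ x)
    x /= np.linalg.norm(x)
true_norm = np.linalg.norm(K @ x)   # estimate of ||K||_2

op_norm = 1.1 * true_norm           # 10% safety margin
tau = sigma = 1.0 / op_norm         # primal / dual step sizes
print(true_norm ** 2 * sigma * tau) # 1/1.21 ~ 0.826 < 1: condition holds
```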
def odl_FBP(self):
'''
Filtered backprojection using the ODL FBP operator (Shepp-Logan filter).
'''
import odl
ray_trafo, space = self._initialize_odl()
# FBP:
fbp = odl.tomo.fbp_op(ray_trafo, filter_type='Shepp-Logan', frequency_scaling=0.8)
# Run the algorithm
x = fbp(numpy.transpose(self._parent.data.data, axes = [1, 2, 0]))
return volume(numpy.transpose(x.asarray(), axes = [2, 0, 1])[:,::-1,:])
def odl_EM(self, iterations = 10):
'''
Expectation-maximization (MLEM) reconstruction using ODL.
'''
import odl
ray_trafo, space = self._initialize_odl()
# Optionally pass callback to the solver to display intermediate results
if self._display_callback:
callback = (odl.solvers.CallbackShow())
else:
callback = None
# Choose a starting point
x = ray_trafo.domain.one()
# FBP:
odl.solvers.mlem(ray_trafo, x, numpy.transpose(self._parent.data.data, axes = [1, 2, 0]),
niter = iterations, callback = callback)
return volume(numpy.transpose(x.asarray(), axes = [2, 0, 1])[:,::-1,:])
def FDK(self, short_scan = None, min_constraint = None):
'''
Feldkamp-Davis-Kress (FDK) filtered backprojection using ASTRA (FDK_CUDA).
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
# Run the reconstruction:
#epsilon = numpy.pi / 180.0 # 1 degree - I deleted a part of code here by accident...
theta = self._parent.meta.geometry.thetas
if short_scan is None:
short_scan = (theta.max() - theta.min()) < (numpy.pi * 1.99)
vol = self._backproject(prnt.data.data, algorithm='FDK_CUDA', short_scan = short_scan)
# Reconstruction mask is applied only in native ASTRA SIRT. Apply it here:
if self._reconstruction_mask is not None:
vol = self._reconstruction_mask * vol
vol = volume(vol)
#vol.history['FDK'] = 'generated in '
self._parent.meta.history.add_record('set data.data', [])
self._parent.message('FDK reconstruction performed.')
return vol
# No need to make a history record - sinogram is not changed.
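`short_scan` above is inferred from the angular range: FDK needs Parker-style short-scan weighting when the scan covers less than (almost) a full rotation. The test in isolation — the 1.99π tolerance mirrors the code above:

```python
import numpy as np

def needs_short_scan(thetas):
    # Short-scan weighting is needed below (almost) a full 2*pi rotation
    return (thetas.max() - thetas.min()) < (np.pi * 1.99)

half = np.linspace(0, np.pi, 100)        # 180-degree scan
full = np.linspace(0, 2 * np.pi, 100)    # 360-degree scan
print(needs_short_scan(half), needs_short_scan(full))   # True False
```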
def SIRT(self, iterations = 10, min_constraint = None):
'''
SIRT reconstruction using ASTRA (SIRT3D_CUDA).
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
# Run the reconstruction:
vol = self._backproject(prnt.data.data, algorithm = 'SIRT3D_CUDA', iterations = iterations, min_constraint= min_constraint)
text = 'SIRT reconstruction performed with %d iterations.' % iterations
self._parent.message(text)
return volume(vol)
# No need to make a history record - sinogram is not changed.
def SIRT_CPU(self, proj_type = 'cuda3d', iterations = 10, relaxation = 1.0, min_constraint = None, max_constraint = None):
'''
SIRT reconstruction via the ASTRA SIRT plugin (projector type is configurable).
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
out = numpy.zeros(astra.functions.geom_size(self.vol_geom), dtype=numpy.float32)
cfg = {}
proj_id = 0
rec_id = 0
sino_id = 0
sirt = astra.plugins.SIRTPlugin()
try:
proj_id = astra.create_projector(proj_type = proj_type, proj_geom = self.proj_geom, vol_geom = self.vol_geom)
rec_id = astra.data3d.link('-vol', self.vol_geom, out)
sino_id = astra.data3d.link('-sino', self.proj_geom, prnt.data._data)
cfg['ProjectorId'] = proj_id
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = sino_id
sirt.initialize(cfg = cfg, Relaxation = relaxation, MinConstraint = min_constraint, MaxConstraint = max_constraint)
sirt.run(its = iterations)
finally:
astra.projector.delete(proj_id)
astra.data3d.delete([rec_id, sino_id])
text = 'SIRT-CPU reconstruction performed with %d iterations.' % iterations
self._parent.message(text)
return volume(out)
# No need to make a history record - sinogram is not changed.
def SIRT_custom(self, iterations = 10, min_constraint = None):
'''
Hand-rolled SIRT iteration built on ASTRA forward- and backprojection.
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
# Create a volume containing only ones for forward projection weights
sz = self._parent.data.shape
theta = self._parent.meta.geometry.thetas
# Initialize weights:
vol_ones = numpy.ones((sz[0], sz[2], sz[2]), dtype=numpy.float32)
vol = numpy.zeros_like(vol_ones, dtype=numpy.float32)
weights = self._forwardproject(vol_ones)
weights = 1.0 / (weights + (weights == 0))
bwd_weights = 1.0 / (theta.shape[0])
for ii_iter in range(iterations):
fwd_proj_vols = self._forwardproject(vol)
residual = (prnt.data.data - fwd_proj_vols) * weights
if self._projection_mask is not None:
residual *= self._projection_mask
vol += bwd_weights * self._backproject(residual, algorithm='BP3D_CUDA')
if min_constraint is not None:
vol[vol < min_constraint] = min_constraint
return volume(vol)
# No need to make a history record - sinogram is not changed.
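The weighting scheme above — forward weights from a projection of a volume of ones, back weights 1/#angles — is the core of SIRT. A toy version with a small matrix standing in for the projector; the system and iteration count are illustration choices:

```python
import numpy as np

# Toy system A x = b, with A playing the role of the forward projector
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 3.0, 2.0])

# SIRT forward weights: reciprocal row sums (forward projection of ones),
# guarding against division by zero exactly as the code above does
row = A @ np.ones(2)
w_fwd = 1.0 / (row + (row == 0))
w_bwd = 1.0 / A.shape[0]          # crude back-projection weight (1 / #rows)

x = np.zeros(2)
for _ in range(200):
    residual = (b - A @ x) * w_fwd
    x += w_bwd * (A.T @ residual)
print(x)                          # converges to the exact solution [1, 2]
```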
def CPLS(self, iterations = 10, min_constraint = None):
'''
Chambolle-Pock Least Squares
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
# Create a volume containing only ones for forward projection weights
sz = self._parent.data.shape
theta = self._parent.meta.geometry.thetas
vol_ones = numpy.ones((sz[0], sz[2], sz[2]), dtype=numpy.float32)
vol = numpy.zeros_like(vol_ones, dtype=numpy.float32)
sigma = self._forwardproject(vol_ones)
sigma = 1.0 / (sigma + (sigma == 0))
sigma_1 = 1.0 / (1.0 + sigma)
tau = 1.0 / theta.shape[0]
p = numpy.zeros_like(prnt.data.data)
ehn_sol = vol.copy()
for ii_iter in range(iterations):
p = (p + prnt.data.data - self._forwardproject(ehn_sol) * sigma) * sigma_1
old_vol = vol.copy()
vol += self._backproject(p, algorithm='BP3D_CUDA', min_constraint=min_constraint) * tau
vol *= (vol > 0)
ehn_sol = vol + (vol - old_vol)
gc.collect()
return volume(vol)
# No need to make a history record - sinogram is not changed.
def CGLS(self, iterations = 10, min_constraint = None):
'''
CGLS reconstruction using ASTRA (CGLS3D_CUDA).
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
# Run the reconstruction:
vol = self._backproject(prnt.data.data, algorithm = 'CGLS3D_CUDA', iterations = iterations, min_constraint=min_constraint)
return volume(vol)
# No need to make a history record - sinogram is not changed.
def CGLS_CPU(self, proj_type = 'cuda3d', iterations = 10):
'''
CGLS reconstruction via the ASTRA CGLS plugin (projector type is configurable).
'''
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
out = numpy.zeros(astra.functions.geom_size(self.vol_geom), dtype=numpy.float32)
cfg = {}
proj_id = 0
rec_id = 0
sino_id = 0
cgls = astra.plugins.CGLSPlugin()
try:
proj_id = astra.create_projector(proj_type = proj_type, proj_geom = self.proj_geom, vol_geom = self.vol_geom)
rec_id = astra.data3d.link('-vol', self.vol_geom, out)
sino_id = astra.data3d.link('-sino', self.proj_geom, prnt.data._data)
cfg['ProjectorId'] = proj_id
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = sino_id
cgls.initialize(cfg)
cgls.run(its = iterations)
finally:
astra.projector.delete(proj_id)
astra.data3d.delete([rec_id, sino_id])
text = 'CGLS-CPU reconstruction performed with %d iterations.' % iterations
self._parent.message(text)
return volume(out)
# No need to make a history record - sinogram is not changed.
def diagnose_angle_coverage(self):  # the original `def` line is missing; name reconstructed from the docstring
'''
This routine produces a single slice with a single ray projected into it from each projection angle.
Can be used as a simple diagnostics for angle coverage.
'''
prnt = self._parent
sz = prnt.data.shape
# Make a synthetic sinogram:
sinogram = numpy.zeros((1, sz[1], sz[2]), dtype = numpy.float32)
# For compatibility purposes make sure that the result is 3D:
sinogram = numpy.ascontiguousarray(sinogram)
# Initialize ASTRA:
sz = numpy.array(prnt.data.shape)
theta = prnt.meta.geometry.thetas
# Synthetic sinogram contains values of thetas at the central pixel
ii = 0
for theta_i in theta:
sinogram[:, ii, sz[2]//2] = theta_i
ii += 1
self._initialize_astra()
# Run the reconstruction:
epsilon = self._parse_unit('deg') # 1 degree
short_scan = numpy.abs(theta[-1] - 2*numpy.pi) > epsilon
vol = self._backproject(sinogram, algorithm='FDK_CUDA', short_scan=short_scan)  # backproject the synthetic sinogram, not the measured data
return volume(vol)
# No need to make a history record - sinogram is not changed.
def _apply_modifiers(self):
'''
Apply arbitrary geometrical modifiers to the ASTRA projection geometry vector
'''
if (self.proj_geom['type'] == 'cone'):
self.proj_geom = astra.functions.geom_2vec(self.proj_geom)
vectors = self.proj_geom['Vectors']
for ii in range(0, vectors.shape[0]):
# Define vectors:
src_vect = vectors[ii, 0:3]
det_vect = vectors[ii, 3:6]
det_axis_hrz = vectors[ii, 6:9]
det_axis_vrt = vectors[ii, 9:12]
#Precalculate vector perpendicular to the detector plane:
det_normal = numpy.cross(det_axis_hrz, det_axis_vrt)
det_normal = det_normal / numpy.sqrt(numpy.dot(det_normal, det_normal))
geom = self._parent.meta.geometry
# Translations relative to the detector plane:
#Detector shift (V):
det_vect += geom.get_modifier('det_vrt', ii) * det_axis_vrt
#Detector shift (H):
det_vect += geom.get_modifier('det_hrz', ii) * det_axis_hrz
#Detector shift (M):
det_vect += geom.get_modifier('det_mag', ii) * det_normal
#Source shift (V):
src_vect += geom.get_modifier('src_vrt', ii) * det_axis_vrt
#self.vol_geom['option']['WindowMinZ'] = -self.vol_geom['GridSliceCount'] / 2.0 + src_vect[2]*(geom.magnification - 1)
#self.vol_geom['option']['WindowMaxZ'] = self.vol_geom['GridSliceCount'] / 2.0 + src_vect[2]*(geom.magnification - 1)
#Source shift (H):
src_vect += geom.get_modifier('src_hrz', ii) * det_axis_hrz
#Source shift (M):
src_vect += geom.get_modifier('det_mag', ii) * det_normal
# Rotation relative to the detector plane:
# Compute rotation matrix
if geom.get_modifier('det_rot', ii) != 0:
T = transforms3d.axangles.axangle2mat(det_normal, geom.get_modifier('det_rot', ii))
det_axis_hrz[:] = numpy.dot(T.T, det_axis_hrz)
det_axis_vrt[:] = numpy.dot(T, det_axis_vrt)
# Global transformation:
# Rotation matrix based on Euler angles:
R = euler.euler2mat(geom.get_modifier('vol_x_rot'), geom.get_modifier('vol_y_rot'), geom.get_modifier('vol_z_rot'), 'syxz')
# Apply transformation:
det_axis_hrz[:] = numpy.dot(R, det_axis_hrz)
det_axis_vrt[:] = numpy.dot(R, det_axis_vrt)
src_vect[:] = numpy.dot(R, src_vect)
det_vect[:] = numpy.dot(R, det_vect)
# Add translation:
T = numpy.array([geom.get_modifier('vol_x_tra'), geom.get_modifier('vol_y_tra'), geom.get_modifier('vol_z_tra')])
src_vect[:] += T
det_vect[:] += T
# Modifiers applied... Extend the volume if needed
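`transforms3d.axangles.axangle2mat` above builds a rotation matrix from the detector normal and the tilt angle. The same matrix can be written directly with Rodrigues' formula — a pure-NumPy equivalent; the vectors below are illustrative:

```python
import numpy as np

def axangle2mat(axis, angle):
    # Rodrigues' rotation formula (equivalent to transforms3d.axangles.axangle2mat)
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

det_normal = np.array([0.0, 0.0, 1.0])     # detector plane normal
det_axis_hrz = np.array([1.0, 0.0, 0.0])   # horizontal detector axis
T = axangle2mat(det_normal, np.pi / 2)
print(T @ det_axis_hrz)                    # ~[0, 1, 0]: rotated 90 deg in-plane
```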
def _initialize_astra(self, sz = None, det_pixel = None,
det2obj = None, src2obj = None, theta = None, vec_geom = False):
if sz is None: sz = self._parent.data.shape
if det_pixel is None: det_pixel = self._parent.meta.geometry.det_pixel
if det2obj is None: det2obj = self._parent.meta.geometry.det2obj
if src2obj is None: src2obj = self._parent.meta.geometry.src2obj
if theta is None: theta = self._parent.meta.geometry.thetas
# Initialize ASTRA (3D):
det_count_x = sz[2]
det_count_z = sz[0]
# The volume extent matches the detector count (could be enlarged to include the object's corners):
vol_count_x = sz[2]
vol_count_z = sz[0]
M = self._parent.meta.geometry.magnification
self.vol_geom = astra.create_vol_geom(vol_count_x, vol_count_x, vol_count_z)
self.proj_geom = astra.create_proj_geom('cone', M, M, det_count_z, det_count_x, theta, (src2obj*M)/det_pixel[0], (det2obj*M)/det_pixel[0])
self._apply_modifiers()
def _initialize_ramp_filter(self, power = 1):
sz = self._parent.data.shape
# Next power of 2:
order = numpy.int32(2 ** numpy.ceil(math.log2(sz[2]) - 1))
n = numpy.arange(0, order)
# Create 1D array:
filtImpResp = numpy.zeros(order+1)
# Populate it with ramp
filtImpResp[0] = 1/4
filtImpResp[1::2] = -1 / ((numpy.pi * n[1::2]) ** 2)
filtImpResp = numpy.concatenate([filtImpResp, filtImpResp[::-1]])
filtImpResp = filtImpResp[:-1]
filt = numpy.real(numpy.fft.fft(filtImpResp)) ** power
#filt = filt[0:order]
# Back to 32 bit...
filt = numpy.float32(filt)
self._projection_filter = numpy.tile(filt, (sz[1], 1))  # repmat equivalent without numpy.matlib
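The filter above is the classic Ram-Lak (ramp) construction: a spatial-domain impulse response, mirrored to be symmetric, then transformed with an FFT. In miniature (order 8, purely for illustration), the resulting spectrum is approximately the discrete ramp |k|/N with a strongly suppressed DC term:

```python
import numpy as np

order = 8                      # small power of two, for illustration
n = np.arange(0, order)

# Spatial-domain impulse response of the ramp filter
h = np.zeros(order + 1)
h[0] = 1 / 4
h[1::2] = -1 / ((np.pi * n[1::2]) ** 2)

# Mirror for symmetry, drop the duplicated endpoint, then transform
h = np.concatenate([h, h[::-1]])[:-1]
filt = np.real(np.fft.fft(h))
print(filt[:5])                # increasing, roughly k / len(h)
```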
def _backproject(self, y, algorithm = 'FDK_CUDA', iterations=1, min_constraint = None, short_scan=False):
cfg = astra.astra_dict(algorithm)
cfg['option'] = {}
if short_scan:
cfg['option']['ShortScan'] = True
if (min_constraint is not None):
cfg['option']['MinConstraint'] = min_constraint
output = numpy.zeros(astra.functions.geom_size(self.vol_geom), dtype=numpy.float32)
rec_id = 0
sinogram_id = 0
alg_id = 0
try:
rec_id = astra.data3d.link('-vol', self.vol_geom, output)
sinogram_id = astra.data3d.link('-sino', self.proj_geom, y)
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = sinogram_id
#cfg['option'] = {}
# Use projection and reconstruction masks:
if self._reconstruction_mask is not None:
print(self.vol_geom)
mask_id = astra.data3d.link('-vol', self.vol_geom, self._reconstruction_mask)
cfg['option']['ReconstructionMaskId'] = mask_id
if self._projection_mask is not None:
mask_id = astra.data3d.link('-sino', self.proj_geom, self._projection_mask)
cfg['option']['SinogramMaskId'] = mask_id
# Use modified filter:
if self._projection_filter is not None:
sz = self._projection_filter.shape
slice_proj_geom = astra.create_proj_geom('parallel', 1.0, sz[1], self._parent.meta.geometry.thetas)
filt_id = astra.data2d.link('-sino', slice_proj_geom, self._projection_filter)
cfg['option']['FilterSinogramId'] = filt_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, iterations)
except Exception as detail:
self._parent.message(detail)
finally:
astra.algorithm.delete(alg_id)
astra.data3d.delete([rec_id, sinogram_id])
return output #astra.data3d.get(self.rec_id)
def _forwardproject(self, x, algorithm = 'FP3D_CUDA'):
cfg = astra.astra_dict(algorithm)
output = numpy.zeros(astra.functions.geom_size(self.proj_geom), dtype=numpy.float32)
rec_id = []
sinogram_id = []
alg_id = []
try:
rec_id = astra.data3d.link('-vol', self.vol_geom, x)
sinogram_id = astra.data3d.link('-sino', self.proj_geom, output)
cfg['VolumeDataId'] = rec_id
cfg['ProjectionDataId'] = sinogram_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, 1)
finally:
astra.data3d.delete([rec_id, sinogram_id])
astra.algorithm.delete(alg_id)
return output
def get_vol_ROI(self):
# Computes a mask of minimal projection ROI needed to reconstruct a ROI for FDK
prnt = self._parent
# Initialize ASTRA:
self._initialize_astra()
# Run the reconstruction:
vol = self._backproject(numpy.ones(prnt.data.shape, dtype = 'float32'))
return volume(vol)
def get_proj_ROI(self, rows=[0,512], cols=[0,512], algorithm='FP3D_CUDA'):
# Computes a mask of minimal projection ROI needed to reconstruct a ROI for FDK
prnt = self._parent
# Initialize ASTRA:
sz = prnt.data.shape
pixel_size = prnt.meta.geometry.det_pixel
det2obj = prnt.meta.geometry.det2obj
src2obj = prnt.meta.geometry.src2obj
theta = prnt.meta.geometry.thetas
roi = numpy.zeros((sz[0],sz[2], sz[2]), dtype=numpy.float32)
roi[rows[0]:rows[1],cols[0]:cols[1],cols[0]:cols[1]] = 1.0
self._initialize_astra(sz, pixel_size, det2obj, src2obj, theta)
mask = self._forwardproject(roi, algorithm=algorithm)
# TODO: Compute the bounds of the minimal non-zero rectangle
'''
mask[mask>0]=1.0
bounds = [[0,0],[0,0]]
bounds[0][0] = numpy.min(numpy.argmax(numpy.argmax(mask,axis=2),axis=1))
for row in range(mask.shape[0],-1,-1))
bounds[0][1] = numpy.argmin(mask,axis=0)
bounds[1][0] = numpy.argmax(mask,axis=2)
bounds[1][1] = numpy.argmin(mask,axis=2)
print(bounds)
'''
return mask
# **************************************************************
# VOLUME class and subclasses
# **************************************************************
from scipy import ndimage
from skimage import morphology
class postprocess(subclass):
'''
Includes postprocessing of the reconstructed volume.
'''
def threshold(self, volume, threshold = None):
if threshold is None: threshold = volume.analyse.max() / 2
volume.data.data = ((volume.data.data > threshold) * 1.0)
def measure_thickness(self, volume, obj_intensity = None):
'''
Measure average thickness of an object.
'''
# Apply threshold:
self.threshold(volume, obj_intensity)
# Skeletonize:
skeleton = morphology.skeletonize_3d(volume.data.data) > 0  # boolean mask for indexing
# Compute distance across the wall:
distance = ndimage.distance_transform_bf(volume.data.data) * 2
# Average distance:
return numpy.mean(distance[skeleton])
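`measure_thickness` combines a skeleton (the object's mid-surface) with a distance transform: twice the distance from the mid-surface to the background approximates the wall thickness. A 2-D toy version with a brute-force distance transform; note the same one-pixel bias as the code above, since distances are measured to background pixel *centers*, so a 5-pixel wall reads as 6:

```python
import numpy as np

def edt(binary):
    # Brute-force Euclidean distance to the nearest background pixel
    fg = np.argwhere(binary)
    bg = np.argwhere(~binary)
    d = np.zeros(binary.shape)
    for i, j in fg:
        d[i, j] = np.sqrt(((bg - (i, j)) ** 2).sum(axis=1)).min()
    return d

strip = np.zeros((9, 20), dtype=bool)
strip[2:7, :] = True                  # a wall exactly 5 pixels thick
dist = edt(strip)

skeleton = np.zeros_like(strip)
skeleton[4, :] = True                 # mid-line of the wall
thickness = 2 * dist[skeleton].mean()
print(thickness)                      # 6.0 (5 plus the 1-pixel bias)
```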
class volume(object):
data = []
io = []
analyse = []
display = []
meta = []
postprocess = []
# Sampling of the projection data volume:
_data_sampling = 1
def __init__(self, vol = []):
self.io = io(self)
self.display = display(self)
self.analyse = analyse(self)
self.data = data(self)
self.meta = meta(self)
self.postprocess = postprocess(self)
# Get the data in:
self.data._data = vol
# **************************************************************
# SINOGRAM class
# **************************************************************
import warnings
import logging
import copy
_min_history = ['io.read_raw', 'io.read_ref', 'io.read_meta', 'process.flat_field', 'process.log']
_wisdoms = ['You’d better get busy, though, buddy. The goddam *sands* run out on you \
every time you turn around. I know what I’m talking about. You’re lucky if \
you get time to sneeze in this goddam phenomenal world.',
'Work done with anxiety about results is far inferior to work done without\
such anxiety, in the calm of self-surrender. Seek refuge in the knowledge\
of Brahman. They who work selfishly for results are miserable.',
'You have the right to work, but for the work`s sake only. You have no right\
to the fruits of work. Desire for the fruits of work must never be your\
motive in working. Never give way to laziness, either.',
'Perform every action with your heart fixed on the Supreme Lord. Renounce\
attachment to the fruits. Be even-tempered [underlined by one of the \
cal-ligraphers] in success and failure; for it is this evenness of temper which is meant by yoga.',
'God instructs the heart, not by ideas but by pains and contradictions.',
'Sir, we ought to teach the people that they are doing wrong in worshipping\
the images and pictures in the temple.',
'Hard work beats talent.', 'It will never be perfect. Make it work!',
'Although, many of us fear death, I think there is something illogical about it.',
'I have nothing but respect for you -- and not much of that.',
'Prediction is very difficult, especially about the future.',
'You rely too much on brain. The brain is the most overrated organ.',
'A First Sign of the Beginning of Understanding is the Wish to Die.']
class projections(object):
'''
Class that will contain the raw data and links to all operations that we need
to process and reconstruct it.
'''
# Public stuff:
io = []
meta = []
display = []
analyse = []
process = []
reconstruct = []
data = []
# Private:
_wisdom_status = 1
# Sampling of the projection data volume:
_data_sampling = 1
def __init__(self):
self.io = io(self)
self.meta = meta(self)
self.display = display(self)
self.analyse = analyse(self)
self.process = process(self)
self.reconstruct = reconstruct(self)
self.data = data(self)
def message(self, msg):
'''
Send a message to IPython console.
'''
#log = logging.getLogger()
#log.setLevel(logging.DEBUG)
#log.debug(msg)
print(msg)
def error(self, msg):
'''
Throw an error:
'''
self.meta.history.add_record('error', msg)
raise ValueError(msg)
def warning(self, msg):
'''
Throw a warning. In their face!
'''
self.meta.history.add_record('warning', msg)
warnings.warn(msg)
def what_to_do(self):
if not self._pronounce_wisdom():
self._check_min_hist_keys()
def copy(self):
'''
Deep copy of the sinogram object:
'''
return copy.deepcopy(self)
def _pronounce_wisdom(self):
randomator = 0
# Beef up the randomator:
for ii in range(0, self._wisdom_status):
randomator += numpy.random.randint(0, 100)
# If randomator is small, utter a wisdom!
if (randomator < 50):
self._wisdom_status += 1
# Pick one wisdom:
l = numpy.size(_wisdoms)
self.message(_wisdoms[numpy.random.randint(0, l)])
return 1
return 0
def _check_min_hist_keys(self):
'''
Check the history and tell the user which operation should be used next.
'''
finished = True
for k in _min_history:
self.message((k in self.meta.history.keys))
if k not in self.meta.history.keys:
self.message('You should use ' + k + ' as a next step')
finished = False
break
if finished:
self.message('All basic processing steps were done. Use "reconstruct.FDK" to compute filtered backprojection.')
def _check_double_hist(self, new_key):
'''
Check if the operation was already done
'''
if new_key in self.meta.history.keys:
self.error(new_key + ' is found in the history of operations! Aborting.')
| gpl-3.0 |
ahnqirage/spark | python/setup.py | 4 | 10245 | #!/usr/bin/env python
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import glob
import os
import sys
from setuptools import setup, find_packages
from shutil import copyfile, copytree, rmtree
if sys.version_info < (2, 7):
print("Python versions prior to 2.7 are not supported for pip installed PySpark.",
file=sys.stderr)
sys.exit(-1)
try:
exec(open('pyspark/version.py').read())
except IOError:
print("Failed to load PySpark version file for packaging. You must be in Spark's python dir.",
file=sys.stderr)
sys.exit(-1)
VERSION = __version__ # noqa
# A temporary path so we can access above the Python project root and fetch scripts and jars we need
TEMP_PATH = "deps"
SPARK_HOME = os.path.abspath("../")
# Provide guidance about how to use setup.py
incorrect_invocation_message = """
If you are installing pyspark from spark source, you must first build Spark and
run sdist.
To build Spark with maven you can run:
./build/mvn -DskipTests clean package
Building the source dist is done in the Python directory:
cd python
python setup.py sdist
pip install dist/*.tar.gz"""
# Figure out where the jars are we need to package with PySpark.
JARS_PATH = glob.glob(os.path.join(SPARK_HOME, "assembly/target/scala-*/jars/"))
if len(JARS_PATH) == 1:
JARS_PATH = JARS_PATH[0]
elif (os.path.isfile("../RELEASE") and len(glob.glob("../jars/spark*core*.jar")) == 1):
# Release mode puts the jars in a jars directory
JARS_PATH = os.path.join(SPARK_HOME, "jars")
elif len(JARS_PATH) > 1:
print("Assembly jars exist for multiple scalas ({0}), please cleanup assembly/target".format(
JARS_PATH), file=sys.stderr)
sys.exit(-1)
elif len(JARS_PATH) == 0 and not os.path.exists(TEMP_PATH):
print(incorrect_invocation_message, file=sys.stderr)
sys.exit(-1)
EXAMPLES_PATH = os.path.join(SPARK_HOME, "examples/src/main/python")
SCRIPTS_PATH = os.path.join(SPARK_HOME, "bin")
DATA_PATH = os.path.join(SPARK_HOME, "data")
LICENSES_PATH = os.path.join(SPARK_HOME, "licenses")
SCRIPTS_TARGET = os.path.join(TEMP_PATH, "bin")
JARS_TARGET = os.path.join(TEMP_PATH, "jars")
EXAMPLES_TARGET = os.path.join(TEMP_PATH, "examples")
DATA_TARGET = os.path.join(TEMP_PATH, "data")
LICENSES_TARGET = os.path.join(TEMP_PATH, "licenses")
# Check and see if we are under the spark path in which case we need to build the symlink farm.
# This is important because we only want to build the symlink farm while under Spark otherwise we
# want to use the symlink farm. And if the symlink farm exists under while under Spark (e.g. a
# partially built sdist) we should error and have the user sort it out.
in_spark = (os.path.isfile("../core/src/main/scala/org/apache/spark/SparkContext.scala") or
(os.path.isfile("../RELEASE") and len(glob.glob("../jars/spark*core*.jar")) == 1))
def _supports_symlinks():
"""Check if the system supports symlinks (e.g. *nix) or not."""
return getattr(os, "symlink", None) is not None
if (in_spark):
# Construct links for setup
try:
os.mkdir(TEMP_PATH)
except:
print("Temp path for symlink to parent already exists {0}".format(TEMP_PATH),
file=sys.stderr)
sys.exit(-1)
# If you are changing the versions here, please also change ./python/pyspark/sql/utils.py and
# ./python/run-tests.py. In case of Arrow, you should also check ./pom.xml.
_minimum_pandas_version = "0.19.2"
_minimum_pyarrow_version = "0.8.0"
try:
# We copy the shell script to be under pyspark/python/pyspark so that the launcher scripts
# find it where expected. The rest of the files aren't copied because they are accessed
# using Python imports instead which will be resolved correctly.
try:
os.makedirs("pyspark/python/pyspark")
except OSError:
# Don't worry if the directory already exists.
pass
copyfile("pyspark/shell.py", "pyspark/python/pyspark/shell.py")
if (in_spark):
# Construct the symlink farm - this is necessary since we can't refer to the path above the
# package root and we need to copy the jars and scripts which are up above the python root.
if _supports_symlinks():
os.symlink(JARS_PATH, JARS_TARGET)
os.symlink(SCRIPTS_PATH, SCRIPTS_TARGET)
os.symlink(EXAMPLES_PATH, EXAMPLES_TARGET)
os.symlink(DATA_PATH, DATA_TARGET)
os.symlink(LICENSES_PATH, LICENSES_TARGET)
else:
# For windows fall back to the slower copytree
copytree(JARS_PATH, JARS_TARGET)
copytree(SCRIPTS_PATH, SCRIPTS_TARGET)
copytree(EXAMPLES_PATH, EXAMPLES_TARGET)
copytree(DATA_PATH, DATA_TARGET)
copytree(LICENSES_PATH, LICENSES_TARGET)
else:
# If we are not inside of SPARK_HOME verify we have the required symlink farm
if not os.path.exists(JARS_TARGET):
print("To build packaging must be in the python directory under the SPARK_HOME.",
file=sys.stderr)
if not os.path.isdir(SCRIPTS_TARGET):
print(incorrect_invocation_message, file=sys.stderr)
sys.exit(-1)
# Scripts directive requires a list of each script path and does not take wild cards.
script_names = os.listdir(SCRIPTS_TARGET)
scripts = list(map(lambda script: os.path.join(SCRIPTS_TARGET, script), script_names))
# We add find_spark_home.py to the bin directory we install so that pip installed PySpark
# will search for SPARK_HOME with Python.
scripts.append("pyspark/find_spark_home.py")
# Parse the README markdown file into rst for PyPI
long_description = "!!!!! missing pandoc do not upload to PyPI !!!!"
try:
import pypandoc
long_description = pypandoc.convert('README.md', 'rst')
except ImportError:
print("Could not import pypandoc - required to package PySpark", file=sys.stderr)
except OSError:
print("Could not convert - pandoc is not installed", file=sys.stderr)
setup(
name='pyspark',
version=VERSION,
description='Apache Spark Python API',
long_description=long_description,
author='Spark Developers',
author_email='dev@spark.apache.org',
url='https://github.com/apache/spark/tree/master/python',
packages=['pyspark',
'pyspark.mllib',
'pyspark.mllib.linalg',
'pyspark.mllib.stat',
'pyspark.ml',
'pyspark.ml.linalg',
'pyspark.ml.param',
'pyspark.sql',
'pyspark.streaming',
'pyspark.bin',
'pyspark.jars',
'pyspark.python.pyspark',
'pyspark.python.lib',
'pyspark.data',
'pyspark.licenses',
'pyspark.examples.src.main.python'],
include_package_data=True,
package_dir={
'pyspark.jars': 'deps/jars',
'pyspark.bin': 'deps/bin',
'pyspark.python.lib': 'lib',
'pyspark.data': 'deps/data',
'pyspark.licenses': 'deps/licenses',
'pyspark.examples.src.main.python': 'deps/examples',
},
package_data={
'pyspark.jars': ['*.jar'],
'pyspark.bin': ['*'],
'pyspark.python.lib': ['*.zip'],
'pyspark.data': ['*.txt', '*.data'],
'pyspark.licenses': ['*.txt'],
'pyspark.examples.src.main.python': ['*.py', '*/*.py']},
scripts=scripts,
license='http://www.apache.org/licenses/LICENSE-2.0',
install_requires=['py4j==0.10.8.1'],
setup_requires=['pypandoc'],
extras_require={
'ml': ['numpy>=1.7'],
'mllib': ['numpy>=1.7'],
'sql': [
'pandas>=%s' % _minimum_pandas_version,
'pyarrow>=%s' % _minimum_pyarrow_version,
]
},
classifiers=[
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy']
)
finally:
# We only cleanup the symlink farm if we were in Spark, otherwise we are installing rather than
# packaging.
if (in_spark):
# Depending on cleaning up the symlink farm or copied version
if _supports_symlinks():
os.remove(os.path.join(TEMP_PATH, "jars"))
os.remove(os.path.join(TEMP_PATH, "bin"))
os.remove(os.path.join(TEMP_PATH, "examples"))
os.remove(os.path.join(TEMP_PATH, "data"))
os.remove(os.path.join(TEMP_PATH, "licenses"))
else:
rmtree(os.path.join(TEMP_PATH, "jars"))
rmtree(os.path.join(TEMP_PATH, "bin"))
rmtree(os.path.join(TEMP_PATH, "examples"))
rmtree(os.path.join(TEMP_PATH, "data"))
rmtree(os.path.join(TEMP_PATH, "licenses"))
os.rmdir(TEMP_PATH)
| apache-2.0 |
arcyfelix/Courses | 17-06-05-Machine-Learning-For-Trading/40_portfolio_optimization.py | 1 | 4642 | import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
''' Read: http://pandas.pydata.org/pandas-docs/stable/api.html#api-dataframe-stats '''
def symbol_to_path(symbol, base_dir = 'data'):
return os.path.join(base_dir, "{}.csv".format(str(symbol)))
def dates_creator(start_date, end_date):
dates = pd.date_range(start_date, end_date)
return dates
def get_data(start, end, stocks):
dates = dates_creator(start, end)
df = pd.DataFrame(index = dates)
if 'SPY' not in stocks: # adding SPY as the main reference
stocks.insert(0, 'SPY')
for symbol in stocks:
df_temp = pd.read_csv(symbol_to_path(symbol),
index_col = 'Date',
parse_dates = True,
usecols = ['Date', 'Adj Close'],
na_values = ['nan'])
df_temp = df_temp.rename(columns = {'Adj Close': symbol})
df = df.join(df_temp)
if symbol == 'SPY':
df = df.dropna(subset = ['SPY'])
return df
def normalize_data(df):
return df / df.iloc[0,:]
def find_portfolio_statistics(allocs, df, gen_plot = False):
dfcopy = df.copy()
'''
Compute portfolio statistics:
1) Cumulative return
2) Daily return
3) Average daily return
4) Standard deviation of the daily returns
5) (Annual) Sharpe Ratio
6) Final value
7) Total returns
Parameters:
-----------
allocs: list of allocation fractions for each stock
The sum must be equal to 1!
example: allocs = [0.0, 0.5, 0.35, 0.15]
df: DataFrame with the data
Optional:
---------
gen_plot: if True, a plot of the allocation's performance
compared to SPY (S&P 500) will be shown.
'''
# Normalization
df = (df / df.iloc[0])
# Allocation of the resources
df = df * allocs
# Sum of the value of the resources
df = df.sum(axis = 1)
# Compute Portfolio Statistics
# Cumulative return
cumulative_return = (df.iloc[-1] / df.iloc[0]) - 1
# Daily returns
dailyreturns = (df.iloc[1:] / df.iloc[:-1].values) - 1
average_daily_return = dailyreturns.mean(axis = 0)
yearly_return = average_daily_return #* 252 # multiply by 252 trading days/year to annualize; left as the daily mean here
# Standard deviation of the daily returns
std_daily_return = dailyreturns.std(axis = 0)
# Sharpe Ratio
sharpe_ratio = (252 ** (0.5)) * ((average_daily_return - 0) / std_daily_return)
ending_value = df.iloc[-1]
total_returns = average_daily_return*(252 / 252)
if gen_plot == True:
#Plot portfolio along SPY
dfcopynormed = dfcopy['SPY'] / dfcopy['SPY'].iloc[0]
ax = dfcopynormed.plot(title = 'Daily Portfolio Value and SPY', label = 'SPY')
sumcopy = dfcopy.sum(axis = 1)
normed = sumcopy/sumcopy.iloc[0]
normed.plot(label='Portfolio Value', ax = ax)
ax.set_xlabel('Date')
ax.set_ylabel('Price')
ax.legend(loc = 2)
plt.show()
'''
print('For allocation as follows:')
print(allocs)
print('Mean return:')
print(mean_return)
print('Standard deviation:')
print(std_return)
print('Annualized Sharpe ratio:')
print(sharpe_ratio)
'''
return yearly_return, std_daily_return, sharpe_ratio
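The annualized Sharpe ratio computed above applies to any daily-return series; a minimal standalone sketch with synthetic returns (toy numbers, not market data — note that NumPy's `std` defaults to `ddof=0` while pandas' `.std()` uses `ddof=1`, so `ddof=1` is passed explicitly to match):

```python
import numpy as np

# Hypothetical daily returns, for illustration only.
daily = np.array([0.01, -0.005, 0.007, 0.002, -0.001])
mean_daily = daily.mean()
std_daily = daily.std(ddof=1)  # sample std, matching pandas' default
# Annualize with sqrt(252) trading days; risk-free rate assumed 0:
sharpe_annual = np.sqrt(252) * (mean_daily - 0.0) / std_daily
```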
def generate_random_portfolios(num_portfolios, stocks, include_SPY = False):
start = '2013-01-01'
end = '2013-12-31'
df = get_data(start, end, stocks)
df = df.drop('SPY', 1)
# Number of stocks (-1 to not to include SPY)
num_stocks = len(stocks) - 1
# Initialization the final result matrix with zeros
result_matrix = np.zeros([num_portfolios,3])
for i in tqdm(range(num_portfolios)):
random = np.random.random(num_stocks)
allocs = random/ np.sum(random)
mean_return, std_return, sharpe_ratio = find_portfolio_statistics(allocs, df, gen_plot = False)
result_matrix[i, 0] = mean_return
result_matrix[i, 1] = std_return
result_matrix[i, 2] = sharpe_ratio
return result_matrix
if __name__ == "__main__":
stocks = ['SPY', 'AAPL', 'GOOG', 'TSLA']
result_matrix = generate_random_portfolios(10000, stocks)
#convert results array to Pandas DataFrame
results_frame = pd.DataFrame(result_matrix,columns=['ret','stdev','sharpe'])
#create scatter plot coloured by Sharpe Ratio
plt.scatter(results_frame.stdev,results_frame.ret,c=results_frame.sharpe,cmap='RdYlBu')
plt.colorbar()
plt.show()
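A natural follow-up to the scatter plot is reading off the best portfolio; a self-contained sketch with a toy result matrix in the same `[ret, stdev, sharpe]` column order used above (illustrative numbers only):

```python
import numpy as np

# Toy result matrix: one row per random portfolio, columns [ret, stdev, sharpe].
toy = np.array([[0.10, 0.20, 0.5],
                [0.12, 0.15, 0.8],
                [0.08, 0.25, 0.3]])
best = toy[np.argmax(toy[:, 2])]  # row with the highest Sharpe ratio
```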
| apache-2.0 |
gyoto/Gyoto | python/example.py | 1 | 8032 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Example file for gyoto
#
# Copyright 2014-2018 Thibaut Paumard
#
# This file is part of Gyoto.
#
# Gyoto is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Gyoto is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Gyoto. If not, see <http://www.gnu.org/licenses/>.
import numpy
import matplotlib as ml
import matplotlib.pyplot as plt
import gyoto.core
import gyoto.std
# Simple stuff
scr=gyoto.core.Screen()
gg=gyoto.std.KerrBL()
scr.metric(gg)
pos=scr.getObserverPos()
# Load Scenery
a=gyoto.core.Factory("../doc/examples/example-moving-star.xml")
sc=a.scenery()
sc.nThreads(8)
sc.astrobj().opticallyThin(False)
scr=sc.screen()
dest=numpy.zeros(8, float)
scr.getRayCoord(1,1,dest)
dest=numpy.ndarray(3, float)
scr.coordToSky((0., 5., numpy.pi/2, 0), dest)
# Trace and plot NULL geodesic:
ph=gyoto.core.Photon()
ph.setInitialCondition(sc.metric(), sc.astrobj(), sc.screen(), 0., 0.)
ph.hit()
n=ph.get_nelements()
# We try to map Gyoto arrays to NumPy arrays wherever possible.
# Create NumPy arrays
t=numpy.ndarray(n)
r=numpy.ndarray(n)
theta=numpy.ndarray(n)
phi=numpy.ndarray(n)
# Call Gyoto method that takes these arrays as argument:
ph.get_t(t)
ph.getCoord(t, r, theta, phi)
plt.plot(t, r)
plt.show()
# Trace and plot timelike geodesic
# We need to cast the object to a gyoto.std.Star:
wl=gyoto.std.Star(sc.astrobj())
wl.xFill(1000)
n=wl.get_nelements()
x=numpy.ndarray(n)
y=numpy.ndarray(n)
z=numpy.ndarray(n)
wl.get_xyz(x, y, z)
plt.plot(x, y)
plt.show()
# Ray-trace scenery
# For that, we can use the short-hand:
sc.requestedQuantitiesString('Intensity EmissionTime MinDistance')
results=sc.rayTrace()
plt.imshow(results['Intensity'])
plt.show()
plt.imshow(results['EmissionTime'])
plt.show()
plt.imshow(results['MinDistance'])
plt.show()
# Or we can do it manually to understand how the Gyoto API works:
res=sc.screen().resolution()
intensity=numpy.zeros((res, res), dtype=float)
time=numpy.zeros((res, res), dtype=float)
distance=numpy.zeros((res, res), dtype=float)
aop=gyoto.core.AstrobjProperties()
# Here we will use the low-level AstrobjProperties facilities. This is
# one of a few Gyoto functionalities where NumPy arrays are not
# directly supported. We use lower-level C-like arrays through the
# gyoto.core.array_double and gyoto.core.array_unsigned_long classes. Beware
# that this type does not provide any safeguards, it is quite easy to
# get it to SEGFAULT. As we develop Gyoto, we try to remove the need
# for the gyoto.core.array_* classes in favor of NumPy arrays. Code that
# uses this ``feature'' may therefore break in future releases.
#
# To (indirectly) use NumPy arrays with a functionality that requires
# gyoto.core.array_* arguments, create the arrays using numpy (see above:
# `intensity', `time' and `distance' arrays), then cast them using
# the fromnumpyN static methods, where the digit N indicates the
# dimensionality of the NumPy array. The underlying storage belongs to
# the NumPy variable and will be deleted with it: don't use the
# array_double() variable (for anything other than destroying it) past
# the destruction of the corresponding NumPy variable.
aop.intensity=gyoto.core.array_double.fromnumpy2(intensity)
aop.time=gyoto.core.array_double.fromnumpy2(time)
aop.distance=gyoto.core.array_double.fromnumpy2(distance)
ii=gyoto.core.Range(1, res, 1)
jj=gyoto.core.Range(1, res, 1)
grid=gyoto.core.Grid(ii, jj, "\rj = ")
sc.rayTrace(grid, aop)
plt.imshow(intensity)
plt.show()
plt.imshow(time)
plt.show()
plt.imshow(distance)
plt.show()
# Another Scenery, with spectrum
sc=gyoto.core.Factory("../doc/examples/example-polish-doughnut.xml").scenery()
sc.screen().resolution(32)
res=sc.screen().resolution()
ns=sc.screen().spectrometer().nSamples()
spectrum=numpy.zeros((ns, res, res), dtype=float)
ii=gyoto.core.Range(1, res, 1)
jj=gyoto.core.Range(1, res, 1)
grid=gyoto.core.Grid(ii, jj, "\rj = ")
aop=gyoto.core.AstrobjProperties()
aop.spectrum=gyoto.core.array_double.fromnumpy3(spectrum)
aop.offset=res*res
sc.rayTrace(grid, aop)
plt.imshow(spectrum[1,:,:])
plt.show()
# Another Scenery, with impact coords, created from within Python
met=gyoto.core.Metric("KerrBL")
met.mass(4e6, "sunmass")
ao=gyoto.core.Astrobj("PageThorneDisk")
ao.metric(met)
ao.opticallyThin(False)
ao.rMax(100)
screen=gyoto.core.Screen()
screen.distance(8, "kpc")
screen.time(8, "kpc")
screen.resolution(64)
screen.inclination(numpy.pi/4)
screen.PALN(numpy.pi)
screen.time(8, "kpc")
screen.fieldOfView(100, "µas")
sc=gyoto.core.Scenery()
sc.metric(met)
sc.astrobj(ao)
sc.screen(screen)
sc.delta(1, "kpc")
sc.adaptive(True)
sc.nThreads(8)
res=sc.screen().resolution()
ii=gyoto.core.Range(1, res, 1)
jj=gyoto.core.Range(1, res, 1)
grid=gyoto.core.Grid(ii, jj, "\rj = ")
ipct=numpy.zeros((res, res, 16), dtype=float)
aop=gyoto.core.AstrobjProperties()
aop.impactcoords=gyoto.core.array_double.fromnumpy3(ipct)
aop.offset=res*res
sc.rayTrace(grid, aop)
plt.imshow(ipct[:,:,0], interpolation="nearest", vmin=-100, vmax=0)
plt.show()
# Trace one line of the above using alpha and delta
N=10
buf=numpy.linspace(screen.fieldOfView()*-0.5, screen.fieldOfView()*0.5, N)
a=gyoto.core.Angles(buf)
d=gyoto.core.RepeatAngle(screen.fieldOfView()*-0.5, N)
bucket=gyoto.core.Bucket(a, d)
ipct=numpy.zeros((N, 16), dtype=float)
aop=gyoto.core.AstrobjProperties()
aop.impactcoords=gyoto.core.array_double.fromnumpy2(ipct)
aop.offset=N
sc.rayTrace(bucket, aop)
plt.plot(buf, ipct[:,0])
plt.show()
# Trace the diagonal of the above using i and j. The Range and Indices
# definitions below are equivalent. Range is more efficient for a
# range, Indices can hold arbitrary indices.
ind=numpy.arange(1, res+1, dtype=numpy.uintp) # on 64bit arch...
ii=gyoto.core.Indices(ind)
# Or:
# ind=gyoto.core.array_size_t(res)
# for i in range(0, res):
# ind[i]=i+1
# ii=gyoto.core.Indices(ind, res)
jj=gyoto.core.Range(1, res, 1)
bucket=gyoto.core.Bucket(ii, jj)
ipct=numpy.zeros((res, 16), dtype=float)
aop=gyoto.core.AstrobjProperties()
aop.impactcoords=gyoto.core.array_double.fromnumpy2(ipct)
aop.offset=res
sc.rayTrace(bucket, aop)
t=numpy.clip(ipct[:,0], a_min=-200, a_max=0)
plt.plot(t)
plt.show()
# Any derived class can be instantiated from its name, as soon as the
# corresponding plug-in has been loaded into Gyoto. The standard
# plug-in is normally loaded automatically (and is always loaded when
# gyoto.std is imported), but this can also be forced with
# gyoto.core.requirePlugin():
gyoto.core.requirePlugin('stdplug')
tt=gyoto.core.Astrobj('Torus')
kerr=gyoto.core.Metric('KerrBL')
# Most properties that can be set in an XML file can also be accessed
# from Python using the Property/Value mechanism:
# Low-level access:
p=tt.property("SmallRadius")
p.type==gyoto.core.Property.double_t
tt.set(p, gyoto.core.Value(0.2))
tt.get(p) == 0.2
# Higher-level:
kerr.set("Spin", 0.95)
kerr.get("Spin") == 0.95
# However, we also have Python extensions around the standard Gyoto
# plug-ins.
import gyoto.std
# And if the lorene plug-in has been compiled:
# import gyoto.lorene
# It then becomes possible to access the methods specific to derived
# classes. They can be instantiated directly from the gyoto_* extension:
tr2=gyoto.std.Torus()
# and we can cast a generic pointer (from the gyoto extension) to a
# derived class:
tr=gyoto.std.Torus(tt)
tt.get("SmallRadius") == tr.smallRadius()
# Another example: using a complex (i.e. compound) Astrobj:
cplx=gyoto.std.ComplexAstrobj()
cplx.append(tr)
cplx.append(sc.astrobj())
sc.astrobj(cplx)
print("All done, exiting")
| gpl-3.0 |
richardwolny/sms-tools | lectures/09-Sound-description/plots-code/spectralFlux-onsetFunction.py | 25 | 1330 | import numpy as np
import matplotlib.pyplot as plt
import essentia.standard as ess
M = 1024
N = 1024
H = 512
fs = 44100
spectrum = ess.Spectrum(size=N)
window = ess.Windowing(size=M, type='hann')
flux = ess.Flux()
onsetDetection = ess.OnsetDetection(method='hfc')
x = ess.MonoLoader(filename = '../../../sounds/speech-male.wav', sampleRate = fs)()
fluxes = []
onsetDetections = []
for frame in ess.FrameGenerator(x, frameSize=M, hopSize=H, startFromZero=True):
mX = spectrum(window(frame))
flux_val = flux(mX)
fluxes.append(flux_val)
onsetDetection_val = onsetDetection(mX, mX)
onsetDetections.append(onsetDetection_val)
onsetDetections = np.array(onsetDetections)
fluxes = np.array(fluxes)
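Essentia's `Flux` measures frame-to-frame spectral change; a rough NumPy stand-in for two toy magnitude spectra (a half-wave-rectified L2 difference, which is one common variant — not necessarily Essentia's exact default normalization):

```python
import numpy as np

def spectral_flux(prev_mag, mag):
    # L2 norm of the per-bin magnitude increase (decreases ignored).
    diff = np.maximum(mag - prev_mag, 0.0)
    return np.sqrt(np.sum(diff ** 2))

prev_mag = np.array([0.0, 1.0, 2.0])
mag = np.array([1.0, 1.0, 1.0])
fl = spectral_flux(prev_mag, mag)  # only bin 0 grew, by 1.0
```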
plt.figure(1, figsize=(9.5, 7))
plt.subplot(2,1,1)
plt.plot(np.arange(x.size)/float(fs), x)
plt.axis([0, x.size/float(fs), min(x), max(x)])
plt.ylabel('amplitude')
plt.title('x (speech-male.wav)')
plt.subplot(2,1,2)
frmTime = H*np.arange(fluxes.size)/float(fs)
plt.plot(frmTime, fluxes/max(fluxes), 'g', lw=1.5, label ='normalized spectral flux')
plt.plot(frmTime, onsetDetections/max(onsetDetections), 'c', lw=1.5, label = 'normalized onset detection')
plt.axis([0, x.size/float(fs), 0, 1])
plt.legend()
plt.tight_layout()
plt.savefig('spectralFlux-onsetFunction.png')
plt.show()
| agpl-3.0 |
dwettstein/pattern-recognition-2016 | mlp/neural_network/exceptions.py | 35 | 4329 | """
The :mod:`sklearn.exceptions` module includes all custom warnings and error
classes used across scikit-learn.
"""
__all__ = ['NotFittedError',
'ChangedBehaviorWarning',
'ConvergenceWarning',
'DataConversionWarning',
'DataDimensionalityWarning',
'EfficiencyWarning',
'FitFailedWarning',
'NonBLASDotWarning',
'UndefinedMetricWarning']
class NotFittedError(ValueError, AttributeError):
"""Exception class to raise if estimator is used before fitting.
This class inherits from both ValueError and AttributeError to help with
exception handling and backward compatibility.
Examples
--------
>>> from sklearn.svm import LinearSVC
>>> from sklearn.exceptions import NotFittedError
>>> try:
... LinearSVC().predict([[1, 2], [2, 3], [3, 4]])
... except NotFittedError as e:
... print(repr(e))
... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
NotFittedError('This LinearSVC instance is not fitted yet',)
"""
class ChangedBehaviorWarning(UserWarning):
"""Warning class used to notify the user of any change in the behavior."""
class ConvergenceWarning(UserWarning):
"""Custom warning to capture convergence problems"""
class DataConversionWarning(UserWarning):
"""Warning used to notify implicit data conversions happening in the code.
This warning occurs when some input data needs to be converted or
interpreted in a way that may not match the user's expectations.
For example, this warning may occur when the user
- passes an integer array to a function which expects float input and
will convert the input
- requests a non-copying operation, but a copy is required to meet the
implementation's data-type expectations;
- passes an input whose shape can be interpreted ambiguously.
"""
class DataDimensionalityWarning(UserWarning):
"""Custom warning to notify potential issues with data dimensionality.
For example, in random projection, this warning is raised when the
number of components, which quantifies the dimensionality of the target
projection space, is higher than the number of features, which quantifies
the dimensionality of the original source space, to imply that the
dimensionality of the problem will not be reduced.
"""
class EfficiencyWarning(UserWarning):
"""Warning used to notify the user of inefficient computation.
This warning notifies the user that the efficiency may not be optimal due
to some reason which may be included as a part of the warning message.
This may be subclassed into a more specific Warning class.
"""
class FitFailedWarning(RuntimeWarning):
"""Warning class used if there is an error while fitting the estimator.
This Warning is used in meta estimators GridSearchCV and RandomizedSearchCV
and the cross-validation helper function cross_val_score to warn when there
is an error while fitting the estimator.
Examples
--------
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import LinearSVC
>>> from sklearn.exceptions import FitFailedWarning
>>> import warnings
>>> warnings.simplefilter('always', FitFailedWarning)
>>> gs = GridSearchCV(LinearSVC(), {'C': [-1, -2]}, error_score=0)
>>> X, y = [[1, 2], [3, 4], [5, 6], [7, 8], [8, 9]], [0, 0, 0, 1, 1]
>>> with warnings.catch_warnings(record=True) as w:
... try:
... gs.fit(X, y) # This will raise a ValueError since C is < 0
... except ValueError:
... pass
... print(repr(w[-1].message))
... # doctest: +NORMALIZE_WHITESPACE
FitFailedWarning("Classifier fit failed. The score on this train-test
partition for these parameters will be set to 0.000000. Details:
\\nValueError('Penalty term must be positive; got (C=-2)',)",)
"""
class NonBLASDotWarning(EfficiencyWarning):
"""Warning used when the dot operation does not use BLAS.
This warning is used to notify the user that BLAS was not used for dot
operation and hence the efficiency may be affected.
"""
class UndefinedMetricWarning(UserWarning):
"""Warning used when the metric is invalid"""
| mit |
AstroFloyd/LearningPython | 3D_plotting/sphere.py | 1 | 2168 | #!/bin/env python3
# https://stackoverflow.com/a/32427177/1386750
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Define constants:
r2d = 180 / np.pi
d2r = np.pi / 180
# Choose projection:
vpAlt = 10.0 * d2r
vpAz = 80.0 * d2r
# Setup plot:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
### SPHERE ###
# Create a sphere:
r = 1
phi = np.linspace(0, 2*np.pi, 100) # Azimuthal coordinate
theta = np.linspace(0, np.pi, 100) # Altitude coordinate
x = r * np.outer(np.cos(phi), np.sin(theta))
y = r * np.outer(np.sin(phi), np.sin(theta))
z = r * np.outer(np.ones(np.size(phi)), np.cos(theta))
# Plot sphere surface:
ax.plot_surface(x, y, z, rstride=2, cstride=4, color='b', linewidth=0, alpha=0.5)
### EQUATOR ###
# Plot whole equator, dashed:
ax.plot(np.sin(phi), np.cos(phi), 0, color='k', linestyle='dashed')
# Overplot equator, front:
eq_front = np.linspace(0, np.pi, 100)
ax.plot( np.sin(eq_front), np.cos(eq_front), 0, color='k') # Circle with z=0
### MERIDIAN ###
# Calculate vectors for meridian:
a = np.array([-np.sin(vpAlt), 0, np.cos(vpAlt)])
b = np.array([0, 1, 0])
b = b * np.cos(vpAz) + np.cross(a, b) * np.sin(vpAz) + a * np.dot(a, b) * (1 - np.cos(vpAz))
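The update of `b` above is Rodrigues' rotation formula (rotate `b` about the unit axis `a` by the angle `vpAz`); a standalone sanity check on toy vectors, independent of the plot:

```python
import numpy as np

def rodrigues(v, axis, angle):
    # Rotate v about the unit vector `axis` by `angle` radians.
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

rot = rodrigues(np.array([0.0, 1.0, 0.0]),
                np.array([0.0, 0.0, 1.0]),
                np.pi / 2)  # 90 deg about z: y-axis -> -x-axis
```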
# Plot whole meridian, dashed:
ax.plot( a[0] * np.sin(phi) + b[0] * np.cos(phi), b[1] * np.cos(phi), a[2] * np.sin(phi) + b[2] * np.cos(phi), color='k', linestyle='dashed')
# Overplot meridian, front:
meri_front = np.linspace(1/2*np.pi, 3/2*np.pi, 100) # 1/2 pi - 3/2 pi
ax.plot( a[0] * np.sin(meri_front) + b[0] * np.cos(meri_front), b[1] * np.cos(meri_front), a[2] * np.sin(meri_front) + b[2] * np.cos(meri_front), color='k')
### FINISH PLOT ###
# Choose projection angle:
ax.view_init(elev=vpAlt*r2d, azim=0)
ax.axis('off')
# Force narrow margins:
pllim = r*0.6
ax.set_aspect('equal') # Set axes to a 'square grid' by changing the x,y limits to match image size - do this before setting ranges? - this works for x-y only?
ax.set_xlim3d(-pllim,pllim)
ax.set_ylim3d(-pllim,pllim)
ax.set_zlim3d(-pllim,pllim)
#plt.show()
plt.tight_layout()
plt.savefig('sphere.png')
plt.close()
| gpl-3.0 |
guoxiaolongzte/spark | dev/sparktestsupport/modules.py | 6 | 15623 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from functools import total_ordering
import itertools
import re
all_modules = []
@total_ordering
class Module(object):
"""
A module is the basic abstraction in our test runner script. Each module consists of a set
of source files, a set of test commands, and a set of dependencies on other modules. We use
modules to define a dependency graph that let us determine which tests to run based on which
files have changed.
"""
def __init__(self, name, dependencies, source_file_regexes, build_profile_flags=(), environ={},
sbt_test_goals=(), python_test_goals=(), blacklisted_python_implementations=(),
test_tags=(), should_run_r_tests=False, should_run_build_tests=False):
"""
Define a new module.
:param name: A short module name, for display in logging and error messages.
:param dependencies: A set of dependencies for this module. This should only include direct
dependencies; transitive dependencies are resolved automatically.
:param source_file_regexes: a set of regexes that match source files belonging to this
module. These regexes are applied by attempting to match at the beginning of the
filename strings.
:param build_profile_flags: A set of profile flags that should be passed to Maven or SBT in
order to build and test this module (e.g. '-PprofileName').
:param environ: A dict of environment variables that should be set when files in this
module are changed.
:param sbt_test_goals: A set of SBT test goals for testing this module.
:param python_test_goals: A set of Python test goals for testing this module.
:param blacklisted_python_implementations: A set of Python implementations that are not
supported by this module's Python components. The values in this set should match
strings returned by Python's `platform.python_implementation()`.
:param test_tags: A set of tags that will be excluded when running unit tests if the module
is not explicitly changed.
:param should_run_r_tests: If true, changes in this module will trigger all R tests.
:param should_run_build_tests: If true, changes in this module will trigger build tests.
"""
self.name = name
self.dependencies = dependencies
self.source_file_prefixes = source_file_regexes
self.sbt_test_goals = sbt_test_goals
self.build_profile_flags = build_profile_flags
self.environ = environ
self.python_test_goals = python_test_goals
self.blacklisted_python_implementations = blacklisted_python_implementations
self.test_tags = test_tags
self.should_run_r_tests = should_run_r_tests
self.should_run_build_tests = should_run_build_tests
self.dependent_modules = set()
for dep in dependencies:
dep.dependent_modules.add(self)
all_modules.append(self)
def contains_file(self, filename):
return any(re.match(p, filename) for p in self.source_file_prefixes)
def __repr__(self):
return "Module<%s>" % self.name
def __lt__(self, other):
return self.name < other.name
def __eq__(self, other):
return self.name == other.name
def __ne__(self, other):
return not (self.name == other.name)
def __hash__(self):
return hash(self.name)
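The `dependent_modules` bookkeeping in `__init__` is what lets the test runner walk downstream from a changed module; a standalone sketch of that traversal with a minimal stand-in class (not the `Module` class itself):

```python
class Node(object):
    """Minimal stand-in mirroring Module's dependency bookkeeping."""
    def __init__(self, name, dependencies=()):
        self.name = name
        self.dependent_modules = set()
        for dep in dependencies:
            dep.dependent_modules.add(self)

def downstream(node):
    """All transitive dependents of node (modules whose tests must re-run)."""
    seen = set()
    stack = [node]
    while stack:
        for child in stack.pop().dependent_modules:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

core = Node("core")
sql = Node("sql", [core])
ml = Node("ml", [sql])
names = sorted(n.name for n in downstream(core))  # changing core re-runs both
```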
tags = Module(
name="tags",
dependencies=[],
source_file_regexes=[
"common/tags/",
]
)
catalyst = Module(
name="catalyst",
dependencies=[tags],
source_file_regexes=[
"sql/catalyst/",
],
sbt_test_goals=[
"catalyst/test",
],
)
sql = Module(
name="sql",
dependencies=[catalyst],
source_file_regexes=[
"sql/core/",
],
sbt_test_goals=[
"sql/test",
],
)
hive = Module(
name="hive",
dependencies=[sql],
source_file_regexes=[
"sql/hive/",
"bin/spark-sql",
],
build_profile_flags=[
"-Phive",
],
sbt_test_goals=[
"hive/test",
],
test_tags=[
"org.apache.spark.tags.ExtendedHiveTest"
]
)
repl = Module(
name="repl",
dependencies=[hive],
source_file_regexes=[
"repl/",
],
sbt_test_goals=[
"repl/test",
],
)
hive_thriftserver = Module(
name="hive-thriftserver",
dependencies=[hive],
source_file_regexes=[
"sql/hive-thriftserver",
"sbin/start-thriftserver.sh",
],
build_profile_flags=[
"-Phive-thriftserver",
],
sbt_test_goals=[
"hive-thriftserver/test",
]
)
avro = Module(
name="avro",
dependencies=[sql],
source_file_regexes=[
"external/avro",
],
sbt_test_goals=[
"avro/test",
]
)
sql_kafka = Module(
name="sql-kafka-0-10",
dependencies=[sql],
source_file_regexes=[
"external/kafka-0-10-sql",
],
sbt_test_goals=[
"sql-kafka-0-10/test",
]
)
sketch = Module(
name="sketch",
dependencies=[tags],
source_file_regexes=[
"common/sketch/",
],
sbt_test_goals=[
"sketch/test"
]
)
graphx = Module(
name="graphx",
dependencies=[tags],
source_file_regexes=[
"graphx/",
],
sbt_test_goals=[
"graphx/test"
]
)
streaming = Module(
name="streaming",
dependencies=[tags],
source_file_regexes=[
"streaming",
],
sbt_test_goals=[
"streaming/test",
]
)
# Don't set the dependencies because changes in other modules should not trigger Kinesis tests.
# Kinesis tests depend on the external Amazon Kinesis service. We should run these tests only when
# files in streaming_kinesis_asl are changed, so that if Kinesis experiences an outage, we don't
# fail other PRs.
streaming_kinesis_asl = Module(
name="streaming-kinesis-asl",
dependencies=[tags],
source_file_regexes=[
"external/kinesis-asl/",
"external/kinesis-asl-assembly/",
],
build_profile_flags=[
"-Pkinesis-asl",
],
environ={
"ENABLE_KINESIS_TESTS": "1"
},
sbt_test_goals=[
"streaming-kinesis-asl/test",
]
)
streaming_kafka_0_10 = Module(
name="streaming-kafka-0-10",
dependencies=[streaming],
source_file_regexes=[
# The ending "/" is necessary otherwise it will include "sql-kafka" codes
"external/kafka-0-10/",
"external/kafka-0-10-assembly",
],
sbt_test_goals=[
"streaming-kafka-0-10/test",
]
)
mllib_local = Module(
name="mllib-local",
dependencies=[tags],
source_file_regexes=[
"mllib-local",
],
sbt_test_goals=[
"mllib-local/test",
]
)
mllib = Module(
name="mllib",
dependencies=[mllib_local, streaming, sql],
source_file_regexes=[
"data/mllib/",
"mllib/",
],
sbt_test_goals=[
"mllib/test",
]
)
examples = Module(
name="examples",
dependencies=[graphx, mllib, streaming, hive],
source_file_regexes=[
"examples/",
],
sbt_test_goals=[
"examples/test",
]
)
pyspark_core = Module(
name="pyspark-core",
dependencies=[],
source_file_regexes=[
"python/(?!pyspark/(ml|mllib|sql|streaming))"
],
python_test_goals=[
# doctests
"pyspark.rdd",
"pyspark.context",
"pyspark.conf",
"pyspark.broadcast",
"pyspark.accumulators",
"pyspark.serializers",
"pyspark.profiler",
"pyspark.shuffle",
"pyspark.util",
# unittests
"pyspark.tests.test_appsubmit",
"pyspark.tests.test_broadcast",
"pyspark.tests.test_conf",
"pyspark.tests.test_context",
"pyspark.tests.test_daemon",
"pyspark.tests.test_join",
"pyspark.tests.test_profiler",
"pyspark.tests.test_rdd",
"pyspark.tests.test_readwrite",
"pyspark.tests.test_serializers",
"pyspark.tests.test_shuffle",
"pyspark.tests.test_taskcontext",
"pyspark.tests.test_util",
"pyspark.tests.test_worker",
]
)
pyspark_sql = Module(
name="pyspark-sql",
dependencies=[pyspark_core, hive],
source_file_regexes=[
"python/pyspark/sql"
],
python_test_goals=[
# doctests
"pyspark.sql.types",
"pyspark.sql.context",
"pyspark.sql.session",
"pyspark.sql.conf",
"pyspark.sql.catalog",
"pyspark.sql.column",
"pyspark.sql.dataframe",
"pyspark.sql.group",
"pyspark.sql.functions",
"pyspark.sql.readwriter",
"pyspark.sql.streaming",
"pyspark.sql.udf",
"pyspark.sql.window",
# unittests
"pyspark.sql.tests.test_appsubmit",
"pyspark.sql.tests.test_arrow",
"pyspark.sql.tests.test_catalog",
"pyspark.sql.tests.test_column",
"pyspark.sql.tests.test_conf",
"pyspark.sql.tests.test_context",
"pyspark.sql.tests.test_dataframe",
"pyspark.sql.tests.test_datasources",
"pyspark.sql.tests.test_functions",
"pyspark.sql.tests.test_group",
"pyspark.sql.tests.test_pandas_udf",
"pyspark.sql.tests.test_pandas_udf_grouped_agg",
"pyspark.sql.tests.test_pandas_udf_grouped_map",
"pyspark.sql.tests.test_pandas_udf_scalar",
"pyspark.sql.tests.test_pandas_udf_window",
"pyspark.sql.tests.test_readwriter",
"pyspark.sql.tests.test_serde",
"pyspark.sql.tests.test_session",
"pyspark.sql.tests.test_streaming",
"pyspark.sql.tests.test_types",
"pyspark.sql.tests.test_udf",
"pyspark.sql.tests.test_utils",
]
)
pyspark_streaming = Module(
name="pyspark-streaming",
dependencies=[
pyspark_core,
streaming,
streaming_kinesis_asl
],
source_file_regexes=[
"python/pyspark/streaming"
],
python_test_goals=[
# doctests
"pyspark.streaming.util",
# unittests
"pyspark.streaming.tests.test_context",
"pyspark.streaming.tests.test_dstream",
"pyspark.streaming.tests.test_kinesis",
"pyspark.streaming.tests.test_listener",
]
)
pyspark_mllib = Module(
name="pyspark-mllib",
dependencies=[pyspark_core, pyspark_streaming, pyspark_sql, mllib],
source_file_regexes=[
"python/pyspark/mllib"
],
python_test_goals=[
# doctests
"pyspark.mllib.classification",
"pyspark.mllib.clustering",
"pyspark.mllib.evaluation",
"pyspark.mllib.feature",
"pyspark.mllib.fpm",
"pyspark.mllib.linalg.__init__",
"pyspark.mllib.linalg.distributed",
"pyspark.mllib.random",
"pyspark.mllib.recommendation",
"pyspark.mllib.regression",
"pyspark.mllib.stat._statistics",
"pyspark.mllib.stat.KernelDensity",
"pyspark.mllib.tree",
"pyspark.mllib.util",
# unittests
"pyspark.mllib.tests.test_algorithms",
"pyspark.mllib.tests.test_feature",
"pyspark.mllib.tests.test_linalg",
"pyspark.mllib.tests.test_stat",
"pyspark.mllib.tests.test_streaming_algorithms",
"pyspark.mllib.tests.test_util",
],
blacklisted_python_implementations=[
"PyPy" # Skip these tests under PyPy since they require numpy and it isn't available there
]
)
pyspark_ml = Module(
name="pyspark-ml",
dependencies=[pyspark_core, pyspark_mllib],
source_file_regexes=[
"python/pyspark/ml/"
],
python_test_goals=[
# doctests
"pyspark.ml.classification",
"pyspark.ml.clustering",
"pyspark.ml.evaluation",
"pyspark.ml.feature",
"pyspark.ml.fpm",
"pyspark.ml.image",
"pyspark.ml.linalg.__init__",
"pyspark.ml.recommendation",
"pyspark.ml.regression",
"pyspark.ml.stat",
"pyspark.ml.tuning",
# unittests
"pyspark.ml.tests.test_algorithms",
"pyspark.ml.tests.test_base",
"pyspark.ml.tests.test_evaluation",
"pyspark.ml.tests.test_feature",
"pyspark.ml.tests.test_image",
"pyspark.ml.tests.test_linalg",
"pyspark.ml.tests.test_param",
"pyspark.ml.tests.test_persistence",
"pyspark.ml.tests.test_pipeline",
"pyspark.ml.tests.test_stat",
"pyspark.ml.tests.test_training_summary",
"pyspark.ml.tests.test_tuning",
"pyspark.ml.tests.test_wrapper",
],
blacklisted_python_implementations=[
"PyPy" # Skip these tests under PyPy since they require numpy and it isn't available there
]
)
sparkr = Module(
name="sparkr",
dependencies=[hive, mllib],
source_file_regexes=[
"R/",
],
should_run_r_tests=True
)
docs = Module(
name="docs",
dependencies=[],
source_file_regexes=[
"docs/",
]
)
build = Module(
name="build",
dependencies=[],
source_file_regexes=[
".*pom.xml",
"dev/test-dependencies.sh",
],
should_run_build_tests=True
)
yarn = Module(
name="yarn",
dependencies=[],
source_file_regexes=[
"resource-managers/yarn/",
"common/network-yarn/",
],
build_profile_flags=["-Pyarn"],
sbt_test_goals=[
"yarn/test",
"network-yarn/test",
],
test_tags=[
"org.apache.spark.tags.ExtendedYarnTest"
]
)
mesos = Module(
name="mesos",
dependencies=[],
source_file_regexes=["resource-managers/mesos/"],
build_profile_flags=["-Pmesos"],
sbt_test_goals=["mesos/test"]
)
kubernetes = Module(
name="kubernetes",
dependencies=[],
source_file_regexes=["resource-managers/kubernetes"],
build_profile_flags=["-Pkubernetes"],
sbt_test_goals=["kubernetes/test"]
)
spark_ganglia_lgpl = Module(
name="spark-ganglia-lgpl",
dependencies=[],
build_profile_flags=["-Pspark-ganglia-lgpl"],
source_file_regexes=[
"external/spark-ganglia-lgpl",
]
)
# The root module is a dummy module which is used to run all of the tests.
# No other modules should directly depend on this module.
root = Module(
name="root",
dependencies=[build], # Changes to build should trigger all tests.
source_file_regexes=[],
# In order to run all of the tests, enable every test profile:
build_profile_flags=list(set(
itertools.chain.from_iterable(m.build_profile_flags for m in all_modules))),
sbt_test_goals=[
"test",
],
python_test_goals=list(itertools.chain.from_iterable(m.python_test_goals for m in all_modules)),
should_run_r_tests=True,
should_run_build_tests=True
)
| apache-2.0 |
rafaelmds/fatiando | cookbook/seismic_wavefd_scalar.py | 7 | 2067 | """
Seismic: 2D finite difference simulation of scalar wave propagation.
Diffraction example in a cylindrical wedge model. Based on:
R. M. Alford, K. R. Kelly and D. M. Boore -
Accuracy of finite-difference modeling of the acoustic wave equation.
Geophysics 1974
"""
import numpy as np
from matplotlib import animation
from fatiando.seismic import wavefd
from fatiando.vis import mpl
# Set the parameters of the finite difference grid
shape = (200, 200)
ds = 100. # spacing
area = [0, shape[0] * ds, 0, shape[1] * ds]
# Make the wave velocity model: 6000 m/s everywhere except a zero-velocity wedge
velocity = np.zeros(shape) + 6000.
velocity[100:, 100:] = 0.
fc = 15.
sources = [wavefd.GaussSource(125 * ds, 75 * ds, area, shape, 1., fc)]
dt = wavefd.scalar_maxdt(area, shape, np.max(velocity))
duration = 2.5
maxit = int(duration / dt)
stations = [[75 * ds, 125 * ds]] # x, z coordinate of the seismometer
snapshots = 3 # every 3 iterations plots one
simulation = wavefd.scalar(
velocity, area, dt, maxit, sources, stations, snapshots)
# This part makes an animation using matplotlibs animation API
background = (velocity - 4000) * 10 ** -1
fig = mpl.figure(figsize=(8, 6))
mpl.subplots_adjust(right=0.98, left=0.11, hspace=0.5, top=0.93)
mpl.subplot2grid((4, 3), (0, 0), colspan=3, rowspan=3)
wavefield = mpl.imshow(np.zeros_like(velocity), extent=area,
cmap=mpl.cm.gray_r, vmin=-1000, vmax=1000)
mpl.points(stations, '^b', size=8)
mpl.ylim(area[2:][::-1])
mpl.xlabel('x (km)')
mpl.ylabel('z (km)')
mpl.m2km()
mpl.subplot2grid((4, 3), (3, 0), colspan=3)
seismogram1, = mpl.plot([], [], '-k')
mpl.xlim(0, duration)
mpl.ylim(-200, 200)
mpl.ylabel('Amplitude')
times = np.linspace(0, dt * maxit, maxit)
# This function updates the plot every few timesteps
def animate(i):
t, u, seismogram = simulation.next()
seismogram1.set_data(times[:t + 1], seismogram[0][:t + 1])
wavefield.set_array(background[::-1] + u[::-1])
return wavefield, seismogram1
anim = animation.FuncAnimation(
fig, animate, frames=maxit / snapshots, interval=1)
mpl.show()
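The time step returned by `wavefd.scalar_maxdt` keeps the explicit scheme stable. The usual back-of-the-envelope CFL reasoning (a wavefront must not cross more than one grid cell per step) can be sketched as follows; the function name and the exact safety factor here are assumptions for illustration, not fatiando's actual implementation:

```python
import math

def naive_scalar_maxdt(ds, maxvel, ndim=2):
    # CFL-style bound for an explicit scalar FD scheme: in ndim dimensions
    # the fastest wave crosses a cell diagonal of length ds * sqrt(ndim),
    # so require dt * maxvel * sqrt(ndim) <= ds.  Real codes multiply in
    # an extra scheme-specific safety factor.
    return ds / (maxvel * math.sqrt(ndim))

# With the grid above (ds = 100 m, vmax = 6000 m/s) this gives roughly 0.012 s.
print(naive_scalar_maxdt(100.0, 6000.0))
```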
| bsd-3-clause |
kelseyoo14/Wander | venv_2_7/lib/python2.7/site-packages/pandas/tests/test_internals.py | 9 | 45145 | # -*- coding: utf-8 -*-
# pylint: disable=W0102
from datetime import datetime, date
import nose
import numpy as np
import re
import itertools
from pandas import Index, MultiIndex, DataFrame, DatetimeIndex, Series, Categorical
from pandas.compat import OrderedDict, lrange
from pandas.sparse.array import SparseArray
from pandas.core.internals import (BlockPlacement, SingleBlockManager, make_block,
BlockManager)
import pandas.core.common as com
import pandas.core.internals as internals
import pandas.util.testing as tm
import pandas as pd
from pandas.util.testing import (
assert_almost_equal, assert_frame_equal, randn, assert_series_equal)
from pandas.compat import zip, u
def assert_block_equal(left, right):
assert_almost_equal(left.values, right.values)
assert(left.dtype == right.dtype)
assert_almost_equal(left.mgr_locs, right.mgr_locs)
def get_numeric_mat(shape):
arr = np.arange(shape[0])
return np.lib.stride_tricks.as_strided(
x=arr, shape=shape,
strides=(arr.itemsize,) + (0,) * (len(shape) - 1)).copy()
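`get_numeric_mat` relies on a zero byte-stride: `as_strided` re-reads the same memory along the extra axes, so a 1-D `arange` is expanded to the full shape without copying (the trailing `.copy()` then materialises a real array). A tiny illustration of the same trick:

```python
import numpy as np

arr = np.arange(3)
# A stride of 0 bytes along axis 1 re-reads the same element, so row i
# of the view is arr[i] repeated twice -- no data is copied.
view = np.lib.stride_tricks.as_strided(
    arr, shape=(3, 2), strides=(arr.itemsize, 0))
print(view.tolist())  # [[0, 0], [1, 1], [2, 2]]
```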
N = 10
def create_block(typestr, placement, item_shape=None, num_offset=0):
"""
Supported typestr:
* float, f8, f4, f2
* int, i8, i4, i2, i1
* uint, u8, u4, u2, u1
* complex, c16, c8
* bool
* object, string, O
* datetime, dt, M8[ns], M8[ns, tz]
* timedelta, td, m8[ns]
* sparse (SparseArray with fill_value=0.0)
* sparse_na (SparseArray with fill_value=np.nan)
* category, category2
"""
placement = BlockPlacement(placement)
num_items = len(placement)
if item_shape is None:
item_shape = (N,)
shape = (num_items,) + item_shape
mat = get_numeric_mat(shape)
if typestr in ('float', 'f8', 'f4', 'f2',
'int', 'i8', 'i4', 'i2', 'i1',
'uint', 'u8', 'u4', 'u2', 'u1'):
values = mat.astype(typestr) + num_offset
elif typestr in ('complex', 'c16', 'c8'):
values = 1.j * (mat.astype(typestr) + num_offset)
elif typestr in ('object', 'string', 'O'):
values = np.reshape(['A%d' % i for i in mat.ravel() + num_offset],
shape)
elif typestr in ('b','bool',):
values = np.ones(shape, dtype=np.bool_)
elif typestr in ('datetime', 'dt', 'M8[ns]'):
values = (mat * 1e9).astype('M8[ns]')
elif typestr.startswith('M8[ns'):
# datetime with tz
m = re.search('M8\[ns,\s*(\w+\/?\w*)\]', typestr)
assert m is not None, "incompatible typestr -> {0}".format(typestr)
tz = m.groups()[0]
assert num_items == 1, "must have only 1 item for a tz-aware block"
values = DatetimeIndex(np.arange(N) * 1e9, tz=tz)
elif typestr in ('timedelta', 'td', 'm8[ns]'):
values = (mat * 1).astype('m8[ns]')
elif typestr in ('category',):
values = Categorical([1,1,2,2,3,3,3,3,4,4])
elif typestr in ('category2',):
values = Categorical(['a','a','a','a','b','b','c','c','c','d'])
elif typestr in ('sparse', 'sparse_na'):
# FIXME: doesn't support num_rows != 10
assert shape[-1] == 10
assert all(s == 1 for s in shape[:-1])
if typestr.endswith('_na'):
fill_value = np.nan
else:
fill_value = 0.0
values = SparseArray([fill_value, fill_value, 1, 2, 3, fill_value,
4, 5, fill_value, 6], fill_value=fill_value)
arr = values.sp_values.view()
arr += (num_offset - 1)
else:
raise ValueError('Unsupported typestr: "%s"' % typestr)
return make_block(values, placement=placement, ndim=len(shape))
def create_single_mgr(typestr, num_rows=None):
if num_rows is None:
num_rows = N
return SingleBlockManager(
create_block(typestr, placement=slice(0, num_rows), item_shape=()),
np.arange(num_rows))
def create_mgr(descr, item_shape=None):
"""
Construct BlockManager from string description.
The string description syntax is similar to the np.matrix initializer and
looks like this::
a,b,c: f8; d,e,f: i8
Rules are rather simple:
* see list of supported datatypes in `create_block` method
* components are semicolon-separated
* each component is `NAME,NAME,NAME: DTYPE_ID`
* whitespace around colons & semicolons is removed
* components with the same DTYPE_ID are combined into a single block
* to force multiple blocks with same dtype, use '-SUFFIX'::
'a:f8-1; b:f8-2; c:f8-foobar'
"""
if item_shape is None:
item_shape = (N,)
offset = 0
mgr_items = []
block_placements = OrderedDict()
for d in descr.split(';'):
d = d.strip()
names, blockstr = d.partition(':')[::2]
blockstr = blockstr.strip()
names = names.strip().split(',')
mgr_items.extend(names)
placement = list(np.arange(len(names)) + offset)
try:
block_placements[blockstr].extend(placement)
except KeyError:
block_placements[blockstr] = placement
offset += len(names)
mgr_items = Index(mgr_items)
blocks = []
num_offset = 0
for blockstr, placement in block_placements.items():
typestr = blockstr.split('-')[0]
blocks.append(create_block(typestr, placement, item_shape=item_shape,
num_offset=num_offset,))
num_offset += len(placement)
return BlockManager(sorted(blocks, key=lambda b: b.mgr_locs[0]),
[mgr_items] + [np.arange(n) for n in item_shape])
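The placement bookkeeping inside `create_mgr` can be shown on its own. The sketch below reimplements just the descr-parsing rules from the docstring (names before `:`, `;`-separated components, same dtype id merged into one block) without any pandas objects:

```python
from collections import OrderedDict

def parse_descr(descr):
    # Returns {dtype_id: [column positions]} following create_mgr's rules.
    offset = 0
    placements = OrderedDict()
    for component in descr.split(';'):
        names, _, blockstr = component.strip().partition(':')
        names = [n.strip() for n in names.split(',')]
        # Components with the same dtype id accumulate into one placement.
        placements.setdefault(blockstr.strip(), []).extend(
            range(offset, offset + len(names)))
        offset += len(names)
    return placements

print(parse_descr('a,b,c: f8; d,e: i8; f: f8'))
```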
class TestBlock(tm.TestCase):
_multiprocess_can_split_ = True
def setUp(self):
# self.fblock = get_float_ex() # a,c,e
# self.cblock = get_complex_ex() #
# self.oblock = get_obj_ex()
# self.bool_block = get_bool_ex()
# self.int_block = get_int_ex()
self.fblock = create_block('float', [0, 2, 4])
self.cblock = create_block('complex', [7])
self.oblock = create_block('object', [1, 3])
self.bool_block = create_block('bool', [5])
self.int_block = create_block('int', [6])
def test_constructor(self):
int32block = create_block('i4', [0])
self.assertEqual(int32block.dtype, np.int32)
def test_pickle(self):
def _check(blk):
assert_block_equal(self.round_trip_pickle(blk), blk)
_check(self.fblock)
_check(self.cblock)
_check(self.oblock)
_check(self.bool_block)
def test_mgr_locs(self):
assert_almost_equal(self.fblock.mgr_locs, [0, 2, 4])
def test_attrs(self):
self.assertEqual(self.fblock.shape, self.fblock.values.shape)
self.assertEqual(self.fblock.dtype, self.fblock.values.dtype)
self.assertEqual(len(self.fblock), len(self.fblock.values))
def test_merge(self):
avals = randn(2, 10)
bvals = randn(2, 10)
ref_cols = Index(['e', 'a', 'b', 'd', 'f'])
ablock = make_block(avals,
ref_cols.get_indexer(['e', 'b']))
bblock = make_block(bvals,
ref_cols.get_indexer(['a', 'd']))
merged = ablock.merge(bblock)
assert_almost_equal(merged.mgr_locs, [0, 1, 2, 3])
assert_almost_equal(merged.values[[0, 2]], avals)
assert_almost_equal(merged.values[[1, 3]], bvals)
# TODO: merge with mixed type?
def test_copy(self):
cop = self.fblock.copy()
self.assertIsNot(cop, self.fblock)
assert_block_equal(self.fblock, cop)
def test_reindex_index(self):
pass
def test_reindex_cast(self):
pass
def test_insert(self):
pass
def test_delete(self):
newb = self.fblock.copy()
newb.delete(0)
assert_almost_equal(newb.mgr_locs, [2, 4])
self.assertTrue((newb.values[0] == 1).all())
newb = self.fblock.copy()
newb.delete(1)
assert_almost_equal(newb.mgr_locs, [0, 4])
self.assertTrue((newb.values[1] == 2).all())
newb = self.fblock.copy()
newb.delete(2)
assert_almost_equal(newb.mgr_locs, [0, 2])
self.assertTrue((newb.values[1] == 1).all())
newb = self.fblock.copy()
self.assertRaises(Exception, newb.delete, 3)
def test_split_block_at(self):
# with dup column support this method was taken out
# GH3679
raise nose.SkipTest("skipping for now")
bs = list(self.fblock.split_block_at('a'))
self.assertEqual(len(bs), 1)
self.assertTrue(np.array_equal(bs[0].items, ['c', 'e']))
bs = list(self.fblock.split_block_at('c'))
self.assertEqual(len(bs), 2)
self.assertTrue(np.array_equal(bs[0].items, ['a']))
self.assertTrue(np.array_equal(bs[1].items, ['e']))
bs = list(self.fblock.split_block_at('e'))
self.assertEqual(len(bs), 1)
self.assertTrue(np.array_equal(bs[0].items, ['a', 'c']))
bblock = get_bool_ex(['f'])
bs = list(bblock.split_block_at('f'))
self.assertEqual(len(bs), 0)
def test_get(self):
pass
def test_set(self):
pass
def test_fillna(self):
pass
def test_repr(self):
pass
class TestDatetimeBlock(tm.TestCase):
_multiprocess_can_split_ = True
def test_try_coerce_arg(self):
block = create_block('datetime', [0])
# coerce None
none_coerced = block._try_coerce_args(block.values, None)[2]
self.assertTrue(pd.Timestamp(none_coerced) is pd.NaT)
# coerce different types of date objects
vals = (np.datetime64('2010-10-10'),
datetime(2010, 10, 10),
date(2010, 10, 10))
for val in vals:
coerced = block._try_coerce_args(block.values, val)[2]
self.assertEqual(np.int64, type(coerced))
self.assertEqual(pd.Timestamp('2010-10-10'), pd.Timestamp(coerced))
class TestBlockManager(tm.TestCase):
_multiprocess_can_split_ = True
def setUp(self):
self.mgr = create_mgr('a: f8; b: object; c: f8; d: object; e: f8;'
'f: bool; g: i8; h: complex')
def test_constructor_corner(self):
pass
def test_attrs(self):
mgr = create_mgr('a,b,c: f8-1; d,e,f: f8-2')
self.assertEqual(mgr.nblocks, 2)
self.assertEqual(len(mgr), 6)
def test_is_mixed_dtype(self):
self.assertFalse(create_mgr('a,b:f8').is_mixed_type)
self.assertFalse(create_mgr('a:f8-1; b:f8-2').is_mixed_type)
self.assertTrue(create_mgr('a,b:f8; c,d: f4').is_mixed_type)
self.assertTrue(create_mgr('a,b:f8; c,d: object').is_mixed_type)
def test_is_indexed_like(self):
mgr1 = create_mgr('a,b: f8')
mgr2 = create_mgr('a:i8; b:bool')
mgr3 = create_mgr('a,b,c: f8')
self.assertTrue(mgr1._is_indexed_like(mgr1))
self.assertTrue(mgr1._is_indexed_like(mgr2))
self.assertTrue(mgr1._is_indexed_like(mgr3))
self.assertFalse(mgr1._is_indexed_like(
mgr1.get_slice(slice(-1), axis=1)))
def test_duplicate_ref_loc_failure(self):
tmp_mgr = create_mgr('a:bool; a: f8')
axes, blocks = tmp_mgr.axes, tmp_mgr.blocks
blocks[0].mgr_locs = np.array([0])
blocks[1].mgr_locs = np.array([0])
# test trying to create block manager with overlapping ref locs
self.assertRaises(AssertionError, BlockManager, blocks, axes)
blocks[0].mgr_locs = np.array([0])
blocks[1].mgr_locs = np.array([1])
mgr = BlockManager(blocks, axes)
mgr.iget(1)
def test_contains(self):
self.assertIn('a', self.mgr)
self.assertNotIn('baz', self.mgr)
def test_pickle(self):
mgr2 = self.round_trip_pickle(self.mgr)
assert_frame_equal(DataFrame(self.mgr), DataFrame(mgr2))
# share ref_items
# self.assertIs(mgr2.blocks[0].ref_items, mgr2.blocks[1].ref_items)
# GH2431
self.assertTrue(hasattr(mgr2, "_is_consolidated"))
self.assertTrue(hasattr(mgr2, "_known_consolidated"))
# reset to False on load
self.assertFalse(mgr2._is_consolidated)
self.assertFalse(mgr2._known_consolidated)
def test_non_unique_pickle(self):
mgr = create_mgr('a,a,a:f8')
mgr2 = self.round_trip_pickle(mgr)
assert_frame_equal(DataFrame(mgr), DataFrame(mgr2))
mgr = create_mgr('a: f8; a: i8')
mgr2 = self.round_trip_pickle(mgr)
assert_frame_equal(DataFrame(mgr), DataFrame(mgr2))
def test_categorical_block_pickle(self):
mgr = create_mgr('a: category')
mgr2 = self.round_trip_pickle(mgr)
assert_frame_equal(DataFrame(mgr), DataFrame(mgr2))
smgr = create_single_mgr('category')
smgr2 = self.round_trip_pickle(smgr)
assert_series_equal(Series(smgr), Series(smgr2))
def test_get_scalar(self):
for item in self.mgr.items:
for i, index in enumerate(self.mgr.axes[1]):
res = self.mgr.get_scalar((item, index))
exp = self.mgr.get(item, fastpath=False)[i]
assert_almost_equal(res, exp)
exp = self.mgr.get(item).internal_values()[i]
assert_almost_equal(res, exp)
def test_get(self):
cols = Index(list('abc'))
values = np.random.rand(3, 3)
block = make_block(values=values.copy(),
placement=np.arange(3))
mgr = BlockManager(blocks=[block], axes=[cols, np.arange(3)])
assert_almost_equal(mgr.get('a', fastpath=False), values[0])
assert_almost_equal(mgr.get('b', fastpath=False), values[1])
assert_almost_equal(mgr.get('c', fastpath=False), values[2])
assert_almost_equal(mgr.get('a').internal_values(), values[0])
assert_almost_equal(mgr.get('b').internal_values(), values[1])
assert_almost_equal(mgr.get('c').internal_values(), values[2])
def test_set(self):
mgr = create_mgr('a,b,c: int', item_shape=(3,))
mgr.set('d', np.array(['foo'] * 3))
mgr.set('b', np.array(['bar'] * 3))
assert_almost_equal(mgr.get('a').internal_values(), [0] * 3)
assert_almost_equal(mgr.get('b').internal_values(), ['bar'] * 3)
assert_almost_equal(mgr.get('c').internal_values(), [2] * 3)
assert_almost_equal(mgr.get('d').internal_values(), ['foo'] * 3)
def test_insert(self):
self.mgr.insert(0, 'inserted', np.arange(N))
self.assertEqual(self.mgr.items[0], 'inserted')
assert_almost_equal(self.mgr.get('inserted'), np.arange(N))
for blk in self.mgr.blocks:
yield self.assertIs, self.mgr.items, blk.ref_items
def test_set_change_dtype(self):
self.mgr.set('baz', np.zeros(N, dtype=bool))
self.mgr.set('baz', np.repeat('foo', N))
self.assertEqual(self.mgr.get('baz').dtype, np.object_)
mgr2 = self.mgr.consolidate()
mgr2.set('baz', np.repeat('foo', N))
self.assertEqual(mgr2.get('baz').dtype, np.object_)
mgr2.set('quux', randn(N).astype(int))
self.assertEqual(mgr2.get('quux').dtype, np.int_)
mgr2.set('quux', randn(N))
self.assertEqual(mgr2.get('quux').dtype, np.float_)
def test_set_change_dtype_slice(self): # GH8850
cols = MultiIndex.from_tuples([('1st','a'), ('2nd','b'), ('3rd','c')])
df = DataFrame([[1.0, 2, 3], [4.0, 5, 6]], columns=cols)
df['2nd'] = df['2nd'] * 2.0
self.assertEqual(sorted(df.blocks.keys()), ['float64', 'int64'])
assert_frame_equal(df.blocks['float64'],
DataFrame([[1.0, 4.0], [4.0, 10.0]], columns=cols[:2]))
assert_frame_equal(df.blocks['int64'],
DataFrame([[3], [6]], columns=cols[2:]))
def test_copy(self):
shallow = self.mgr.copy(deep=False)
# we don't guarantee block ordering
for blk in self.mgr.blocks:
found = False
for cp_blk in shallow.blocks:
if cp_blk.values is blk.values:
found = True
break
self.assertTrue(found)
def test_sparse(self):
mgr = create_mgr('a: sparse-1; b: sparse-2')
# what to test here?
self.assertEqual(mgr.as_matrix().dtype, np.float64)
def test_sparse_mixed(self):
mgr = create_mgr('a: sparse-1; b: sparse-2; c: f8')
self.assertEqual(len(mgr.blocks), 3)
self.assertIsInstance(mgr, BlockManager)
# what to test here?
def test_as_matrix_float(self):
mgr = create_mgr('c: f4; d: f2; e: f8')
self.assertEqual(mgr.as_matrix().dtype, np.float64)
mgr = create_mgr('c: f4; d: f2')
self.assertEqual(mgr.as_matrix().dtype, np.float32)
def test_as_matrix_int_bool(self):
mgr = create_mgr('a: bool-1; b: bool-2')
self.assertEqual(mgr.as_matrix().dtype, np.bool_)
mgr = create_mgr('a: i8-1; b: i8-2; c: i4; d: i2; e: u1')
self.assertEqual(mgr.as_matrix().dtype, np.int64)
mgr = create_mgr('c: i4; d: i2; e: u1')
self.assertEqual(mgr.as_matrix().dtype, np.int32)
def test_as_matrix_datetime(self):
mgr = create_mgr('h: datetime-1; g: datetime-2')
self.assertEqual(mgr.as_matrix().dtype, 'M8[ns]')
def test_as_matrix_datetime_tz(self):
mgr = create_mgr('h: M8[ns, US/Eastern]; g: M8[ns, CET]')
self.assertEqual(mgr.get('h').dtype, 'datetime64[ns, US/Eastern]')
self.assertEqual(mgr.get('g').dtype, 'datetime64[ns, CET]')
self.assertEqual(mgr.as_matrix().dtype, 'object')
def test_astype(self):
# coerce all
mgr = create_mgr('c: f4; d: f2; e: f8')
for t in ['float16', 'float32', 'float64', 'int32', 'int64']:
t = np.dtype(t)
tmgr = mgr.astype(t)
self.assertEqual(tmgr.get('c').dtype.type, t)
self.assertEqual(tmgr.get('d').dtype.type, t)
self.assertEqual(tmgr.get('e').dtype.type, t)
# mixed
mgr = create_mgr('a,b: object; c: bool; d: datetime;'
'e: f4; f: f2; g: f8')
for t in ['float16', 'float32', 'float64', 'int32', 'int64']:
t = np.dtype(t)
tmgr = mgr.astype(t, raise_on_error=False)
self.assertEqual(tmgr.get('c').dtype.type, t)
self.assertEqual(tmgr.get('e').dtype.type, t)
self.assertEqual(tmgr.get('f').dtype.type, t)
self.assertEqual(tmgr.get('g').dtype.type, t)
self.assertEqual(tmgr.get('a').dtype.type, np.object_)
self.assertEqual(tmgr.get('b').dtype.type, np.object_)
if t != np.int64:
self.assertEqual(tmgr.get('d').dtype.type, np.datetime64)
else:
self.assertEqual(tmgr.get('d').dtype.type, t)
def test_convert(self):
def _compare(old_mgr, new_mgr):
""" compare the blocks, numeric compare ==, object don't """
old_blocks = set(old_mgr.blocks)
new_blocks = set(new_mgr.blocks)
self.assertEqual(len(old_blocks), len(new_blocks))
# compare non-numeric
for b in old_blocks:
found = False
for nb in new_blocks:
if (b.values == nb.values).all():
found = True
break
self.assertTrue(found)
for b in new_blocks:
found = False
for ob in old_blocks:
if (b.values == ob.values).all():
found = True
break
self.assertTrue(found)
# noops
mgr = create_mgr('f: i8; g: f8')
new_mgr = mgr.convert()
_compare(mgr,new_mgr)
mgr = create_mgr('a, b: object; f: i8; g: f8')
new_mgr = mgr.convert()
_compare(mgr,new_mgr)
# convert
mgr = create_mgr('a,b,foo: object; f: i8; g: f8')
mgr.set('a', np.array(['1'] * N, dtype=np.object_))
mgr.set('b', np.array(['2.'] * N, dtype=np.object_))
mgr.set('foo', np.array(['foo.'] * N, dtype=np.object_))
new_mgr = mgr.convert(numeric=True)
self.assertEqual(new_mgr.get('a').dtype, np.int64)
self.assertEqual(new_mgr.get('b').dtype, np.float64)
self.assertEqual(new_mgr.get('foo').dtype, np.object_)
self.assertEqual(new_mgr.get('f').dtype, np.int64)
self.assertEqual(new_mgr.get('g').dtype, np.float64)
mgr = create_mgr('a,b,foo: object; f: i4; bool: bool; dt: datetime;'
'i: i8; g: f8; h: f2')
mgr.set('a', np.array(['1'] * N, dtype=np.object_))
mgr.set('b', np.array(['2.'] * N, dtype=np.object_))
mgr.set('foo', np.array(['foo.'] * N, dtype=np.object_))
new_mgr = mgr.convert(numeric=True)
self.assertEqual(new_mgr.get('a').dtype, np.int64)
self.assertEqual(new_mgr.get('b').dtype, np.float64)
self.assertEqual(new_mgr.get('foo').dtype, np.object_)
self.assertEqual(new_mgr.get('f').dtype, np.int32)
self.assertEqual(new_mgr.get('bool').dtype, np.bool_)
self.assertEqual(new_mgr.get('dt').dtype.type, np.datetime64)
self.assertEqual(new_mgr.get('i').dtype, np.int64)
self.assertEqual(new_mgr.get('g').dtype, np.float64)
self.assertEqual(new_mgr.get('h').dtype, np.float16)
def test_interleave(self):
# self
for dtype in ['f8','i8','object','bool','complex','M8[ns]','m8[ns]']:
mgr = create_mgr('a: {0}'.format(dtype))
self.assertEqual(mgr.as_matrix().dtype,dtype)
mgr = create_mgr('a: {0}; b: {0}'.format(dtype))
self.assertEqual(mgr.as_matrix().dtype,dtype)
# will be converted according to the actual dtype of the underlying values
mgr = create_mgr('a: category')
self.assertEqual(mgr.as_matrix().dtype,'i8')
mgr = create_mgr('a: category; b: category')
self.assertEqual(mgr.as_matrix().dtype,'i8')
mgr = create_mgr('a: category; b: category2')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: category2')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: category2; b: category2')
self.assertEqual(mgr.as_matrix().dtype,'object')
# combinations
mgr = create_mgr('a: f8')
self.assertEqual(mgr.as_matrix().dtype,'f8')
mgr = create_mgr('a: f8; b: i8')
self.assertEqual(mgr.as_matrix().dtype,'f8')
mgr = create_mgr('a: f4; b: i8')
self.assertEqual(mgr.as_matrix().dtype,'f4')
mgr = create_mgr('a: f4; b: i8; d: object')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: bool; b: i8')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: complex')
self.assertEqual(mgr.as_matrix().dtype,'complex')
mgr = create_mgr('a: f8; b: category')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: M8[ns]; b: category')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: M8[ns]; b: bool')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: M8[ns]; b: i8')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: m8[ns]; b: bool')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: m8[ns]; b: i8')
self.assertEqual(mgr.as_matrix().dtype,'object')
mgr = create_mgr('a: M8[ns]; b: m8[ns]')
self.assertEqual(mgr.as_matrix().dtype,'object')
def test_interleave_non_unique_cols(self):
df = DataFrame([
[pd.Timestamp('20130101'), 3.5],
[pd.Timestamp('20130102'), 4.5]],
columns=['x', 'x'],
index=[1, 2])
df_unique = df.copy()
df_unique.columns = ['x', 'y']
self.assertEqual(df_unique.values.shape, df.values.shape)
tm.assert_numpy_array_equal(df_unique.values[0], df.values[0])
tm.assert_numpy_array_equal(df_unique.values[1], df.values[1])
def test_consolidate(self):
pass
def test_consolidate_ordering_issues(self):
self.mgr.set('f', randn(N))
self.mgr.set('d', randn(N))
self.mgr.set('b', randn(N))
self.mgr.set('g', randn(N))
self.mgr.set('h', randn(N))
cons = self.mgr.consolidate()
self.assertEqual(cons.nblocks, 1)
assert_almost_equal(cons.blocks[0].mgr_locs,
np.arange(len(cons.items)))
def test_reindex_index(self):
pass
def test_reindex_items(self):
# mgr is not consolidated, f8 & f8-2 blocks
mgr = create_mgr('a: f8; b: i8; c: f8; d: i8; e: f8;'
'f: bool; g: f8-2')
reindexed = mgr.reindex_axis(['g', 'c', 'a', 'd'], axis=0)
self.assertEqual(reindexed.nblocks, 2)
assert_almost_equal(reindexed.items, ['g', 'c', 'a', 'd'])
assert_almost_equal(mgr.get('g',fastpath=False), reindexed.get('g',fastpath=False))
assert_almost_equal(mgr.get('c',fastpath=False), reindexed.get('c',fastpath=False))
assert_almost_equal(mgr.get('a',fastpath=False), reindexed.get('a',fastpath=False))
assert_almost_equal(mgr.get('d',fastpath=False), reindexed.get('d',fastpath=False))
assert_almost_equal(mgr.get('g').internal_values(), reindexed.get('g').internal_values())
assert_almost_equal(mgr.get('c').internal_values(), reindexed.get('c').internal_values())
assert_almost_equal(mgr.get('a').internal_values(), reindexed.get('a').internal_values())
assert_almost_equal(mgr.get('d').internal_values(), reindexed.get('d').internal_values())
def test_multiindex_xs(self):
mgr = create_mgr('a,b,c: f8; d,e,f: i8')
index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
['one', 'two', 'three']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
mgr.set_axis(1, index)
result = mgr.xs('bar', axis=1)
self.assertEqual(result.shape, (6, 2))
self.assertEqual(result.axes[1][0], ('bar', 'one'))
self.assertEqual(result.axes[1][1], ('bar', 'two'))
def test_get_numeric_data(self):
mgr = create_mgr('int: int; float: float; complex: complex;'
'str: object; bool: bool; obj: object; dt: datetime',
item_shape=(3,))
mgr.set('obj', np.array([1, 2, 3], dtype=np.object_))
numeric = mgr.get_numeric_data()
assert_almost_equal(numeric.items, ['int', 'float', 'complex', 'bool'])
assert_almost_equal(mgr.get('float',fastpath=False), numeric.get('float',fastpath=False))
assert_almost_equal(mgr.get('float').internal_values(), numeric.get('float').internal_values())
# Check sharing
numeric.set('float', np.array([100., 200., 300.]))
assert_almost_equal(mgr.get('float',fastpath=False), np.array([100., 200., 300.]))
assert_almost_equal(mgr.get('float').internal_values(), np.array([100., 200., 300.]))
numeric2 = mgr.get_numeric_data(copy=True)
assert_almost_equal(numeric.items, ['int', 'float', 'complex', 'bool'])
numeric2.set('float', np.array([1000., 2000., 3000.]))
assert_almost_equal(mgr.get('float',fastpath=False), np.array([100., 200., 300.]))
assert_almost_equal(mgr.get('float').internal_values(), np.array([100., 200., 300.]))
def test_get_bool_data(self):
mgr = create_mgr('int: int; float: float; complex: complex;'
'str: object; bool: bool; obj: object; dt: datetime',
item_shape=(3,))
mgr.set('obj', np.array([True, False, True], dtype=np.object_))
bools = mgr.get_bool_data()
assert_almost_equal(bools.items, ['bool'])
assert_almost_equal(mgr.get('bool',fastpath=False), bools.get('bool',fastpath=False))
assert_almost_equal(mgr.get('bool').internal_values(), bools.get('bool').internal_values())
bools.set('bool', np.array([True, False, True]))
assert_almost_equal(mgr.get('bool',fastpath=False), [True, False, True])
assert_almost_equal(mgr.get('bool').internal_values(), [True, False, True])
# Check sharing
bools2 = mgr.get_bool_data(copy=True)
bools2.set('bool', np.array([False, True, False]))
assert_almost_equal(mgr.get('bool',fastpath=False), [True, False, True])
assert_almost_equal(mgr.get('bool').internal_values(), [True, False, True])
def test_unicode_repr_doesnt_raise(self):
str_repr = repr(create_mgr(u('b,\u05d0: object')))
def test_missing_unicode_key(self):
df = DataFrame({"a": [1]})
try:
df.ix[:, u("\u05d0")] # should not raise UnicodeEncodeError
except KeyError:
pass # this is the expected exception
def test_equals(self):
# unique items
bm1 = create_mgr('a,b,c: i8-1; d,e,f: i8-2')
bm2 = BlockManager(bm1.blocks[::-1], bm1.axes)
self.assertTrue(bm1.equals(bm2))
bm1 = create_mgr('a,a,a: i8-1; b,b,b: i8-2')
bm2 = BlockManager(bm1.blocks[::-1], bm1.axes)
self.assertTrue(bm1.equals(bm2))
def test_equals_block_order_different_dtypes(self):
# GH 9330
mgr_strings = [
"a:i8;b:f8", # basic case
"a:i8;b:f8;c:c8;d:b", # many types
"a:i8;e:dt;f:td;g:string", # more types
"a:i8;b:category;c:category2;d:category2", # categories
"c:sparse;d:sparse_na;b:f8", # sparse
]
for mgr_string in mgr_strings:
bm = create_mgr(mgr_string)
block_perms = itertools.permutations(bm.blocks)
for bm_perm in block_perms:
bm_this = BlockManager(bm_perm, bm.axes)
self.assertTrue(bm.equals(bm_this))
self.assertTrue(bm_this.equals(bm))
def test_single_mgr_ctor(self):
mgr = create_single_mgr('f8', num_rows=5)
self.assertEqual(mgr.as_matrix().tolist(), [0., 1., 2., 3., 4.])
class TestIndexing(object):
# Nosetests-style data-driven tests.
#
# This test applies different indexing routines to block managers and
# compares the outcome to the result of same operations on np.ndarray.
#
# NOTE: sparse (SparseBlock with fill_value != np.nan) fail a lot of tests
# and are disabled.
MANAGERS = [
create_single_mgr('f8', N),
create_single_mgr('i8', N),
#create_single_mgr('sparse', N),
create_single_mgr('sparse_na', N),
# 2-dim
create_mgr('a,b,c,d,e,f: f8', item_shape=(N,)),
create_mgr('a,b,c,d,e,f: i8', item_shape=(N,)),
create_mgr('a,b: f8; c,d: i8; e,f: string', item_shape=(N,)),
create_mgr('a,b: f8; c,d: i8; e,f: f8', item_shape=(N,)),
#create_mgr('a: sparse', item_shape=(N,)),
create_mgr('a: sparse_na', item_shape=(N,)),
# 3-dim
create_mgr('a,b,c,d,e,f: f8', item_shape=(N, N)),
create_mgr('a,b,c,d,e,f: i8', item_shape=(N, N)),
create_mgr('a,b: f8; c,d: i8; e,f: string', item_shape=(N, N)),
create_mgr('a,b: f8; c,d: i8; e,f: f8', item_shape=(N, N)),
# create_mgr('a: sparse', item_shape=(1, N)),
]
def test_get_slice(self):
def assert_slice_ok(mgr, axis, slobj):
mat = mgr.as_matrix()
# we may be using an ndarray to test slicing and it
# might not be the full length of the axis
if isinstance(slobj, np.ndarray):
ax = mgr.axes[axis]
if len(ax) and len(slobj) and len(slobj) != len(ax):
slobj = np.concatenate([slobj, np.zeros(len(ax)-len(slobj),dtype=bool)])
sliced = mgr.get_slice(slobj, axis=axis)
mat_slobj = (slice(None),) * axis + (slobj,)
assert_almost_equal(mat[mat_slobj], sliced.as_matrix())
assert_almost_equal(mgr.axes[axis][slobj], sliced.axes[axis])
for mgr in self.MANAGERS:
for ax in range(mgr.ndim):
# slice
yield assert_slice_ok, mgr, ax, slice(None)
yield assert_slice_ok, mgr, ax, slice(3)
yield assert_slice_ok, mgr, ax, slice(100)
yield assert_slice_ok, mgr, ax, slice(1, 4)
yield assert_slice_ok, mgr, ax, slice(3, 0, -2)
# boolean mask
yield assert_slice_ok, mgr, ax, np.array([], dtype=np.bool_)
yield (assert_slice_ok, mgr, ax,
np.ones(mgr.shape[ax], dtype=np.bool_))
yield (assert_slice_ok, mgr, ax,
np.zeros(mgr.shape[ax], dtype=np.bool_))
if mgr.shape[ax] >= 3:
yield (assert_slice_ok, mgr, ax,
np.arange(mgr.shape[ax]) % 3 == 0)
yield (assert_slice_ok, mgr, ax,
np.array([True, True, False], dtype=np.bool_))
# fancy indexer
yield assert_slice_ok, mgr, ax, []
yield assert_slice_ok, mgr, ax, lrange(mgr.shape[ax])
if mgr.shape[ax] >= 3:
yield assert_slice_ok, mgr, ax, [0, 1, 2]
yield assert_slice_ok, mgr, ax, [-1, -2, -3]
def test_take(self):
def assert_take_ok(mgr, axis, indexer):
mat = mgr.as_matrix()
taken = mgr.take(indexer, axis)
assert_almost_equal(np.take(mat, indexer, axis),
taken.as_matrix())
assert_almost_equal(mgr.axes[axis].take(indexer),
taken.axes[axis])
for mgr in self.MANAGERS:
for ax in range(mgr.ndim):
# take/fancy indexer
yield assert_take_ok, mgr, ax, []
yield assert_take_ok, mgr, ax, [0, 0, 0]
yield assert_take_ok, mgr, ax, lrange(mgr.shape[ax])
if mgr.shape[ax] >= 3:
yield assert_take_ok, mgr, ax, [0, 1, 2]
yield assert_take_ok, mgr, ax, [-1, -2, -3]
def test_reindex_axis(self):
def assert_reindex_axis_is_ok(mgr, axis, new_labels,
fill_value):
mat = mgr.as_matrix()
indexer = mgr.axes[axis].get_indexer_for(new_labels)
reindexed = mgr.reindex_axis(new_labels, axis,
fill_value=fill_value)
assert_almost_equal(com.take_nd(mat, indexer, axis,
fill_value=fill_value),
reindexed.as_matrix())
assert_almost_equal(reindexed.axes[axis], new_labels)
for mgr in self.MANAGERS:
for ax in range(mgr.ndim):
for fill_value in (None, np.nan, 100.):
yield assert_reindex_axis_is_ok, mgr, ax, [], fill_value
yield (assert_reindex_axis_is_ok, mgr, ax,
mgr.axes[ax], fill_value)
yield (assert_reindex_axis_is_ok, mgr, ax,
mgr.axes[ax][[0, 0, 0]], fill_value)
yield (assert_reindex_axis_is_ok, mgr, ax,
['foo', 'bar', 'baz'], fill_value)
yield (assert_reindex_axis_is_ok, mgr, ax,
['foo', mgr.axes[ax][0], 'baz'], fill_value)
if mgr.shape[ax] >= 3:
yield (assert_reindex_axis_is_ok, mgr, ax,
mgr.axes[ax][:-3], fill_value)
yield (assert_reindex_axis_is_ok, mgr, ax,
mgr.axes[ax][-3::-1], fill_value)
yield (assert_reindex_axis_is_ok, mgr, ax,
mgr.axes[ax][[0, 1, 2, 0, 1, 2]], fill_value)
def test_reindex_indexer(self):
def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer,
fill_value):
mat = mgr.as_matrix()
reindexed_mat = com.take_nd(mat, indexer, axis,
fill_value=fill_value)
reindexed = mgr.reindex_indexer(new_labels, indexer, axis,
fill_value=fill_value)
assert_almost_equal(reindexed_mat, reindexed.as_matrix())
assert_almost_equal(reindexed.axes[axis], new_labels)
for mgr in self.MANAGERS:
for ax in range(mgr.ndim):
for fill_value in (None, np.nan, 100.):
yield (assert_reindex_indexer_is_ok, mgr, ax,
[], [], fill_value)
yield (assert_reindex_indexer_is_ok, mgr, ax,
mgr.axes[ax], np.arange(mgr.shape[ax]), fill_value)
yield (assert_reindex_indexer_is_ok, mgr, ax,
['foo'] * mgr.shape[ax], np.arange(mgr.shape[ax]),
fill_value)
yield (assert_reindex_indexer_is_ok, mgr, ax,
mgr.axes[ax][::-1], np.arange(mgr.shape[ax]),
fill_value)
yield (assert_reindex_indexer_is_ok, mgr, ax,
mgr.axes[ax], np.arange(mgr.shape[ax])[::-1],
fill_value)
yield (assert_reindex_indexer_is_ok, mgr, ax,
['foo', 'bar', 'baz'], [0, 0, 0], fill_value)
yield (assert_reindex_indexer_is_ok, mgr, ax,
['foo', 'bar', 'baz'], [-1, 0, -1], fill_value)
yield (assert_reindex_indexer_is_ok, mgr, ax,
['foo', mgr.axes[ax][0], 'baz'], [-1, -1, -1],
fill_value)
if mgr.shape[ax] >= 3:
yield (assert_reindex_indexer_is_ok, mgr, ax,
['foo', 'bar', 'baz'], [0, 1, 2], fill_value)
# test_get_slice(slice_like, axis)
# take(indexer, axis)
# reindex_axis(new_labels, axis)
# reindex_indexer(new_labels, indexer, axis)
class TestBlockPlacement(tm.TestCase):
_multiprocess_can_split_ = True
def test_slice_len(self):
self.assertEqual(len(BlockPlacement(slice(0, 4))), 4)
self.assertEqual(len(BlockPlacement(slice(0, 4, 2))), 2)
self.assertEqual(len(BlockPlacement(slice(0, 3, 2))), 2)
self.assertEqual(len(BlockPlacement(slice(0, 1, 2))), 1)
self.assertEqual(len(BlockPlacement(slice(1, 0, -1))), 1)
def test_zero_step_raises(self):
self.assertRaises(ValueError, BlockPlacement, slice(1, 1, 0))
self.assertRaises(ValueError, BlockPlacement, slice(1, 2, 0))
def test_unbounded_slice_raises(self):
def assert_unbounded_slice_error(slc):
# assertRaisesRegexp is not available in py2.6
# self.assertRaisesRegexp(ValueError, "unbounded slice",
# lambda: BlockPlacement(slc))
self.assertRaises(ValueError, BlockPlacement, slc)
assert_unbounded_slice_error(slice(None, None))
assert_unbounded_slice_error(slice(10, None))
assert_unbounded_slice_error(slice(None, None, -1))
assert_unbounded_slice_error(slice(None, 10, -1))
# These are "unbounded" because the meaning of a negative index depends on
# the container's shape.
assert_unbounded_slice_error(slice(-1, None))
assert_unbounded_slice_error(slice(None, -1))
assert_unbounded_slice_error(slice(-1, -1))
assert_unbounded_slice_error(slice(-1, None, -1))
assert_unbounded_slice_error(slice(None, -1, -1))
assert_unbounded_slice_error(slice(-1, -1, -1))
def test_not_slice_like_slices(self):
def assert_not_slice_like(slc):
self.assertTrue(not BlockPlacement(slc).is_slice_like)
assert_not_slice_like(slice(0, 0))
assert_not_slice_like(slice(100, 0))
assert_not_slice_like(slice(100, 100, -1))
assert_not_slice_like(slice(0, 100, -1))
self.assertTrue(not BlockPlacement(slice(0, 0)).is_slice_like)
self.assertTrue(not BlockPlacement(slice(100, 100)).is_slice_like)
def test_array_to_slice_conversion(self):
def assert_as_slice_equals(arr, slc):
self.assertEqual(BlockPlacement(arr).as_slice, slc)
assert_as_slice_equals([0], slice(0, 1, 1))
assert_as_slice_equals([100], slice(100, 101, 1))
assert_as_slice_equals([0, 1, 2], slice(0, 3, 1))
assert_as_slice_equals([0, 5, 10], slice(0, 15, 5))
assert_as_slice_equals([0, 100], slice(0, 200, 100))
assert_as_slice_equals([2, 1], slice(2, 0, -1))
assert_as_slice_equals([2, 1, 0], slice(2, None, -1))
assert_as_slice_equals([100, 0], slice(100, None, -100))
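The conversions exercised above can be sketched in pure Python. `maybe_as_slice` below is an illustrative stand-in for BlockPlacement's internal slice-detection logic, not the pandas implementation:

```python
def maybe_as_slice(indices):
    """Return a slice equivalent to `indices` if they are evenly spaced,
    non-negative positions; otherwise return None (not slice-like)."""
    if len(indices) == 0 or any(i < 0 for i in indices):
        return None
    if len(indices) == 1:
        return slice(indices[0], indices[0] + 1, 1)
    step = indices[1] - indices[0]
    if step == 0 or any(b - a != step for a, b in zip(indices, indices[1:])):
        return None
    stop = indices[-1] + step
    # A negative stop cannot express "run past index 0", so use None instead.
    return slice(indices[0], stop if stop >= 0 else None, step)

print(maybe_as_slice([0, 5, 10]))  # slice(0, 15, 5)
print(maybe_as_slice([100, 0]))    # slice(100, None, -100)
print(maybe_as_slice([1, 1, 1]))   # None
```

This mirrors the expected results asserted in the tests above.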
def test_not_slice_like_arrays(self):
def assert_not_slice_like(arr):
self.assertTrue(not BlockPlacement(arr).is_slice_like)
assert_not_slice_like([])
assert_not_slice_like([-1])
assert_not_slice_like([-1, -2, -3])
assert_not_slice_like([-10])
assert_not_slice_like([-1])
assert_not_slice_like([-1, 0, 1, 2])
assert_not_slice_like([-2, 0, 2, 4])
assert_not_slice_like([1, 0, -1])
assert_not_slice_like([1, 1, 1])
def test_slice_iter(self):
self.assertEqual(list(BlockPlacement(slice(0, 3))), [0, 1, 2])
self.assertEqual(list(BlockPlacement(slice(0, 0))), [])
self.assertEqual(list(BlockPlacement(slice(3, 0))), [])
self.assertEqual(list(BlockPlacement(slice(3, 0, -1))), [3, 2, 1])
self.assertEqual(list(BlockPlacement(slice(3, None, -1))),
[3, 2, 1, 0])
def test_slice_to_array_conversion(self):
def assert_as_array_equals(slc, asarray):
tm.assert_numpy_array_equal(
BlockPlacement(slc).as_array,
np.asarray(asarray))
assert_as_array_equals(slice(0, 3), [0, 1, 2])
assert_as_array_equals(slice(0, 0), [])
assert_as_array_equals(slice(3, 0), [])
assert_as_array_equals(slice(3, 0, -1), [3, 2, 1])
assert_as_array_equals(slice(3, None, -1), [3, 2, 1, 0])
assert_as_array_equals(slice(31, None, -10), [31, 21, 11, 1])
def test_blockplacement_add(self):
bpl = BlockPlacement(slice(0, 5))
self.assertEqual(bpl.add(1).as_slice, slice(1, 6, 1))
self.assertEqual(bpl.add(np.arange(5)).as_slice,
slice(0, 10, 2))
self.assertEqual(list(bpl.add(np.arange(5, 0, -1))),
[5, 5, 5, 5, 5])
def test_blockplacement_add_int(self):
def assert_add_equals(val, inc, result):
self.assertEqual(list(BlockPlacement(val).add(inc)),
result)
assert_add_equals(slice(0, 0), 0, [])
assert_add_equals(slice(1, 4), 0, [1, 2, 3])
assert_add_equals(slice(3, 0, -1), 0, [3, 2, 1])
assert_add_equals(slice(2, None, -1), 0, [2, 1, 0])
assert_add_equals([1, 2, 4], 0, [1, 2, 4])
assert_add_equals(slice(0, 0), 10, [])
assert_add_equals(slice(1, 4), 10, [11, 12, 13])
assert_add_equals(slice(3, 0, -1), 10, [13, 12, 11])
assert_add_equals(slice(2, None, -1), 10, [12, 11, 10])
assert_add_equals([1, 2, 4], 10, [11, 12, 14])
assert_add_equals(slice(0, 0), -1, [])
assert_add_equals(slice(1, 4), -1, [0, 1, 2])
assert_add_equals(slice(3, 0, -1), -1, [2, 1, 0])
assert_add_equals([1, 2, 4], -1, [0, 1, 3])
self.assertRaises(ValueError,
lambda: BlockPlacement(slice(1, 4)).add(-10))
self.assertRaises(ValueError,
lambda: BlockPlacement([1, 2, 4]).add(-10))
self.assertRaises(ValueError,
lambda: BlockPlacement(slice(2, None, -1)).add(-1))
# def test_blockplacement_array_add(self):
# assert_add_equals(slice(0, 2), [0, 1, 1], [0, 2, 3])
# assert_add_equals(slice(2, None, -1), [1, 1, 0], [3, 2, 0])
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
| artistic-2.0 |
ishanic/scikit-learn | examples/ensemble/plot_adaboost_twoclass.py | 347 | 3268 | """
==================
Two-class AdaBoost
==================
This example fits an AdaBoosted decision stump on a non-linearly separable
classification dataset composed of two "Gaussian quantiles" clusters
(see :func:`sklearn.datasets.make_gaussian_quantiles`) and plots the decision
boundary and decision scores. The distributions of decision scores are shown
separately for samples of class A and B. The predicted class label for each
sample is determined by the sign of the decision score. Samples with decision
scores greater than zero are classified as B, and are otherwise classified
as A. The magnitude of a decision score determines the degree of likeness with
the predicted class label. Additionally, a new dataset could be constructed
containing a desired purity of class B, for example, by only selecting samples
with a decision score above some value.
"""
print(__doc__)
# Author: Noel Dawe <noel.dawe@gmail.com>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
# Construct dataset
X1, y1 = make_gaussian_quantiles(cov=2.,
n_samples=200, n_features=2,
n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
n_samples=300, n_features=2,
n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, - y2 + 1))
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
algorithm="SAMME",
n_estimators=200)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(10, 5))
# Plot the decision boundaries
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")
# Plot the training points
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1],
c=c, cmap=plt.cm.Paired,
label="Class %s" % n)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')
# Plot the two-class decision scores
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label='Class %s' % n,
alpha=.5)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
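As a small illustration of the sign convention described in the docstring, the mapping from decision score to class label amounts to the following (a hypothetical helper, not part of the scikit-learn example):

```python
def label_from_score(score, class_names="AB"):
    """Map a decision score to a class label by its sign:
    scores > 0 -> class B, otherwise class A."""
    return class_names[1] if score > 0 else class_names[0]

print([label_from_score(s) for s in (-1.2, 0.0, 0.4)])  # ['A', 'A', 'B']
```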
| bsd-3-clause |
mjudsp/Tsallis | examples/svm/plot_weighted_samples.py | 95 | 1943 | """
=====================
SVM: Weighted samples
=====================
Plot the decision function of a weighted dataset, where the size of each point
is proportional to its weight.
The sample weighting rescales the C parameter, which means that the classifier
puts more emphasis on getting these points right. The effect might often be
subtle.
To emphasize the effect here, we particularly weight outliers, making the
deformation of the decision boundary very visible.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
def plot_decision_function(classifier, sample_weight, axis, title):
# plot the decision function
xx, yy = np.meshgrid(np.linspace(-4, 5, 500), np.linspace(-4, 5, 500))
Z = classifier.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# plot the line, the points, and the nearest vectors to the plane
axis.contourf(xx, yy, Z, alpha=0.75, cmap=plt.cm.bone)
axis.scatter(X[:, 0], X[:, 1], c=y, s=100 * sample_weight, alpha=0.9,
cmap=plt.cm.bone)
axis.axis('off')
axis.set_title(title)
# we create 20 points
np.random.seed(0)
X = np.r_[np.random.randn(10, 2) + [1, 1], np.random.randn(10, 2)]
y = [1] * 10 + [-1] * 10
sample_weight_last_ten = abs(np.random.randn(len(X)))
sample_weight_constant = np.ones(len(X))
# and bigger weights to some outliers
sample_weight_last_ten[15:] *= 5
sample_weight_last_ten[9] *= 15
# fit the model, both with and without sample weights
clf_weights = svm.SVC()
clf_weights.fit(X, y, sample_weight=sample_weight_last_ten)
clf_no_weights = svm.SVC()
clf_no_weights.fit(X, y)
fig, axes = plt.subplots(1, 2, figsize=(14, 6))
plot_decision_function(clf_no_weights, sample_weight_constant, axes[0],
"Constant weights")
plot_decision_function(clf_weights, sample_weight_last_ten, axes[1],
"Modified weights")
plt.show()
| bsd-3-clause |
dsm054/pandas | pandas/util/_test_decorators.py | 2 | 6935 | """
This module provides decorator functions which can be applied to test objects
in order to skip those objects when certain conditions occur. A sample use case
is to detect if the platform is missing ``matplotlib``. If so, any test objects
which require ``matplotlib`` and decorated with ``@td.skip_if_no_mpl`` will be
skipped by ``pytest`` during the execution of the test suite.
To illustrate, after importing this module:
import pandas.util._test_decorators as td
The decorators can be applied to classes:
@td.skip_if_some_reason
class Foo():
...
Or individual functions:
@td.skip_if_some_reason
def test_foo():
...
For more information, refer to the ``pytest`` documentation on ``skipif``.
"""
from distutils.version import LooseVersion
import locale
import pytest
from pandas.compat import (
PY3, import_lzma, is_platform_32bit, is_platform_windows)
from pandas.compat.numpy import _np_version_under1p15
from pandas.core.computation.expressions import (
_NUMEXPR_INSTALLED, _USE_NUMEXPR)
def safe_import(mod_name, min_version=None):
"""
Parameters
----------
mod_name : str
Name of the module to be imported
min_version : str, default None
Minimum required version of the specified mod_name
Returns
-------
object
The imported module if successful, or False
"""
try:
mod = __import__(mod_name)
except ImportError:
return False
if not min_version:
return mod
else:
import sys
try:
version = getattr(sys.modules[mod_name], '__version__')
except AttributeError:
# xlrd uses a capitalized attribute name
version = getattr(sys.modules[mod_name], '__VERSION__')
if version:
from distutils.version import LooseVersion
if LooseVersion(version) >= LooseVersion(min_version):
return mod
return False
def _skip_if_no_mpl():
mod = safe_import("matplotlib")
if mod:
mod.use("Agg", warn=False)
else:
return True
def _skip_if_mpl_2_2():
mod = safe_import("matplotlib")
if mod:
v = mod.__version__
if LooseVersion(v) > LooseVersion('2.1.2'):
return True
else:
mod.use("Agg", warn=False)
def _skip_if_has_locale():
lang, _ = locale.getlocale()
if lang is not None:
return True
def _skip_if_not_us_locale():
lang, _ = locale.getlocale()
if lang != 'en_US':
return True
def _skip_if_no_scipy():
return not (safe_import('scipy.stats') and
safe_import('scipy.sparse') and
safe_import('scipy.interpolate') and
safe_import('scipy.signal'))
def _skip_if_no_lzma():
try:
import_lzma()
except ImportError:
return True
def skip_if_no(package, min_version=None):
"""
Generic function to help skip test functions when required packages are not
present on the testing system.
Intended for use as a decorator, this function will wrap the decorated
function with a pytest ``skipif`` mark. During a pytest test suite
execution, that mark will attempt to import the specified ``package`` and
optionally ensure it meets the ``min_version``. If the import and version
check are unsuccessful, then the decorated function will be skipped.
Parameters
----------
package: str
The name of the package required by the decorated function
min_version: str or None, default None
Optional minimum version of the package required by the decorated
function
Returns
-------
decorated_func: function
The decorated function wrapped within a pytest ``skipif`` mark
"""
def decorated_func(func):
msg = "Could not import '{}'".format(package)
if min_version:
msg += " satisfying a min_version of {}".format(min_version)
return pytest.mark.skipif(
not safe_import(package, min_version=min_version), reason=msg
)(func)
return decorated_func
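A stripped-down sketch of the import-or-False pattern that `safe_import` and `skip_if_no` rely on (version checking omitted; the module names in the demo are arbitrary):

```python
def safe_import_sketch(mod_name):
    """Return the imported module on success, or False on ImportError."""
    try:
        return __import__(mod_name)
    except ImportError:
        return False

print(safe_import_sketch("json") is not False)   # True: stdlib module imports
print(safe_import_sketch("no_such_module_xyz"))  # False: import failed
```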
skip_if_no_mpl = pytest.mark.skipif(_skip_if_no_mpl(),
reason="Missing matplotlib dependency")
skip_if_np_lt_115 = pytest.mark.skipif(_np_version_under1p15,
reason="NumPy 1.15 or greater required")
skip_if_mpl = pytest.mark.skipif(not _skip_if_no_mpl(),
reason="matplotlib is present")
xfail_if_mpl_2_2 = pytest.mark.xfail(_skip_if_mpl_2_2(),
reason="matplotlib 2.2")
skip_if_32bit = pytest.mark.skipif(is_platform_32bit(),
reason="skipping for 32 bit")
skip_if_windows = pytest.mark.skipif(is_platform_windows(),
reason="Running on Windows")
skip_if_windows_python_3 = pytest.mark.skipif(is_platform_windows() and PY3,
reason=("not used on python3/"
"win32"))
skip_if_has_locale = pytest.mark.skipif(_skip_if_has_locale(),
reason="Specific locale is set {lang}"
.format(lang=locale.getlocale()[0]))
skip_if_not_us_locale = pytest.mark.skipif(_skip_if_not_us_locale(),
reason="Specific locale is set "
"{lang}".format(
lang=locale.getlocale()[0]))
skip_if_no_scipy = pytest.mark.skipif(_skip_if_no_scipy(),
reason="Missing SciPy requirement")
skip_if_no_lzma = pytest.mark.skipif(_skip_if_no_lzma(),
reason="need backports.lzma to run")
skip_if_no_ne = pytest.mark.skipif(not _USE_NUMEXPR,
reason="numexpr enabled->{enabled}, "
"installed->{installed}".format(
enabled=_USE_NUMEXPR,
installed=_NUMEXPR_INSTALLED))
def parametrize_fixture_doc(*args):
"""
Intended for use as a decorator for parametrized fixture,
this function will wrap the decorated function with a pytest
``parametrize_fixture_doc`` mark. That mark will format
initial fixture docstring by replacing placeholders {0}, {1} etc
with parameters passed as arguments.
Parameters
----------
args: iterable
Positional arguments for docstring.
Returns
-------
documented_fixture: function
The decorated function wrapped within a pytest
``parametrize_fixture_doc`` mark
"""
def documented_fixture(fixture):
fixture.__doc__ = fixture.__doc__.format(*args)
return fixture
return documented_fixture
| bsd-3-clause |
joequant/zipline | zipline/data/ffc/loaders/us_equity_pricing.py | 16 | 21283 | # Copyright 2015 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import (
ABCMeta,
abstractmethod,
)
from contextlib import contextmanager
from errno import ENOENT
from os import remove
from os.path import exists
from bcolz import (
carray,
ctable,
)
from click import progressbar
from numpy import (
array,
array_equal,
float64,
floating,
full,
iinfo,
integer,
issubdtype,
uint32,
)
from pandas import (
DatetimeIndex,
read_csv,
Timestamp,
)
from six import (
iteritems,
string_types,
with_metaclass,
)
import sqlite3
from zipline.data.ffc.base import FFCLoader
from zipline.data.ffc.loaders._us_equity_pricing import (
_compute_row_slices,
_read_bcolz_data,
load_adjustments_from_sqlite,
)
from zipline.lib.adjusted_array import (
adjusted_array,
)
from zipline.errors import NoFurtherDataError
OHLC = frozenset(['open', 'high', 'low', 'close'])
US_EQUITY_PRICING_BCOLZ_COLUMNS = [
'open', 'high', 'low', 'close', 'volume', 'day', 'id'
]
DAILY_US_EQUITY_PRICING_DEFAULT_FILENAME = 'daily_us_equity_pricing.bcolz'
SQLITE_ADJUSTMENT_COLUMNS = frozenset(['effective_date', 'ratio', 'sid'])
SQLITE_ADJUSTMENT_COLUMN_DTYPES = {
'effective_date': integer,
'ratio': floating,
'sid': integer,
}
SQLITE_ADJUSTMENT_TABLENAMES = frozenset(['splits', 'dividends', 'mergers'])
UINT32_MAX = iinfo(uint32).max
@contextmanager
def passthrough(obj):
yield obj
class BcolzDailyBarWriter(with_metaclass(ABCMeta)):
"""
Class capable of writing daily OHLCV data to disk in a format that can be
read efficiently by BcolzDailyOHLCVReader.
See Also
--------
BcolzDailyBarReader : Consumer of the data written by this class.
"""
@abstractmethod
def gen_tables(self, assets):
"""
Return an iterator of pairs of (asset_id, bcolz.ctable).
"""
raise NotImplementedError()
@abstractmethod
def to_uint32(self, array, colname):
"""
Convert raw column values produced by gen_tables into uint32 values.
Parameters
----------
array : np.array
An array of raw values.
colname : str, {'open', 'high', 'low', 'close', 'volume', 'day'}
The name of the column being loaded.
For output being read by the default BcolzOHLCVReader, data should be
stored in the following manner:
- Pricing columns (Open, High, Low, Close) should be stored as 1000 *
as-traded dollar value.
- Volume should be the as-traded volume.
- Dates should be stored as seconds since midnight UTC, Jan 1, 1970.
"""
raise NotImplementedError()
def write(self, filename, calendar, assets, show_progress=False):
"""
Parameters
----------
filename : str
The location at which we should write our output.
calendar : pandas.DatetimeIndex
Calendar to use to compute asset calendar offsets.
assets : pandas.Int64Index
The assets for which to write data.
show_progress : bool
Whether or not to show a progress bar while writing.
Returns
-------
table : bcolz.ctable
The newly-written table.
"""
_iterator = self.gen_tables(assets)
if show_progress:
pbar = progressbar(
_iterator,
length=len(assets),
item_show_func=lambda i: i if i is None else str(i[0]),
label="Merging asset files:",
)
with pbar as pbar_iterator:
return self._write_internal(filename, calendar, pbar_iterator)
return self._write_internal(filename, calendar, _iterator)
def _write_internal(self, filename, calendar, iterator):
"""
Internal implementation of write.
`iterator` should be an iterator yielding pairs of (asset, ctable).
"""
total_rows = 0
first_row = {}
last_row = {}
calendar_offset = {}
# Maps column name -> output carray.
columns = {
k: carray(array([], dtype=uint32))
for k in US_EQUITY_PRICING_BCOLZ_COLUMNS
}
for asset_id, table in iterator:
nrows = len(table)
for column_name in columns:
if column_name == 'id':
# We know what the content of this column is, so don't
# bother reading it.
columns['id'].append(full((nrows,), asset_id))
continue
columns[column_name].append(
self.to_uint32(table[column_name][:], column_name)
)
# Bcolz doesn't support ints as keys in `attrs`, so convert
# assets to strings for use as attr keys.
asset_key = str(asset_id)
# Calculate the index into the array of the first and last row
# for this asset. This allows us to efficiently load single
# assets when querying the data back out of the table.
first_row[asset_key] = total_rows
last_row[asset_key] = total_rows + nrows - 1
total_rows += nrows
# Calculate the number of trading days between the first date
# in the stored data and the first date of **this** asset. This
# offset is used for output alignment by the reader.
# HACK: Index with a list so that we get back an array we can pass
# to self.to_uint32. We could try to extract this in the loop
# above, but that makes the logic a lot messier.
asset_first_day = self.to_uint32(table['day'][[0]], 'day')[0]
calendar_offset[asset_key] = calendar.get_loc(
Timestamp(asset_first_day, unit='s', tz='UTC'),
)
# This writes the table to disk.
full_table = ctable(
columns=[
columns[colname]
for colname in US_EQUITY_PRICING_BCOLZ_COLUMNS
],
names=US_EQUITY_PRICING_BCOLZ_COLUMNS,
rootdir=filename,
mode='w',
)
full_table.attrs['first_row'] = first_row
full_table.attrs['last_row'] = last_row
full_table.attrs['calendar_offset'] = calendar_offset
full_table.attrs['calendar'] = calendar.asi8.tolist()
return full_table
class DailyBarWriterFromCSVs(BcolzDailyBarWriter):
"""
BcolzDailyBarWriter constructed from a map from csvs to assets.
Parameters
----------
asset_map : dict
A map from asset_id -> path to csv with data for that asset.
CSVs should have the following columns:
day : datetime64
open : float64
high : float64
low : float64
close : float64
volume : int64
"""
_csv_dtypes = {
'open': float64,
'high': float64,
'low': float64,
'close': float64,
'volume': float64,
}
def __init__(self, asset_map):
self._asset_map = asset_map
def gen_tables(self, assets):
"""
Read CSVs as DataFrames from our asset map.
"""
dtypes = self._csv_dtypes
for asset in assets:
path = self._asset_map.get(asset)
if path is None:
raise KeyError("No path supplied for asset %s" % asset)
data = read_csv(path, parse_dates=['day'], dtype=dtypes)
yield asset, ctable.fromdataframe(data)
def to_uint32(self, array, colname):
arrmax = array.max()
if colname in OHLC:
self.check_uint_safe(arrmax * 1000, colname)
return (array * 1000).astype(uint32)
elif colname == 'volume':
self.check_uint_safe(arrmax, colname)
return array.astype(uint32)
elif colname == 'day':
nanos_per_second = (1000 * 1000 * 1000)
self.check_uint_safe(arrmax.view(int) / nanos_per_second, colname)
return (array.view(int) / nanos_per_second).astype(uint32)
@staticmethod
def check_uint_safe(value, colname):
if value >= UINT32_MAX:
raise ValueError(
"Value %s from column '%s' is too large" % (value, colname)
)
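The 1000x fixed-point convention described in the docstrings above can be sketched scalar-wise. This is a simplified illustration: the real writer vectorizes with NumPy and truncates via ``astype``, whereas this sketch rounds.

```python
UINT32_MAX = 2 ** 32 - 1

def price_to_uint32(price):
    """Encode an as-traded dollar price as uint32 milli-dollars (1000 * value)."""
    scaled = int(round(price * 1000))
    if scaled >= UINT32_MAX:
        raise ValueError("Value %s is too large for uint32 storage" % scaled)
    return scaled

def uint32_to_price(stored):
    """Decode a stored milli-dollar value back to dollars."""
    return stored / 1000.0

print(price_to_uint32(10.25))  # 10250
print(uint32_to_price(10250))  # 10.25
```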
class BcolzDailyBarReader(object):
"""
Reader for raw pricing data written by BcolzDailyOHLCVWriter.
A Bcolz CTable is comprised of Columns and Attributes.
Columns
-------
The table with which this loader interacts contains the following columns:
['open', 'high', 'low', 'close', 'volume', 'day', 'id'].
The data in these columns is interpreted as follows:
- Price columns ('open', 'high', 'low', 'close') are interpreted as 1000 *
as-traded dollar value.
- Volume is interpreted as as-traded volume.
- Day is interpreted as seconds since midnight UTC, Jan 1, 1970.
- Id is the asset id of the row.
The data in each column is grouped by asset and then sorted by day within
each asset block.
The table is built to represent a long time range of data, e.g. ten years
of equity data, so the asset blocks are generally of unequal length. The
blocks are clipped to the known start and end date of each asset
to cut down on the number of empty values that would need to be included to
make a regular/cubic dataset.
When read across, the open, high, low, close, and volume values at the same
index represent the same asset and day.
Attributes
----------
The table with which this loader interacts contains the following
attributes:
first_row : dict
Map from asset_id -> index of first row in the dataset with that id.
last_row : dict
Map from asset_id -> index of last row in the dataset with that id.
calendar_offset : dict
Map from asset_id -> calendar index of first row.
calendar : list[int64]
Calendar used to compute offsets, in asi8 format (ns since EPOCH).
We use first_row and last_row together to quickly find ranges of rows to
load when reading an asset's data into memory.
We use calendar_offset and calendar to orient loaded blocks within a
range of queried dates.
"""
def __init__(self, table):
if isinstance(table, string_types):
table = ctable(rootdir=table, mode='r')
self._table = table
self._calendar = DatetimeIndex(table.attrs['calendar'], tz='UTC')
self._first_rows = {
int(asset_id): start_index
for asset_id, start_index in iteritems(table.attrs['first_row'])
}
self._last_rows = {
int(asset_id): end_index
for asset_id, end_index in iteritems(table.attrs['last_row'])
}
self._calendar_offsets = {
int(id_): offset
for id_, offset in iteritems(table.attrs['calendar_offset'])
}
def _slice_locs(self, start_date, end_date):
try:
start = self._calendar.get_loc(start_date)
except KeyError:
if start_date < self._calendar[0]:
raise NoFurtherDataError(
msg=(
"FFC Query requesting data starting on {query_start}, "
"but first known date is {calendar_start}"
).format(
query_start=str(start_date),
calendar_start=str(self._calendar[0]),
)
)
else:
raise ValueError("Query start %s not in calendar" % start_date)
try:
stop = self._calendar.get_loc(end_date)
except KeyError:
if end_date > self._calendar[-1]:
raise NoFurtherDataError(
msg=(
"FFC Query requesting data up to {query_end}, "
"but last known date is {calendar_end}"
).format(
query_end=end_date,
calendar_end=self._calendar[-1],
)
)
else:
raise ValueError("Query end %s not in calendar" % end_date)
return start, stop
def _compute_slices(self, dates, assets):
"""
Compute the raw row indices to load for each asset on a query for the
given dates.
Parameters
----------
dates : pandas.DatetimeIndex
Dates of the query on which we want to compute row indices.
assets : pandas.Int64Index
Assets for which we want to compute row indices
Returns
-------
A 3-tuple of (first_rows, last_rows, offsets):
first_rows : np.array[intp]
Array with length == len(assets) containing the index of the first
row to load for each asset in `assets`.
last_rows : np.array[intp]
Array with length == len(assets) containing the index of the last
row to load for each asset in `assets`.
offset : np.array[intp]
Array with length == len(assets) containing the index in a buffer
of length len(dates) corresponding to the first row of each asset.
The value of offset[i] will be 0 if asset[i] existed at the start
of a query. Otherwise, offset[i] will be equal to the number of
entries in `dates` for which the asset did not yet exist.
"""
start, stop = self._slice_locs(dates[0], dates[-1])
# Sanity check that the requested date range matches our calendar.
# This could be removed in the future if it's materially affecting
# performance.
query_dates = self._calendar[start:stop + 1]
if not array_equal(query_dates.values, dates.values):
raise ValueError("Incompatible calendars!")
# The core implementation of the logic here is implemented in Cython
# for efficiency.
return _compute_row_slices(
self._first_rows,
self._last_rows,
self._calendar_offsets,
start,
stop,
assets,
)
def load_raw_arrays(self, columns, dates, assets):
first_rows, last_rows, offsets = self._compute_slices(dates, assets)
return _read_bcolz_data(
self._table,
(len(dates), len(assets)),
[column.name for column in columns],
first_rows,
last_rows,
offsets,
)
class SQLiteAdjustmentWriter(object):
"""
Writer for data to be read by SQLiteAdjustmentReader
Parameters
----------
conn_or_path : str or sqlite3.Connection
A handle to the target sqlite database.
overwrite : bool, optional, default=False
If True and conn_or_path is a string, remove any existing files at the
given path before connecting.
See Also
--------
SQLiteAdjustmentReader
"""
def __init__(self, conn_or_path, overwrite=False):
if isinstance(conn_or_path, sqlite3.Connection):
self.conn = conn_or_path
elif isinstance(conn_or_path, str):
if overwrite and exists(conn_or_path):
try:
remove(conn_or_path)
except OSError as e:
if e.errno != ENOENT:
raise
self.conn = sqlite3.connect(conn_or_path)
else:
raise TypeError("Unknown connection type %s" % type(conn_or_path))
def write_frame(self, tablename, frame):
if frozenset(frame.columns) != SQLITE_ADJUSTMENT_COLUMNS:
raise ValueError(
"Unexpected frame columns:\n"
"Expected Columns: %s\n"
"Received Columns: %s" % (
SQLITE_ADJUSTMENT_COLUMNS,
frame.columns.tolist(),
)
)
elif tablename not in SQLITE_ADJUSTMENT_TABLENAMES:
raise ValueError(
"Adjustment table %s not in %s" % (
tablename, SQLITE_ADJUSTMENT_TABLENAMES
)
)
expected_dtypes = SQLITE_ADJUSTMENT_COLUMN_DTYPES
actual_dtypes = frame.dtypes
for colname, expected in iteritems(expected_dtypes):
actual = actual_dtypes[colname]
if not issubdtype(actual, expected):
raise TypeError(
"Expected data of type {expected} for column '{colname}', "
"but got {actual}.".format(
expected=expected,
colname=colname,
actual=actual,
)
)
return frame.to_sql(tablename, self.conn)
def write(self, splits, mergers, dividends):
"""
Writes data to a SQLite file to be read by SQLiteAdjustmentReader.
Parameters
----------
splits : pandas.DataFrame
Dataframe containing split data.
mergers : pandas.DataFrame
DataFrame containing merger data.
dividends : pandas.DataFrame
DataFrame containing dividend data.
Notes
-----
DataFrame input (`splits`, `mergers`, and `dividends`) should all have
the following columns:
effective_date : int
The date, represented as seconds since Unix epoch, on which the
adjustment should be applied.
ratio : float
A value to apply to all data earlier than the effective date.
sid : int
The asset id associated with this adjustment.
The ratio column is interpreted as follows:
- For all adjustment types, multiply price fields ('open', 'high',
'low', and 'close') by the ratio.
- For **splits only**, **divide** volume by the adjustment ratio.
Dividend ratios should be calculated as
1.0 - (dividend_value / "close on day prior to dividend ex_date").
Returns
-------
None
See Also
--------
SQLiteAdjustmentReader : Consumer for the data written by this class
"""
self.write_frame('splits', splits)
self.write_frame('mergers', mergers)
self.write_frame('dividends', dividends)
self.conn.execute(
"CREATE INDEX splits_sids "
"ON splits(sid)"
)
self.conn.execute(
"CREATE INDEX splits_effective_date "
"ON splits(effective_date)"
)
self.conn.execute(
"CREATE INDEX mergers_sids "
"ON mergers(sid)"
)
self.conn.execute(
"CREATE INDEX mergers_effective_date "
"ON mergers(effective_date)"
)
self.conn.execute(
"CREATE INDEX dividends_sid "
"ON dividends(sid)"
)
self.conn.execute(
"CREATE INDEX dividends_effective_date "
"ON dividends(effective_date)"
)
def close(self):
self.conn.close()
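The ratio semantics documented in ``write`` above amount to simple arithmetic; this sketch shows the intended interpretation with illustrative numbers only:

```python
def split_price_ratio(new_shares, old_shares):
    """A 2-for-1 split has ratio 0.5: prior prices are multiplied by 0.5."""
    return old_shares / new_shares

def dividend_ratio(dividend_value, prior_close):
    """1.0 - (dividend_value / close on the day prior to the ex-date)."""
    return 1.0 - dividend_value / prior_close

ratio = split_price_ratio(2, 1)
print(ratio)         # 0.5
print(1000 / ratio)  # 2000.0 -- for splits only, volume is divided by the ratio
print(dividend_ratio(0.50, 10.0))  # 0.95
```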
class SQLiteAdjustmentReader(object):
"""
Loads adjustments based on corporate actions from a SQLite database.
Expects data written in the format output by `SQLiteAdjustmentWriter`.
Parameters
----------
conn : str or sqlite3.Connection
Connection from which to load data.
"""
def __init__(self, conn):
if isinstance(conn, str):
conn = sqlite3.connect(conn)
self.conn = conn
def load_adjustments(self, columns, dates, assets):
return load_adjustments_from_sqlite(
self.conn,
[column.name for column in columns],
dates,
assets,
)
class USEquityPricingLoader(FFCLoader):
"""
FFCLoader for US Equity Pricing
Delegates loading of baselines and adjustments.
"""
def __init__(self, raw_price_loader, adjustments_loader):
self.raw_price_loader = raw_price_loader
self.adjustments_loader = adjustments_loader
def load_adjusted_array(self, columns, mask):
dates, assets = mask.index, mask.columns
raw_arrays = self.raw_price_loader.load_raw_arrays(
columns,
dates,
assets,
)
adjustments = self.adjustments_loader.load_adjustments(
columns,
dates,
assets,
)
return [
adjusted_array(raw_array, mask.values, col_adjustments)
for raw_array, col_adjustments in zip(raw_arrays, adjustments)
]
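A hedged sketch of what applying one multiplicative adjustment to a raw price column looks like, based on the semantics documented in `SQLiteAdjustmentWriter` (assumed behavior for illustration; `apply_adjustment` is not the real `adjusted_array` machinery):

```python
# Illustrative only: scale every row strictly before the effective row
# by the adjustment ratio, per the split/merger semantics documented
# in SQLiteAdjustmentWriter. Not the actual adjusted_array code path.
def apply_adjustment(prices, effective_row, ratio):
    return [p * ratio if i < effective_row else p
            for i, p in enumerate(prices)]

# A 2-for-1 split (price ratio 0.5) effective at row 2 halves the
# first two closes and leaves later rows untouched.
```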
| apache-2.0 |
cdeil/rootpy | setup.py | 1 | 4900 | #!/usr/bin/env python
from distribute_setup import use_setuptools
use_setuptools()
from setuptools import setup, find_packages
from glob import glob
import os
from os.path import join
import sys
local_path = os.path.dirname(os.path.abspath(__file__))
# setup.py can be called from outside the rootpy directory
os.chdir(local_path)
sys.path.insert(0, local_path)
# check for custom args
# we should instead extend distutils...
filtered_args = []
release = False
build_extensions = True
for arg in sys.argv:
if arg == '--release':
# --release sets the version number before installing
release = True
elif arg == '--no-ext':
build_extensions = False
else:
filtered_args.append(arg)
sys.argv = filtered_args
ext_modules = []
if os.getenv('ROOTPY_NO_EXT') not in ('1', 'true') and build_extensions:
from distutils.core import Extension
import subprocess
import distutils.sysconfig
python_lib = os.path.dirname(
distutils.sysconfig.get_python_lib(
standard_lib=True))
if 'CPPFLAGS' in os.environ:
del os.environ['CPPFLAGS']
if 'LDFLAGS' in os.environ:
del os.environ['LDFLAGS']
try:
root_inc = subprocess.Popen(
['root-config', '--incdir'],
stdout=subprocess.PIPE).communicate()[0].strip()
root_ldflags = subprocess.Popen(
['root-config', '--libs', '--ldflags'],
stdout=subprocess.PIPE).communicate()[0].strip().split()
root_cflags = subprocess.Popen(
['root-config', '--cflags'],
stdout=subprocess.PIPE).communicate()[0].strip().split()
except OSError:
print('root-config not found. '
'Please activate your ROOT installation so that '
'the root_numpy extension can be compiled, '
'or set ROOTPY_NO_EXT=1 .')
sys.exit(1)
try:
import numpy as np
module = Extension(
'rootpy.root2array.root_numpy._librootnumpy',
sources=['rootpy/root2array/root_numpy/_librootnumpy.cxx'],
include_dirs=[np.get_include(),
root_inc,
'rootpy/root2array/root_numpy/'],
extra_compile_args=root_cflags,
extra_link_args=root_ldflags + ['-L%s' % python_lib])
ext_modules.append(module)
module = Extension(
'rootpy.root2array._libnumpyhist',
sources=['rootpy/root2array/src/_libnumpyhist.cxx'],
include_dirs=[np.get_include(),
root_inc,
'rootpy/root2array/src'],
extra_compile_args=root_cflags,
extra_link_args=root_ldflags + ['-L%s' % python_lib])
ext_modules.append(module)
except ImportError:
# could not import numpy, so don't build numpy ext_modules
pass
module = Extension(
'rootpy.interactive._pydispatcher_processed_event',
sources=['rootpy/interactive/src/_pydispatcher.cxx'],
include_dirs=[root_inc],
extra_compile_args=root_cflags,
extra_link_args=root_ldflags + ['-L%s' % python_lib])
ext_modules.append(module)
if release:
# write the version to rootpy/info.py
version = open('version.txt', 'r').read().strip()
import shutil
shutil.move('rootpy/info.py', 'info.tmp')
dev_info = ''.join(open('info.tmp', 'r').readlines())
open('rootpy/info.py', 'w').write(
dev_info.replace(
"version_info('dev')",
"version_info('%s')" % version))
execfile('rootpy/info.py')
print __doc__
setup(
name='rootpy',
version=__version__,
description="A pythonic layer on top of the "
"ROOT framework's PyROOT bindings.",
long_description=open('README.rst').read(),
author='Noel Dawe',
author_email='noel.dawe@cern.ch',
license='GPLv3',
url=__url__,
download_url=__download_url__,
packages=find_packages(),
install_requires=[
'python>=2.6',
'argparse>=1.2.1',
],
extras_require={
'hdf': ['tables>=2.3'],
'array': ['numpy>=1.6.1'],
'mpl': ['matplotlib>=1.0.1'],
'term': ['readline>=6.2.4',
'termcolor>=1.1.0'],
},
scripts=glob('scripts/*'),
package_data={'': ['etc/*']},
ext_modules=ext_modules,
classifiers=[
"Programming Language :: Python",
"Topic :: Utilities",
"Operating System :: POSIX :: Linux",
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License (GPL)"
])
if release:
# revert rootpy/info.py
shutil.move('info.tmp', 'rootpy/info.py')
| gpl-3.0 |
scikit-hep/uproot | uproot3/pandas.py | 1 | 1165 | #!/usr/bin/env python
# BSD 3-Clause License; see https://github.com/scikit-hep/uproot3/blob/master/LICENSE
"""Top-level functions for Pandas."""
from __future__ import absolute_import
import uproot3.tree
from uproot3.source.memmap import MemmapSource
from uproot3.source.xrootd import XRootDSource
from uproot3.source.http import HTTPSource
def iterate(path, treepath, branches=None, entrysteps=None, namedecode="utf-8", reportpath=False, reportfile=False, flatten=True, flatname=None, awkwardlib=None, cache=None, basketcache=None, keycache=None, executor=None, blocking=True, localsource=MemmapSource.defaults, xrootdsource=XRootDSource.defaults, httpsource=HTTPSource.defaults, **options):
import pandas
return uproot3.tree.iterate(path, treepath, branches=branches, entrysteps=entrysteps, outputtype=pandas.DataFrame, namedecode=namedecode, reportpath=reportpath, reportfile=reportfile, reportentries=False, flatten=flatten, flatname=flatname, awkwardlib=awkwardlib, cache=cache, basketcache=basketcache, keycache=keycache, executor=executor, blocking=blocking, localsource=localsource, xrootdsource=xrootdsource, httpsource=httpsource, **options)
| bsd-3-clause |
ryfeus/lambda-packs | Tensorflow_Pandas_Numpy/source3.6/pandas/core/series.py | 1 | 134982 | """
Data structure for 1-dimensional cross-sectional and time series data
"""
from __future__ import division
# pylint: disable=E1101,E1103
# pylint: disable=W0703,W0622,W0613,W0201
import types
import warnings
from textwrap import dedent
import numpy as np
import numpy.ma as ma
from pandas.core.accessor import CachedAccessor
from pandas.core.arrays import ExtensionArray
from pandas.core.dtypes.common import (
is_categorical_dtype,
is_bool,
is_integer, is_integer_dtype,
is_float_dtype,
is_extension_type,
is_extension_array_dtype,
is_datetime64tz_dtype,
is_timedelta64_dtype,
is_object_dtype,
is_list_like,
is_hashable,
is_iterator,
is_dict_like,
is_scalar,
_is_unorderable_exception,
_ensure_platform_int,
pandas_dtype)
from pandas.core.dtypes.generic import (
ABCSparseArray, ABCDataFrame, ABCIndexClass)
from pandas.core.dtypes.cast import (
maybe_upcast, infer_dtype_from_scalar,
maybe_convert_platform,
maybe_cast_to_datetime, maybe_castable,
construct_1d_arraylike_from_scalar,
construct_1d_ndarray_preserving_na,
construct_1d_object_array_from_listlike)
from pandas.core.dtypes.missing import (
isna,
notna,
remove_na_arraylike,
na_value_for_dtype)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
Float64Index, _ensure_index)
from pandas.core.indexing import check_bool_indexer, maybe_convert_indices
from pandas.core import generic, base
from pandas.core.internals import SingleBlockManager
from pandas.core.arrays.categorical import Categorical, CategoricalAccessor
from pandas.core.indexes.accessors import CombinedDatetimelikeProperties
from pandas.core.indexes.datetimes import DatetimeIndex
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.core.indexes.period import PeriodIndex
from pandas import compat
from pandas.io.formats.terminal import get_terminal_size
from pandas.compat import (
zip, u, OrderedDict, StringIO, range, get_range_parameters, PY36)
from pandas.compat.numpy import function as nv
import pandas.core.ops as ops
import pandas.core.algorithms as algorithms
import pandas.core.common as com
import pandas.core.nanops as nanops
import pandas.io.formats.format as fmt
from pandas.util._decorators import (
Appender, deprecate, deprecate_kwarg, Substitution)
from pandas.util._validators import validate_bool_kwarg
from pandas._libs import index as libindex, tslib as libts, lib, iNaT
from pandas.core.config import get_option
from pandas.core.strings import StringMethods
import pandas.plotting._core as gfx
__all__ = ['Series']
_shared_doc_kwargs = dict(
axes='index', klass='Series', axes_single_arg="{0 or 'index'}",
axis="""
axis : {0 or 'index'}
Parameter needed for compatibility with DataFrame.
""",
inplace="""inplace : boolean, default False
If True, performs operation inplace and returns None.""",
unique='np.ndarray', duplicated='Series',
optional_by='', optional_mapper='', optional_labels='', optional_axis='',
versionadded_to_excel='\n .. versionadded:: 0.20.0\n')
# see gh-16971
def remove_na(arr):
"""Remove null values from an array-like structure.
.. deprecated:: 0.21.0
Use s[s.notnull()] instead.
"""
warnings.warn("remove_na is deprecated and is a private "
"function. Do not use.", FutureWarning, stacklevel=2)
return remove_na_arraylike(arr)
def _coerce_method(converter):
""" install the scalar coercion methods """
def wrapper(self):
if len(self) == 1:
return converter(self.iloc[0])
raise TypeError("cannot convert the series to "
"{0}".format(str(converter)))
return wrapper
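The coercion contract installed by `_coerce_method` can be sketched in plain Python (a stand-in for illustration, not the pandas implementation):

```python
# Stand-in illustration of the wrapper above: coercion succeeds only
# for a length-1 container, mirroring float(pd.Series([x])); anything
# longer raises TypeError with a similar message.
def coerce_to_float(values):
    if len(values) == 1:
        return float(values[0])
    raise TypeError("cannot convert the series to float")
```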
# ----------------------------------------------------------------------
# Series class
class Series(base.IndexOpsMixin, generic.NDFrame):
"""
One-dimensional ndarray with axis labels (including time series).
Labels need not be unique but must be a hashable type. The object
supports both integer- and label-based indexing and provides a host of
methods for performing operations involving the index. Statistical
methods from ndarray have been overridden to automatically exclude
missing data (currently represented as NaN).
Operations between Series (+, -, /, *, **) align values based on their
associated index values; they need not be the same length. The result
index will be the sorted union of the two indexes.
Parameters
----------
data : array-like, dict, or scalar value
Contains data stored in Series
.. versionchanged:: 0.23.0
If data is a dict, argument order is maintained for Python 3.6
and later.
index : array-like or Index (1d)
Values must be hashable and have the same length as `data`.
Non-unique index values are allowed. Will default to
RangeIndex (0, 1, 2, ..., n) if not provided. If both a dict and index
sequence are used, the index will override the keys found in the
dict.
dtype : numpy.dtype or None
If None, dtype will be inferred
copy : boolean, default False
Copy input data
"""
_metadata = ['name']
_accessors = set(['dt', 'cat', 'str'])
_deprecations = generic.NDFrame._deprecations | frozenset(
['asobject', 'sortlevel', 'reshape', 'get_value', 'set_value',
'from_csv', 'valid'])
def __init__(self, data=None, index=None, dtype=None, name=None,
copy=False, fastpath=False):
# we are called internally, so short-circuit
if fastpath:
# data is an ndarray, index is defined
if not isinstance(data, SingleBlockManager):
data = SingleBlockManager(data, index, fastpath=True)
if copy:
data = data.copy()
if index is None:
index = data.index
else:
if index is not None:
index = _ensure_index(index)
if data is None:
data = {}
if dtype is not None:
dtype = self._validate_dtype(dtype)
if isinstance(data, MultiIndex):
raise NotImplementedError("initializing a Series from a "
"MultiIndex is not supported")
elif isinstance(data, Index):
if name is None:
name = data.name
if dtype is not None:
# astype copies
data = data.astype(dtype)
else:
# need to copy to avoid aliasing issues
data = data._values.copy()
copy = False
elif isinstance(data, np.ndarray):
pass
elif isinstance(data, Series):
if name is None:
name = data.name
if index is None:
index = data.index
else:
data = data.reindex(index, copy=copy)
data = data._data
elif isinstance(data, dict):
data, index = self._init_dict(data, index, dtype)
dtype = None
copy = False
elif isinstance(data, SingleBlockManager):
if index is None:
index = data.index
elif not data.index.equals(index) or copy:
# GH#19275 SingleBlockManager input should only be called
# internally
raise AssertionError('Cannot pass both SingleBlockManager '
'`data` argument and a different '
'`index` argument. `copy` must '
'be False.')
elif is_extension_array_dtype(data) and dtype is not None:
if not data.dtype.is_dtype(dtype):
raise ValueError("Cannot specify a dtype '{}' with an "
"extension array of a different "
"dtype ('{}').".format(dtype,
data.dtype))
elif (isinstance(data, types.GeneratorType) or
(compat.PY3 and isinstance(data, map))):
data = list(data)
elif isinstance(data, (set, frozenset)):
raise TypeError("{0!r} type is unordered"
"".format(data.__class__.__name__))
else:
# handle sparse passed here (and force conversion)
if isinstance(data, ABCSparseArray):
data = data.to_dense()
if index is None:
if not is_list_like(data):
data = [data]
index = com._default_index(len(data))
elif is_list_like(data):
# a scalar numpy array is list-like but doesn't
# have a proper length
try:
if len(index) != len(data):
raise ValueError(
'Length of passed values is {val}, '
'index implies {ind}'
.format(val=len(data), ind=len(index)))
except TypeError:
pass
# create/copy the manager
if isinstance(data, SingleBlockManager):
if dtype is not None:
data = data.astype(dtype=dtype, errors='ignore',
copy=copy)
elif copy:
data = data.copy()
else:
data = _sanitize_array(data, index, dtype, copy,
raise_cast_failure=True)
data = SingleBlockManager(data, index, fastpath=True)
generic.NDFrame.__init__(self, data, fastpath=True)
self.name = name
self._set_axis(0, index, fastpath=True)
def _init_dict(self, data, index=None, dtype=None):
"""
Derive the "_data" and "index" attributes of a new Series from a
dictionary input.
Parameters
----------
data : dict or dict-like
Data used to populate the new Series
index : Index or index-like, default None
index for the new Series: if None, use dict keys
dtype : dtype, default None
dtype for the new Series: if None, infer from data
Returns
-------
_data : BlockManager for the new Series
index : index for the new Series
"""
# Looking for NaN in dict doesn't work ({np.nan : 1}[float('nan')]
# raises KeyError), so we iterate the entire dict, and align
if data:
keys, values = zip(*compat.iteritems(data))
values = list(values)
elif index is not None:
# fastpath for Series(data=None). Just use broadcasting a scalar
# instead of reindexing.
values = na_value_for_dtype(dtype)
keys = index
else:
keys, values = [], []
# Input is now list-like, so rely on "standard" construction:
s = Series(values, index=keys, dtype=dtype)
# Now we just make sure the order is respected, if any
if data and index is not None:
s = s.reindex(index, copy=False)
elif not PY36 and not isinstance(data, OrderedDict) and data:
# Need the `and data` to avoid sorting Series(None, index=[...])
# since that isn't really dict-like
try:
s = s.sort_index()
except TypeError:
pass
return s._data, s.index
@classmethod
def from_array(cls, arr, index=None, name=None, dtype=None, copy=False,
fastpath=False):
"""Construct Series from array.
.. deprecated:: 0.23.0
Use pd.Series(..) constructor instead.
"""
warnings.warn("'from_array' is deprecated and will be removed in a "
"future version. Please use the pd.Series(..) "
"constructor instead.", FutureWarning, stacklevel=2)
if isinstance(arr, ABCSparseArray):
from pandas.core.sparse.series import SparseSeries
cls = SparseSeries
return cls(arr, index=index, name=name, dtype=dtype,
copy=copy, fastpath=fastpath)
@property
def _constructor(self):
return Series
@property
def _constructor_expanddim(self):
from pandas.core.frame import DataFrame
return DataFrame
# types
@property
def _can_hold_na(self):
return self._data._can_hold_na
_index = None
def _set_axis(self, axis, labels, fastpath=False):
""" override generic, we want to set the _typ here """
if not fastpath:
labels = _ensure_index(labels)
is_all_dates = labels.is_all_dates
if is_all_dates:
if not isinstance(labels,
(DatetimeIndex, PeriodIndex, TimedeltaIndex)):
try:
labels = DatetimeIndex(labels)
# need to set here because we changed the index
if fastpath:
self._data.set_axis(axis, labels)
except (libts.OutOfBoundsDatetime, ValueError):
# labels may exceeds datetime bounds,
# or not be a DatetimeIndex
pass
self._set_subtyp(is_all_dates)
object.__setattr__(self, '_index', labels)
if not fastpath:
self._data.set_axis(axis, labels)
def _set_subtyp(self, is_all_dates):
if is_all_dates:
object.__setattr__(self, '_subtyp', 'time_series')
else:
object.__setattr__(self, '_subtyp', 'series')
def _update_inplace(self, result, **kwargs):
# we want to call the generic version and not the IndexOpsMixin
return generic.NDFrame._update_inplace(self, result, **kwargs)
@property
def name(self):
return self._name
@name.setter
def name(self, value):
if value is not None and not is_hashable(value):
raise TypeError('Series.name must be a hashable type')
object.__setattr__(self, '_name', value)
# ndarray compatibility
@property
def dtype(self):
""" return the dtype object of the underlying data """
return self._data.dtype
@property
def dtypes(self):
""" return the dtype object of the underlying data """
return self._data.dtype
@property
def ftype(self):
""" return if the data is sparse|dense """
return self._data.ftype
@property
def ftypes(self):
""" return if the data is sparse|dense """
return self._data.ftype
@property
def values(self):
"""
Return Series as ndarray or ndarray-like
depending on the dtype
Returns
-------
arr : numpy.ndarray or ndarray-like
Examples
--------
>>> pd.Series([1, 2, 3]).values
array([1, 2, 3])
>>> pd.Series(list('aabc')).values
array(['a', 'a', 'b', 'c'], dtype=object)
>>> pd.Series(list('aabc')).astype('category').values
[a, a, b, c]
Categories (3, object): [a, b, c]
Timezone aware datetime data is converted to UTC:
>>> pd.Series(pd.date_range('20130101', periods=3,
... tz='US/Eastern')).values
array(['2013-01-01T05:00:00.000000000',
'2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
"""
return self._data.external_values()
@property
def _values(self):
""" return the internal repr of this data """
return self._data.internal_values()
def _formatting_values(self):
"""Return the values that can be formatted (used by SeriesFormatter
and DataFrameFormatter)
"""
return self._data.formatting_values()
def get_values(self):
""" same as values (but handles sparseness conversions); is a view """
return self._data.get_values()
@property
def asobject(self):
"""Return object Series which contains boxed values.
.. deprecated:: 0.23.0
Use ``astype(object)`` instead.
*this is an internal non-public method*
"""
warnings.warn("'asobject' is deprecated. Use 'astype(object)'"
" instead", FutureWarning, stacklevel=2)
return self.astype(object).values
# ops
def ravel(self, order='C'):
"""
Return the flattened underlying data as an ndarray
See also
--------
numpy.ndarray.ravel
"""
return self._values.ravel(order=order)
def compress(self, condition, *args, **kwargs):
"""
Return selected slices of an array along given axis as a Series
See also
--------
numpy.ndarray.compress
"""
nv.validate_compress(args, kwargs)
return self[condition]
def nonzero(self):
"""
Return the *integer* indices of the elements that are non-zero
This method is equivalent to calling `numpy.nonzero` on the
series data. For compatibility with NumPy, the return value is
the same (a tuple with an array of indices for each dimension),
but it will always be a one-item tuple because series only have
one dimension.
Examples
--------
>>> s = pd.Series([0, 3, 0, 4])
>>> s.nonzero()
(array([1, 3]),)
>>> s.iloc[s.nonzero()[0]]
1 3
3 4
dtype: int64
>>> s = pd.Series([0, 3, 0, 4], index=['a', 'b', 'c', 'd'])
# same return although index of s is different
>>> s.nonzero()
(array([1, 3]),)
>>> s.iloc[s.nonzero()[0]]
b 3
d 4
dtype: int64
See Also
--------
numpy.nonzero
"""
return self._values.nonzero()
def put(self, *args, **kwargs):
"""
Applies the `put` method to its `values` attribute
if it has one.
See also
--------
numpy.ndarray.put
"""
self._values.put(*args, **kwargs)
def __len__(self):
"""
return the length of the Series
"""
return len(self._data)
def view(self, dtype=None):
"""
Create a new view of the Series.
This function will return a new Series with a view of the same
underlying values in memory, optionally reinterpreted with a new data
type. The new data type must preserve the same size in bytes as to not
cause index misalignment.
Parameters
----------
dtype : data type
Data type object or one of their string representations.
Returns
-------
Series
A new Series object as a view of the same data in memory.
See Also
--------
numpy.ndarray.view : Equivalent numpy function to create a new view of
the same data in memory.
Notes
-----
Series are instantiated with ``dtype=float64`` by default. While
``numpy.ndarray.view()`` will return a view with the same data type as
the original array, ``Series.view()`` (without specified dtype)
will try using ``float64`` and may fail if the original data type size
in bytes is not the same.
Examples
--------
>>> s = pd.Series([-2, -1, 0, 1, 2], dtype='int8')
>>> s
0 -2
1 -1
2 0
3 1
4 2
dtype: int8
The 8 bit signed integer representation of `-1` is `0b11111111`, but
the same bytes represent 255 if read as an 8 bit unsigned integer:
>>> us = s.view('uint8')
>>> us
0 254
1 255
2 0
3 1
4 2
dtype: uint8
The views share the same underlying values:
>>> us[0] = 128
>>> s
0 -128
1 -1
2 0
3 1
4 2
dtype: int8
"""
return self._constructor(self._values.view(dtype),
index=self.index).__finalize__(self)
def __array__(self, result=None):
"""
the array interface, return my values
"""
return self.get_values()
def __array_wrap__(self, result, context=None):
"""
Gets called after a ufunc
"""
return self._constructor(result, index=self.index,
copy=False).__finalize__(self)
def __array_prepare__(self, result, context=None):
"""
Gets called prior to a ufunc
"""
# nice error message for non-ufunc types
if context is not None and not isinstance(self._values, np.ndarray):
obj = context[1][0]
raise TypeError("{obj} with dtype {dtype} cannot perform "
"the numpy op {op}".format(
obj=type(obj).__name__,
dtype=getattr(obj, 'dtype', None),
op=context[0].__name__))
return result
# complex
@property
def real(self):
return self.values.real
@real.setter
def real(self, v):
self.values.real = v
@property
def imag(self):
return self.values.imag
@imag.setter
def imag(self, v):
self.values.imag = v
# coercion
__float__ = _coerce_method(float)
__long__ = _coerce_method(int)
__int__ = _coerce_method(int)
def _unpickle_series_compat(self, state):
if isinstance(state, dict):
self._data = state['_data']
self.name = state['name']
self.index = self._data.index
elif isinstance(state, tuple):
# < 0.12 series pickle
nd_state, own_state = state
# recreate the ndarray
data = np.empty(nd_state[1], dtype=nd_state[2])
np.ndarray.__setstate__(data, nd_state)
# backwards compat
index, name = own_state[0], None
if len(own_state) > 1:
name = own_state[1]
# recreate
self._data = SingleBlockManager(data, index, fastpath=True)
self._index = index
self.name = name
else:
raise Exception("cannot unpickle legacy formats -> [%s]" % state)
# indexers
@property
def axes(self):
"""Return a list of the row axis labels"""
return [self.index]
def _ixs(self, i, axis=0):
"""
Return the i-th value or values in the Series by location
Parameters
----------
i : int, slice, or sequence of integers
Returns
-------
value : scalar (int) or Series (slice, sequence)
"""
try:
# dispatch to the values if we need
values = self._values
if isinstance(values, np.ndarray):
return libindex.get_value_at(values, i)
else:
return values[i]
except IndexError:
raise
except Exception:
if isinstance(i, slice):
indexer = self.index._convert_slice_indexer(i, kind='iloc')
return self._get_values(indexer)
else:
label = self.index[i]
if isinstance(label, Index):
return self.take(i, axis=axis, convert=True)
else:
return libindex.get_value_at(self, i)
@property
def _is_mixed_type(self):
return False
def _slice(self, slobj, axis=0, kind=None):
slobj = self.index._convert_slice_indexer(slobj,
kind=kind or 'getitem')
return self._get_values(slobj)
def __getitem__(self, key):
key = com._apply_if_callable(key, self)
try:
result = self.index.get_value(self, key)
if not is_scalar(result):
if is_list_like(result) and not isinstance(result, Series):
# we need to box if loc of the key isn't scalar here
# otherwise have inline ndarray/lists
try:
if not is_scalar(self.index.get_loc(key)):
result = self._constructor(
result, index=[key] * len(result),
dtype=self.dtype).__finalize__(self)
except KeyError:
pass
return result
except InvalidIndexError:
pass
except (KeyError, ValueError):
if isinstance(key, tuple) and isinstance(self.index, MultiIndex):
# kludge
pass
elif key is Ellipsis:
return self
elif com.is_bool_indexer(key):
pass
else:
# we can try to coerce the indexer (or this will raise)
new_key = self.index._convert_scalar_indexer(key,
kind='getitem')
if type(new_key) != type(key):
return self.__getitem__(new_key)
raise
except Exception:
raise
if is_iterator(key):
key = list(key)
if com.is_bool_indexer(key):
key = check_bool_indexer(self.index, key)
return self._get_with(key)
def _get_with(self, key):
# other: fancy integer or otherwise
if isinstance(key, slice):
indexer = self.index._convert_slice_indexer(key, kind='getitem')
return self._get_values(indexer)
elif isinstance(key, ABCDataFrame):
raise TypeError('Indexing a Series with DataFrame is not '
'supported, use the appropriate DataFrame column')
else:
if isinstance(key, tuple):
try:
return self._get_values_tuple(key)
except Exception:
if len(key) == 1:
key = key[0]
if isinstance(key, slice):
return self._get_values(key)
raise
# pragma: no cover
if not isinstance(key, (list, np.ndarray, Series, Index)):
key = list(key)
if isinstance(key, Index):
key_type = key.inferred_type
else:
key_type = lib.infer_dtype(key)
if key_type == 'integer':
if self.index.is_integer() or self.index.is_floating():
return self.loc[key]
else:
return self._get_values(key)
elif key_type == 'boolean':
return self._get_values(key)
else:
try:
# handle the dup indexing case (GH 4246)
if isinstance(key, (list, tuple)):
return self.loc[key]
return self.reindex(key)
except Exception:
# [slice(0, 5, None)] will break if you convert to ndarray,
# e.g. as requested by np.median
# hack
if isinstance(key[0], slice):
return self._get_values(key)
raise
def _get_values_tuple(self, key):
# mpl hackaround
if com._any_none(*key):
return self._get_values(key)
if not isinstance(self.index, MultiIndex):
raise ValueError('Can only tuple-index with a MultiIndex')
# If key is contained, would have returned by now
indexer, new_index = self.index.get_loc_level(key)
return self._constructor(self._values[indexer],
index=new_index).__finalize__(self)
def _get_values(self, indexer):
try:
return self._constructor(self._data.get_slice(indexer),
fastpath=True).__finalize__(self)
except Exception:
return self._values[indexer]
def __setitem__(self, key, value):
key = com._apply_if_callable(key, self)
def setitem(key, value):
try:
self._set_with_engine(key, value)
return
except com.SettingWithCopyError:
raise
except (KeyError, ValueError):
values = self._values
if (is_integer(key) and
not self.index.inferred_type == 'integer'):
values[key] = value
return
elif key is Ellipsis:
self[:] = value
return
elif com.is_bool_indexer(key):
pass
elif is_timedelta64_dtype(self.dtype):
# reassign a null value to iNaT
if isna(value):
value = iNaT
try:
self.index._engine.set_value(self._values, key,
value)
return
except TypeError:
pass
self.loc[key] = value
return
except TypeError as e:
if (isinstance(key, tuple) and
not isinstance(self.index, MultiIndex)):
raise ValueError("Can only tuple-index with a MultiIndex")
# python 3 type errors should be raised
if _is_unorderable_exception(e):
raise IndexError(key)
if com.is_bool_indexer(key):
key = check_bool_indexer(self.index, key)
try:
self._where(~key, value, inplace=True)
return
except InvalidIndexError:
pass
self._set_with(key, value)
# do the setitem
cacher_needs_updating = self._check_is_chained_assignment_possible()
setitem(key, value)
if cacher_needs_updating:
self._maybe_update_cacher()
def _set_with_engine(self, key, value):
values = self._values
try:
self.index._engine.set_value(values, key, value)
return
except KeyError:
values[self.index.get_loc(key)] = value
return
def _set_with(self, key, value):
# other: fancy integer or otherwise
if isinstance(key, slice):
indexer = self.index._convert_slice_indexer(key, kind='getitem')
return self._set_values(indexer, value)
else:
if isinstance(key, tuple):
try:
self._set_values(key, value)
except Exception:
pass
if not isinstance(key, (list, Series, np.ndarray)):
try:
key = list(key)
except Exception:
key = [key]
if isinstance(key, Index):
key_type = key.inferred_type
else:
key_type = lib.infer_dtype(key)
if key_type == 'integer':
if self.index.inferred_type == 'integer':
self._set_labels(key, value)
else:
return self._set_values(key, value)
elif key_type == 'boolean':
self._set_values(key.astype(np.bool_), value)
else:
self._set_labels(key, value)
def _set_labels(self, key, value):
if isinstance(key, Index):
key = key.values
else:
key = com._asarray_tuplesafe(key)
indexer = self.index.get_indexer(key)
mask = indexer == -1
if mask.any():
raise ValueError('%s not contained in the index' % str(key[mask]))
self._set_values(indexer, value)
def _set_values(self, key, value):
if isinstance(key, Series):
key = key._values
self._data = self._data.setitem(indexer=key, value=value)
self._maybe_update_cacher()
@deprecate_kwarg(old_arg_name='reps', new_arg_name='repeats')
def repeat(self, repeats, *args, **kwargs):
"""
Repeat elements of an Series. Refer to `numpy.ndarray.repeat`
for more information about the `repeats` argument.
See also
--------
numpy.ndarray.repeat
"""
nv.validate_repeat(args, kwargs)
new_index = self.index.repeat(repeats)
new_values = self._values.repeat(repeats)
return self._constructor(new_values,
index=new_index).__finalize__(self)
def get_value(self, label, takeable=False):
"""Quickly retrieve single value at passed index label
.. deprecated:: 0.21.0
Please use .at[] or .iat[] accessors.
Parameters
----------
label : object
takeable : interpret the index as indexers, default False
Returns
-------
value : scalar value
"""
warnings.warn("get_value is deprecated and will be removed "
"in a future release. Please use "
".at[] or .iat[] accessors instead", FutureWarning,
stacklevel=2)
return self._get_value(label, takeable=takeable)
def _get_value(self, label, takeable=False):
if takeable is True:
return com._maybe_box_datetimelike(self._values[label])
return self.index.get_value(self._values, label)
_get_value.__doc__ = get_value.__doc__
def set_value(self, label, value, takeable=False):
"""Quickly set single value at passed label. If label is not contained,
a new object is created with the label placed at the end of the result
index.
.. deprecated:: 0.21.0
Please use .at[] or .iat[] accessors.
Parameters
----------
label : object
Partial indexing with MultiIndex not allowed
value : object
Scalar value
takeable : interpret the index as indexers, default False
Returns
-------
series : Series
If label is contained, will be reference to calling Series,
otherwise a new object
"""
warnings.warn("set_value is deprecated and will be removed "
"in a future release. Please use "
".at[] or .iat[] accessors instead", FutureWarning,
stacklevel=2)
return self._set_value(label, value, takeable=takeable)
def _set_value(self, label, value, takeable=False):
try:
if takeable:
self._values[label] = value
else:
self.index._engine.set_value(self._values, label, value)
except KeyError:
# set using a non-recursive method
self.loc[label] = value
return self
_set_value.__doc__ = set_value.__doc__
def reset_index(self, level=None, drop=False, name=None, inplace=False):
"""
Generate a new DataFrame or Series with the index reset.
This is useful when the index needs to be treated as a column, or
when the index is meaningless and needs to be reset to the default
before another operation.
Parameters
----------
level : int, str, tuple, or list, optional
    For a Series with a MultiIndex, only remove the specified levels
    from the index. Removes all levels by default.
drop : bool, default False
Just reset the index, without inserting it as a column in
the new DataFrame.
name : object, optional
The name to use for the column containing the original Series
values. Uses ``self.name`` by default. This argument is ignored
when `drop` is True.
inplace : bool, default False
Modify the Series in place (do not create a new object).
Returns
-------
Series or DataFrame
When `drop` is False (the default), a DataFrame is returned.
The newly created columns will come first in the DataFrame,
followed by the original Series values.
When `drop` is True, a `Series` is returned.
In either case, if ``inplace=True``, no value is returned.
See Also
--------
DataFrame.reset_index: Analogous function for DataFrame.
Examples
--------
>>> s = pd.Series([1, 2, 3, 4], name='foo',
... index=pd.Index(['a', 'b', 'c', 'd'], name='idx'))
Generate a DataFrame with default index.
>>> s.reset_index()
idx foo
0 a 1
1 b 2
2 c 3
3 d 4
To specify the name of the new column use `name`.
>>> s.reset_index(name='values')
idx values
0 a 1
1 b 2
2 c 3
3 d 4
To generate a new Series with the default set `drop` to True.
>>> s.reset_index(drop=True)
0 1
1 2
2 3
3 4
Name: foo, dtype: int64
To update the Series in place, without generating a new one
set `inplace` to True. Note that it also requires ``drop=True``.
>>> s.reset_index(inplace=True, drop=True)
>>> s
0 1
1 2
2 3
3 4
Name: foo, dtype: int64
The `level` parameter is interesting for Series with a multi-level
index.
>>> arrays = [np.array(['bar', 'bar', 'baz', 'baz']),
... np.array(['one', 'two', 'one', 'two'])]
>>> s2 = pd.Series(
... range(4), name='foo',
... index=pd.MultiIndex.from_arrays(arrays,
... names=['a', 'b']))
To remove a specific level from the Index, use `level`.
>>> s2.reset_index(level='a')
a foo
b
one bar 0
two bar 1
one baz 2
two baz 3
If `level` is not set, all levels are removed from the Index.
>>> s2.reset_index()
a b foo
0 bar one 0
1 bar two 1
2 baz one 2
3 baz two 3
"""
inplace = validate_bool_kwarg(inplace, 'inplace')
if drop:
new_index = com._default_index(len(self))
if level is not None:
if not isinstance(level, (tuple, list)):
level = [level]
level = [self.index._get_level_number(lev) for lev in level]
if isinstance(self.index, MultiIndex):
if len(level) < self.index.nlevels:
new_index = self.index.droplevel(level)
if inplace:
self.index = new_index
# set name if it was passed, otherwise, keep the previous name
self.name = name or self.name
else:
return self._constructor(self._values.copy(),
index=new_index).__finalize__(self)
elif inplace:
raise TypeError('Cannot reset_index inplace on a Series '
'to create a DataFrame')
else:
df = self.to_frame(name)
return df.reset_index(level=level, drop=drop)
def __unicode__(self):
"""
Return a string representation for a particular Series
Invoked by unicode(series) in py2 only. Yields a Unicode string in
both py2/py3.
"""
buf = StringIO(u(""))
width, height = get_terminal_size()
max_rows = (height if get_option("display.max_rows") == 0 else
get_option("display.max_rows"))
show_dimensions = get_option("display.show_dimensions")
self.to_string(buf=buf, name=self.name, dtype=self.dtype,
max_rows=max_rows, length=show_dimensions)
result = buf.getvalue()
return result
def to_string(self, buf=None, na_rep='NaN', float_format=None, header=True,
index=True, length=False, dtype=False, name=False,
max_rows=None):
"""
Render a string representation of the Series
Parameters
----------
buf : StringIO-like, optional
buffer to write to
na_rep : string, optional
string representation of NAN to use, default 'NaN'
float_format : one-parameter function, optional
formatter function to apply to columns' elements if they are floats
default None
header: boolean, default True
Add the Series header (index name)
index : bool, optional
Add index (row) labels, default True
length : boolean, default False
Add the Series length
dtype : boolean, default False
Add the Series dtype
name : boolean, default False
Add the Series name if not None
max_rows : int, optional
Maximum number of rows to show before truncating. If None, show
all.
Returns
-------
formatted : string (if not buffer passed)
"""
formatter = fmt.SeriesFormatter(self, name=name, length=length,
header=header, index=index,
dtype=dtype, na_rep=na_rep,
float_format=float_format,
max_rows=max_rows)
result = formatter.to_string()
# catch contract violations
if not isinstance(result, compat.text_type):
raise AssertionError("result must be of type unicode, type"
" of result is {0!r}"
"".format(result.__class__.__name__))
if buf is None:
return result
else:
try:
buf.write(result)
except AttributeError:
with open(buf, 'w') as f:
f.write(result)
def iteritems(self):
"""
Lazily iterate over (index, value) tuples
"""
return zip(iter(self.index), iter(self))
items = iteritems
# ----------------------------------------------------------------------
# Misc public methods
def keys(self):
"""Alias for index"""
return self.index
def to_dict(self, into=dict):
"""
Convert Series to {label -> value} dict or dict-like object.
Parameters
----------
into : class, default dict
The collections.Mapping subclass to use as the return
object. Can be the actual class or an empty
instance of the mapping type you want. If you want a
collections.defaultdict, you must pass it initialized.
.. versionadded:: 0.21.0
Returns
-------
value_dict : collections.Mapping
Examples
--------
>>> s = pd.Series([1, 2, 3, 4])
>>> s.to_dict()
{0: 1, 1: 2, 2: 3, 3: 4}
>>> from collections import OrderedDict, defaultdict
>>> s.to_dict(OrderedDict)
OrderedDict([(0, 1), (1, 2), (2, 3), (3, 4)])
>>> dd = defaultdict(list)
>>> s.to_dict(dd)
defaultdict(<type 'list'>, {0: 1, 1: 2, 2: 3, 3: 4})
"""
# GH16122
into_c = com.standardize_mapping(into)
return into_c(compat.iteritems(self))
def to_frame(self, name=None):
"""
Convert Series to DataFrame
Parameters
----------
name : object, default None
The passed name should substitute for the series name (if it has
one).
Returns
-------
data_frame : DataFrame
"""
if name is None:
df = self._constructor_expanddim(self)
else:
df = self._constructor_expanddim({name: self})
return df
def to_sparse(self, kind='block', fill_value=None):
"""
Convert Series to SparseSeries
Parameters
----------
kind : {'block', 'integer'}
fill_value : float, defaults to NaN (missing)
Returns
-------
sp : SparseSeries
"""
from pandas.core.sparse.series import SparseSeries
return SparseSeries(self, kind=kind,
fill_value=fill_value).__finalize__(self)
def _set_name(self, name, inplace=False):
"""
Set the Series name.
Parameters
----------
name : str
inplace : bool
whether to modify `self` directly or return a copy
"""
inplace = validate_bool_kwarg(inplace, 'inplace')
ser = self if inplace else self.copy()
ser.name = name
return ser
# ----------------------------------------------------------------------
# Statistics, overridden ndarray methods
# TODO: integrate bottleneck
def count(self, level=None):
"""
Return number of non-NA/null observations in the Series
Parameters
----------
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a smaller Series
Returns
-------
nobs : int or Series (if level specified)
"""
if level is None:
return notna(com._values_from_object(self)).sum()
if isinstance(level, compat.string_types):
level = self.index._get_level_number(level)
lev = self.index.levels[level]
lab = np.array(self.index.labels[level], subok=False, copy=True)
mask = lab == -1
if mask.any():
lab[mask] = cnt = len(lev)
lev = lev.insert(cnt, lev._na_value)
obs = lab[notna(self.values)]
out = np.bincount(obs, minlength=len(lev) or None)
return self._constructor(out, index=lev,
dtype='int64').__finalize__(self)
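The simple (``level=None``) branch of ``count`` above is just "sum of not-NA". A pure-Python sketch of that semantics — ``count_non_na`` is a hypothetical helper, not pandas API, using ``math.isnan`` as a stand-in for ``pandas.isna``:

```python
import math

def count_non_na(values):
    # Count observations that are neither None nor NaN,
    # mirroring Series.count(level=None).
    return sum(
        1 for v in values
        if v is not None and not (isinstance(v, float) and math.isnan(v))
    )

print(count_non_na([1.0, float('nan'), 3.0, None, 5.0]))  # -> 3
```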
def mode(self):
"""Return the mode(s) of the dataset.
Always returns Series even if only one value is returned.
Returns
-------
modes : Series (sorted)
"""
# TODO: Add option for bins like value_counts()
return algorithms.mode(self)
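``mode`` always returns a (sorted) Series, even when several values tie for most frequent. A pure-Python sketch of that contract, with ``series_mode`` as a hypothetical stand-in for ``algorithms.mode``:

```python
from collections import Counter

def series_mode(values):
    # Return every most-frequent value, sorted, so ties are kept
    # rather than arbitrarily broken.
    if not values:
        return []
    counts = Counter(values)
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)

print(series_mode([1, 2, 2, 3, 3]))  # -> [2, 3]
```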
def unique(self):
"""
Return unique values of Series object.
Uniques are returned in order of appearance. Hash table-based unique,
therefore does NOT sort.
Returns
-------
ndarray or Categorical
The unique values returned as a NumPy array. In case of categorical
data type, returned as a Categorical.
See Also
--------
pandas.unique : top-level unique method for any 1-d array-like object.
Index.unique : return Index with unique values from an Index object.
Examples
--------
>>> pd.Series([2, 1, 3, 3], name='A').unique()
array([2, 1, 3])
>>> pd.Series([pd.Timestamp('2016-01-01') for _ in range(3)]).unique()
array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
>>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern')
... for _ in range(3)]).unique()
array([Timestamp('2016-01-01 00:00:00-0500', tz='US/Eastern')],
dtype=object)
An unordered Categorical will return categories in the order of
appearance.
>>> pd.Series(pd.Categorical(list('baabc'))).unique()
[b, a, c]
Categories (3, object): [b, a, c]
An ordered Categorical preserves the category ordering.
>>> pd.Series(pd.Categorical(list('baabc'), categories=list('abc'),
... ordered=True)).unique()
[b, a, c]
Categories (3, object): [a < b < c]
"""
result = super(Series, self).unique()
if is_datetime64tz_dtype(self.dtype):
# we are special casing datetime64tz_dtype
# to return an object array of tz-aware Timestamps
# TODO: it must return DatetimeArray with tz in pandas 2.0
result = result.astype(object).values
return result
def drop_duplicates(self, keep='first', inplace=False):
"""
Return Series with duplicate values removed.
Parameters
----------
keep : {'first', 'last', ``False``}, default 'first'
- 'first' : Drop duplicates except for the first occurrence.
- 'last' : Drop duplicates except for the last occurrence.
- ``False`` : Drop all duplicates.
inplace : boolean, default ``False``
If ``True``, performs operation inplace and returns None.
Returns
-------
deduplicated : Series
See Also
--------
Index.drop_duplicates : equivalent method on Index
DataFrame.drop_duplicates : equivalent method on DataFrame
Series.duplicated : related method on Series, indicating duplicate
Series values.
Examples
--------
Generate a Series with duplicated entries.
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
With the 'keep' parameter, the selection behaviour of duplicated values
can be changed. The value 'first' keeps the first occurrence for each
set of duplicated entries. The default value of keep is 'first'.
>>> s.drop_duplicates()
0 lama
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
The value 'last' for parameter 'keep' keeps the last occurrence for
each set of duplicated entries.
>>> s.drop_duplicates(keep='last')
1 cow
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
The value ``False`` for parameter 'keep' discards all sets of
duplicated entries. Setting the value of 'inplace' to ``True`` performs
the operation inplace and returns ``None``.
>>> s.drop_duplicates(keep=False, inplace=True)
>>> s
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
"""
return super(Series, self).drop_duplicates(keep=keep, inplace=inplace)
def duplicated(self, keep='first'):
"""
Indicate duplicate Series values.
Duplicated values are indicated as ``True`` values in the resulting
Series. Either all duplicates, all except the first or all except the
last occurrence of duplicates can be indicated.
Parameters
----------
keep : {'first', 'last', False}, default 'first'
- 'first' : Mark duplicates as ``True`` except for the first
occurrence.
- 'last' : Mark duplicates as ``True`` except for the last
occurrence.
- ``False`` : Mark all duplicates as ``True``.
Examples
--------
By default, for each set of duplicated values, the first occurrence is
set to ``False`` and all others to ``True``:
>>> animals = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama'])
>>> animals.duplicated()
0 False
1 False
2 True
3 False
4 True
dtype: bool
which is equivalent to
>>> animals.duplicated(keep='first')
0 False
1 False
2 True
3 False
4 True
dtype: bool
By using 'last', the last occurrence of each set of duplicated values
is set to ``False`` and all others to ``True``:
>>> animals.duplicated(keep='last')
0 True
1 False
2 True
3 False
4 False
dtype: bool
By setting ``keep`` to ``False``, all duplicates are marked ``True``:
>>> animals.duplicated(keep=False)
0 True
1 False
2 True
3 False
4 True
dtype: bool
Returns
-------
pandas.core.series.Series
See Also
--------
pandas.Index.duplicated : Equivalent method on pandas.Index
pandas.DataFrame.duplicated : Equivalent method on pandas.DataFrame
pandas.Series.drop_duplicates : Remove duplicate values from Series
"""
return super(Series, self).duplicated(keep=keep)
def idxmin(self, axis=None, skipna=True, *args, **kwargs):
"""
Return the row label of the minimum value.
If multiple values equal the minimum, the first row label with that
value is returned.
Parameters
----------
skipna : boolean, default True
Exclude NA/null values. If the entire Series is NA, the result
will be NA.
axis : int, default 0
For compatibility with DataFrame.idxmin. Redundant for application
on Series.
*args, **kwargs
Additional keywords have no effect but might be accepted
for compatibility with NumPy.
Returns
-------
idxmin : Index
    Label of the minimum value.
Raises
------
ValueError
If the Series is empty.
Notes
-----
This method is the Series version of ``ndarray.argmin``. This method
returns the label of the minimum, while ``ndarray.argmin`` returns
the position. To get the position, use ``series.values.argmin()``.
See Also
--------
numpy.argmin : Return indices of the minimum values
along the given axis.
DataFrame.idxmin : Return index of first occurrence of minimum
over requested axis.
Series.idxmax : Return index *label* of the first occurrence
of maximum of values.
Examples
--------
>>> s = pd.Series(data=[1, None, 4, 1],
...               index=['A', 'B', 'C', 'D'])
>>> s
A 1.0
B NaN
C 4.0
D 1.0
dtype: float64
>>> s.idxmin()
'A'
If `skipna` is False and there is an NA value in the data,
the function returns ``nan``.
>>> s.idxmin(skipna=False)
nan
"""
skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)
i = nanops.nanargmin(com._values_from_object(self), skipna=skipna)
if i == -1:
return np.nan
return self.index[i]
def idxmax(self, axis=0, skipna=True, *args, **kwargs):
"""
Return the row label of the maximum value.
If multiple values equal the maximum, the first row label with that
value is returned.
Parameters
----------
skipna : boolean, default True
Exclude NA/null values. If the entire Series is NA, the result
will be NA.
axis : int, default 0
For compatibility with DataFrame.idxmax. Redundant for application
on Series.
*args, **kwargs
Additional keywords have no effect but might be accepted
for compatibility with NumPy.
Returns
-------
idxmax : Index
    Label of the maximum value.
Raises
------
ValueError
If the Series is empty.
Notes
-----
This method is the Series version of ``ndarray.argmax``. This method
returns the label of the maximum, while ``ndarray.argmax`` returns
the position. To get the position, use ``series.values.argmax()``.
See Also
--------
numpy.argmax : Return indices of the maximum values
along the given axis.
DataFrame.idxmax : Return index of first occurrence of maximum
over requested axis.
Series.idxmin : Return index *label* of the first occurrence
of minimum of values.
Examples
--------
>>> s = pd.Series(data=[1, None, 4, 3, 4],
... index=['A', 'B', 'C', 'D', 'E'])
>>> s
A 1.0
B NaN
C 4.0
D 3.0
E 4.0
dtype: float64
>>> s.idxmax()
'C'
If `skipna` is False and there is an NA value in the data,
the function returns ``nan``.
>>> s.idxmax(skipna=False)
nan
"""
skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
i = nanops.nanargmax(com._values_from_object(self), skipna=skipna)
if i == -1:
return np.nan
return self.index[i]
# ndarray compat
argmin = deprecate(
'argmin', idxmin, '0.21.0',
msg=dedent("""\
'argmin' is deprecated, use 'idxmin' instead. The behavior of 'argmin'
will be corrected to return the positional minimum in the future.
Use 'series.values.argmin' to get the position of the minimum now.""")
)
argmax = deprecate(
'argmax', idxmax, '0.21.0',
msg=dedent("""\
'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.""")
)
def round(self, decimals=0, *args, **kwargs):
"""
Round each value in a Series to the given number of decimals.
Parameters
----------
decimals : int
Number of decimal places to round to (default: 0).
If decimals is negative, it specifies the number of
positions to the left of the decimal point.
Returns
-------
Series object
See Also
--------
numpy.around
DataFrame.round
"""
nv.validate_round(args, kwargs)
result = com._values_from_object(self).round(decimals)
result = self._constructor(result, index=self.index).__finalize__(self)
return result
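The negative-``decimals`` behavior documented above is the part people trip over: it rounds to positions left of the decimal point. Python's built-in ``round`` shares that semantics for these cases, so a quick standalone illustration:

```python
# decimals > 0 rounds to the right of the decimal point;
# decimals < 0 rounds to positions left of it.
values = [3.14159, 2.71828]
print([round(v, 2) for v in values])   # -> [3.14, 2.72]
print(round(1234, -2))                 # -> 1200
```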
def quantile(self, q=0.5, interpolation='linear'):
"""
Return value at the given quantile, a la numpy.percentile.
Parameters
----------
q : float or array-like, default 0.5 (50% quantile)
0 <= q <= 1, the quantile(s) to compute
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
.. versionadded:: 0.18.0
This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points `i` and `j`:
* linear: `i + (j - i) * fraction`, where `fraction` is the
fractional part of the index surrounded by `i` and `j`.
* lower: `i`.
* higher: `j`.
* nearest: `i` or `j` whichever is nearest.
* midpoint: (`i` + `j`) / 2.
Returns
-------
quantile : float or Series
if ``q`` is an array, a Series will be returned where the
index is ``q`` and the values are the quantiles.
Examples
--------
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64
See Also
--------
pandas.core.window.Rolling.quantile
"""
self._check_percentile(q)
result = self._data.quantile(qs=q, interpolation=interpolation)
if is_list_like(q):
return self._constructor(result,
index=Float64Index(q),
name=self.name)
else:
# scalar
return result
def corr(self, other, method='pearson', min_periods=None):
"""
Compute correlation with `other` Series, excluding missing values
Parameters
----------
other : Series
method : {'pearson', 'kendall', 'spearman'}
* pearson : standard correlation coefficient
* kendall : Kendall Tau correlation coefficient
* spearman : Spearman rank correlation
min_periods : int, optional
Minimum number of observations needed to have a valid result
Returns
-------
correlation : float
"""
this, other = self.align(other, join='inner', copy=False)
if len(this) == 0:
return np.nan
return nanops.nancorr(this.values, other.values, method=method,
min_periods=min_periods)
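``corr`` first inner-aligns the two Series on their indexes and then hands the value arrays to ``nanops.nancorr``; for ``method='pearson'`` that reduces to the textbook formula. A pure-Python sketch on already-aligned values — ``pearson_corr`` is a hypothetical helper, not the pandas implementation:

```python
import math

def pearson_corr(a, b):
    # Pearson correlation of two equal-length, already-aligned sequences.
    n = len(a)
    if n == 0:
        return float('nan')
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

print(pearson_corr([1, 2, 3], [2, 4, 6]))  # perfectly linear -> ~1.0
```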
def cov(self, other, min_periods=None):
"""
Compute covariance with Series, excluding missing values
Parameters
----------
other : Series
min_periods : int, optional
Minimum number of observations needed to have a valid result
Returns
-------
covariance : float
Normalized by N-1 (unbiased estimator).
"""
this, other = self.align(other, join='inner', copy=False)
if len(this) == 0:
return np.nan
return nanops.nancov(this.values, other.values,
min_periods=min_periods)
def diff(self, periods=1):
"""
First discrete difference of element.
Calculates the difference of a Series element compared with another
element in the Series (default is element in previous row).
Parameters
----------
periods : int, default 1
Periods to shift for calculating difference, accepts negative
values.
Returns
-------
diffed : Series
See Also
--------
Series.pct_change: Percent change over given number of periods.
Series.shift: Shift index by desired number of periods with an
optional time freq.
DataFrame.diff: First discrete difference of object
Examples
--------
Difference with previous row
>>> s = pd.Series([1, 1, 2, 3, 5, 8])
>>> s.diff()
0 NaN
1 0.0
2 1.0
3 1.0
4 2.0
5 3.0
dtype: float64
Difference with 3rd previous row
>>> s.diff(periods=3)
0 NaN
1 NaN
2 NaN
3 2.0
4 4.0
5 6.0
dtype: float64
Difference with following row
>>> s.diff(periods=-1)
0 0.0
1 -1.0
2 -1.0
3 -2.0
4 -3.0
5 NaN
dtype: float64
"""
result = algorithms.diff(com._values_from_object(self), periods)
return self._constructor(result, index=self.index).__finalize__(self)
def autocorr(self, lag=1):
"""
Lag-N autocorrelation
Parameters
----------
lag : int, default 1
Number of lags to apply before performing autocorrelation.
Returns
-------
autocorr : float
"""
return self.corr(self.shift(lag))
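The ``self.corr(self.shift(lag))`` recipe above pairs each value with the value ``lag`` positions earlier and correlates the two. A pure-Python sketch of that pairing — ``autocorr`` here is a hypothetical standalone function, not the Series method:

```python
import math

def autocorr(values, lag=1):
    # Lag-N autocorrelation: Pearson correlation between the series
    # and itself shifted by `lag`.
    a, b = values[lag:], values[:len(values) - lag]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

print(autocorr([1, 2, 3, 4, 5]))  # linear trend -> ~1.0
```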
def dot(self, other):
"""
Matrix multiplication with DataFrame or inner-product with Series
objects. Can also be called using `self @ other` in Python >= 3.5.
Parameters
----------
other : Series or DataFrame
Returns
-------
dot_product : scalar or Series
"""
from pandas.core.frame import DataFrame
if isinstance(other, (Series, DataFrame)):
common = self.index.union(other.index)
if (len(common) > len(self.index) or
len(common) > len(other.index)):
raise ValueError('matrices are not aligned')
left = self.reindex(index=common, copy=False)
right = other.reindex(index=common, copy=False)
lvals = left.values
rvals = right.values
else:
left = self
lvals = self.values
rvals = np.asarray(other)
if lvals.shape[0] != rvals.shape[0]:
raise Exception('Dot product shape mismatch, %s vs %s' %
(lvals.shape, rvals.shape))
if isinstance(other, DataFrame):
return self._constructor(np.dot(lvals, rvals),
index=other.columns).__finalize__(self)
elif isinstance(other, Series):
return np.dot(lvals, rvals)
elif isinstance(rvals, np.ndarray):
return np.dot(lvals, rvals)
else: # pragma: no cover
raise TypeError('unsupported type: %s' % type(other))
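The alignment check in ``dot`` above rejects the operation when the union of the two indexes is larger than either index, i.e. when the label sets differ. A pure-Python sketch of the Series-with-Series case using dicts as label-to-value mappings (``aligned_dot`` is a hypothetical helper):

```python
def aligned_dot(s1, s2):
    # Inner product after aligning on the union of the keys; mismatched
    # key sets raise, mirroring the 'matrices are not aligned' check.
    common = set(s1) | set(s2)
    if len(common) > len(s1) or len(common) > len(s2):
        raise ValueError('matrices are not aligned')
    return sum(s1[k] * s2[k] for k in common)

print(aligned_dot({'a': 1, 'b': 2}, {'a': 3, 'b': 4}))  # -> 11
```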
def __matmul__(self, other):
""" Matrix multiplication using binary `@` operator in Python>=3.5 """
return self.dot(other)
def __rmatmul__(self, other):
""" Matrix multiplication using binary `@` operator in Python>=3.5 """
return self.dot(other)
@Substitution(klass='Series')
@Appender(base._shared_docs['searchsorted'])
@deprecate_kwarg(old_arg_name='v', new_arg_name='value')
def searchsorted(self, value, side='left', sorter=None):
if sorter is not None:
sorter = _ensure_platform_int(sorter)
return self._values.searchsorted(Series(value)._values,
side=side, sorter=sorter)
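``searchsorted`` delegates to the NumPy semantics: return the insertion point that keeps the values sorted, with ``side`` controlling which end of a run of equal values is chosen. The standard library's ``bisect`` exposes the same idea; ``search_sorted`` below is a hypothetical pure-Python equivalent:

```python
from bisect import bisect_left, bisect_right

def search_sorted(sorted_vals, value, side='left'):
    # Insertion point that keeps `sorted_vals` sorted; 'left' inserts
    # before equal values, 'right' after them.
    fn = bisect_left if side == 'left' else bisect_right
    return fn(sorted_vals, value)

print(search_sorted([1, 2, 3, 3, 5], 3))           # -> 2
print(search_sorted([1, 2, 3, 3, 5], 3, 'right'))  # -> 4
```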
# -------------------------------------------------------------------
# Combination
def append(self, to_append, ignore_index=False, verify_integrity=False):
"""
Concatenate two or more Series.
Parameters
----------
to_append : Series or list/tuple of Series
ignore_index : boolean, default False
If True, do not use the index labels.
.. versionadded:: 0.19.0
verify_integrity : boolean, default False
If True, raise ValueError on creating index with duplicates
Notes
-----
Iteratively appending to a Series can be more computationally intensive
than a single concatenate. A better solution is to append values to a
list and then concatenate the list with the original Series all at
once.
See also
--------
pandas.concat : General function to concatenate DataFrame, Series
or Panel objects
Returns
-------
appended : Series
Examples
--------
>>> s1 = pd.Series([1, 2, 3])
>>> s2 = pd.Series([4, 5, 6])
>>> s3 = pd.Series([4, 5, 6], index=[3,4,5])
>>> s1.append(s2)
0 1
1 2
2 3
0 4
1 5
2 6
dtype: int64
>>> s1.append(s3)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
With `ignore_index` set to True:
>>> s1.append(s2, ignore_index=True)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
With `verify_integrity` set to True:
>>> s1.append(s2, verify_integrity=True)
Traceback (most recent call last):
...
ValueError: Indexes have overlapping values: [0, 1, 2]
"""
from pandas.core.reshape.concat import concat
if isinstance(to_append, (list, tuple)):
to_concat = [self] + to_append
else:
to_concat = [self, to_append]
return concat(to_concat, ignore_index=ignore_index,
verify_integrity=verify_integrity)
def _binop(self, other, func, level=None, fill_value=None):
"""
Perform generic binary operation with optional fill value
Parameters
----------
other : Series
func : binary operator
fill_value : float or object
Value to substitute for NA/null values. If both Series are NA in a
location, the result will be NA regardless of the passed fill value
level : int or level name, default None
Broadcast across a level, matching Index values on the
passed MultiIndex level
Returns
-------
combined : Series
"""
if not isinstance(other, Series):
raise AssertionError('Other operand must be Series')
new_index = self.index
this = self
if not self.index.equals(other.index):
this, other = self.align(other, level=level, join='outer',
copy=False)
new_index = this.index
this_vals, other_vals = ops.fill_binop(this.values, other.values,
fill_value)
with np.errstate(all='ignore'):
result = func(this_vals, other_vals)
name = ops.get_op_result_name(self, other)
result = self._constructor(result, index=new_index, name=name)
result = result.__finalize__(self)
if name is None:
# When name is None, __finalize__ overwrites current name
result.name = None
return result
def combine(self, other, func, fill_value=np.nan):
"""
Perform elementwise binary operation on two Series using given function
with optional fill value when an index is missing from one Series or
the other
Parameters
----------
other : Series or scalar value
func : function
Function that takes two scalars as inputs and return a scalar
fill_value : scalar value
Returns
-------
result : Series
Examples
--------
>>> s1 = pd.Series([1, 2])
>>> s2 = pd.Series([0, 3])
>>> s1.combine(s2, lambda x1, x2: x1 if x1 < x2 else x2)
0 0
1 2
dtype: int64
See Also
--------
Series.combine_first : Combine Series values, choosing the calling
Series's values first
"""
if isinstance(other, Series):
new_index = self.index.union(other.index)
new_name = ops.get_op_result_name(self, other)
new_values = np.empty(len(new_index), dtype=self.dtype)
for i, idx in enumerate(new_index):
lv = self.get(idx, fill_value)
rv = other.get(idx, fill_value)
with np.errstate(all='ignore'):
new_values[i] = func(lv, rv)
else:
new_index = self.index
with np.errstate(all='ignore'):
new_values = func(self._values, other)
new_name = self.name
return self._constructor(new_values, index=new_index, name=new_name)
def combine_first(self, other):
"""
Combine Series values, choosing the calling Series's values
first. Result index will be the union of the two indexes
Parameters
----------
other : Series
Returns
-------
combined : Series
Examples
--------
>>> s1 = pd.Series([1, np.nan])
>>> s2 = pd.Series([3, 4])
>>> s1.combine_first(s2)
0 1.0
1 4.0
dtype: float64
See Also
--------
Series.combine : Perform elementwise operation on two Series
using a given function
"""
new_index = self.index.union(other.index)
this = self.reindex(new_index, copy=False)
other = other.reindex(new_index, copy=False)
# TODO: do we need name?
name = ops.get_op_result_name(self, other) # noqa
rs_vals = com._where_compat(isna(this), other._values, this._values)
return self._constructor(rs_vals, index=new_index).__finalize__(self)
def update(self, other):
"""
Modify Series in place using non-NA values from passed
Series. Aligns on index
Parameters
----------
other : Series
Examples
--------
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, 5, 6]))
>>> s
0 4
1 5
2 6
dtype: int64
>>> s = pd.Series(['a', 'b', 'c'])
>>> s.update(pd.Series(['d', 'e'], index=[0, 2]))
>>> s
0 d
1 b
2 e
dtype: object
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, 5, 6, 7, 8]))
>>> s
0 4
1 5
2 6
dtype: int64
If ``other`` contains NaNs the corresponding values are not updated
in the original Series.
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, np.nan, 6]))
>>> s
0 4
1 2
2 6
dtype: int64
"""
other = other.reindex_like(self)
mask = notna(other)
self._data = self._data.putmask(mask=mask, new=other, inplace=True)
self._maybe_update_cacher()
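``update`` aligns ``other`` to the caller's index, then overwrites only where ``other`` is non-NA — labels absent from the caller are dropped, and NaNs in ``other`` leave the original value alone. A pure-Python sketch with dicts as label-to-value mappings (``update_inplace`` is a hypothetical helper):

```python
import math

def update_inplace(target, other):
    # Overwrite target values at shared keys with non-NaN values
    # from `other`; extra keys in `other` are ignored.
    for k, v in other.items():
        if k in target and not (isinstance(v, float) and math.isnan(v)):
            target[k] = v

s = {0: 1, 1: 2, 2: 3}
update_inplace(s, {0: 4, 1: float('nan'), 2: 6, 9: 99})
print(s)  # -> {0: 4, 1: 2, 2: 6}
```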
# ----------------------------------------------------------------------
# Reindexing, sorting
def sort_values(self, axis=0, ascending=True, inplace=False,
kind='quicksort', na_position='last'):
"""
Sort by the values.
Sort a Series in ascending or descending order by some
criterion.
Parameters
----------
axis : {0 or 'index'}, default 0
Axis to direct sorting. The value 'index' is accepted for
compatibility with DataFrame.sort_values.
ascending : bool, default True
If True, sort values in ascending order, otherwise descending.
inplace : bool, default False
If True, perform operation in-place.
kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See also :func:`numpy.sort` for more
information. 'mergesort' is the only stable algorithm.
na_position : {'first', 'last'}, default 'last'
Argument 'first' puts NaNs at the beginning, 'last' puts NaNs at
the end.
Returns
-------
Series
Series ordered by values.
See Also
--------
Series.sort_index : Sort by the Series indices.
DataFrame.sort_values : Sort DataFrame by the values along either axis.
DataFrame.sort_index : Sort DataFrame by indices.
Examples
--------
>>> s = pd.Series([np.nan, 1, 3, 10, 5])
>>> s
0 NaN
1 1.0
2 3.0
3 10.0
4 5.0
dtype: float64
Sort values ascending order (default behaviour)
>>> s.sort_values(ascending=True)
1 1.0
2 3.0
4 5.0
3 10.0
0 NaN
dtype: float64
Sort values descending order
>>> s.sort_values(ascending=False)
3 10.0
4 5.0
2 3.0
1 1.0
0 NaN
dtype: float64
Sort values inplace
>>> s.sort_values(ascending=False, inplace=True)
>>> s
3 10.0
4 5.0
2 3.0
1 1.0
0 NaN
dtype: float64
Sort values putting NAs first
>>> s.sort_values(na_position='first')
0 NaN
1 1.0
2 3.0
4 5.0
3 10.0
dtype: float64
Sort a series of strings
>>> s = pd.Series(['z', 'b', 'd', 'a', 'c'])
>>> s
0 z
1 b
2 d
3 a
4 c
dtype: object
>>> s.sort_values()
3 a
1 b
4 c
2 d
0 z
dtype: object
"""
inplace = validate_bool_kwarg(inplace, 'inplace')
axis = self._get_axis_number(axis)
# GH 5856/5853
if inplace and self._is_cached:
raise ValueError("This Series is a view of some other array, to "
"sort in-place you must create a copy")
def _try_kind_sort(arr):
# easier to ask forgiveness than permission
try:
# if kind==mergesort, it can fail for object dtype
return arr.argsort(kind=kind)
except TypeError:
# stable sort not available for object dtype
# uses the argsort default quicksort
return arr.argsort(kind='quicksort')
arr = self._values
sortedIdx = np.empty(len(self), dtype=np.int32)
bad = isna(arr)
good = ~bad
idx = com._default_index(len(self))
argsorted = _try_kind_sort(arr[good])
if is_list_like(ascending):
if len(ascending) != 1:
raise ValueError('Length of ascending (%d) must be 1 '
'for Series' % (len(ascending)))
ascending = ascending[0]
if not is_bool(ascending):
raise ValueError('ascending must be boolean')
if not ascending:
argsorted = argsorted[::-1]
if na_position == 'last':
n = good.sum()
sortedIdx[:n] = idx[good][argsorted]
sortedIdx[n:] = idx[bad]
elif na_position == 'first':
n = bad.sum()
sortedIdx[n:] = idx[good][argsorted]
sortedIdx[:n] = idx[bad]
else:
raise ValueError('invalid na_position: {!r}'.format(na_position))
result = self._constructor(arr[sortedIdx], index=self.index[sortedIdx])
if inplace:
self._update_inplace(result)
else:
return result.__finalize__(self)
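The implementation above splits the values into a "good" (non-NA) mask and a "bad" (NA) mask, argsorts only the good values, and then places the NA block at whichever end ``na_position`` asks for. A pure-Python sketch of that segregation — ``sort_values`` here is a hypothetical standalone function, not the Series method:

```python
import math

def sort_values(values, ascending=True, na_position='last'):
    # Sort non-NaN values, then glue the NaN block onto the requested end,
    # mirroring the good/bad mask logic in Series.sort_values.
    def is_na(v):
        return isinstance(v, float) and math.isnan(v)
    good = sorted((v for v in values if not is_na(v)), reverse=not ascending)
    nans = [v for v in values if is_na(v)]
    return nans + good if na_position == 'first' else good + nans

print(sort_values([3.0, float('nan'), 1.0, 2.0]))  # -> [1.0, 2.0, 3.0, nan]
```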
def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
kind='quicksort', na_position='last', sort_remaining=True):
"""
Sort Series by index labels.
Returns a new Series sorted by label if `inplace` argument is
``False``, otherwise updates the original series and returns None.
Parameters
----------
axis : int, default 0
Axis to direct sorting. This can only be 0 for Series.
level : int, optional
If not None, sort on values in specified index level(s).
ascending : bool, default True
Sort ascending vs. descending.
inplace : bool, default False
If True, perform operation in-place.
kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See also :func:`numpy.sort` for more
information. 'mergesort' is the only stable algorithm. For
DataFrames, this option is only applied when sorting on a single
column or label.
na_position : {'first', 'last'}, default 'last'
If 'first' puts NaNs at the beginning, 'last' puts NaNs at the end.
Not implemented for MultiIndex.
sort_remaining : bool, default True
If True and sorting by level and index is multilevel, sort by other
levels too (in order) after sorting by specified level.
Returns
-------
pandas.Series
The original Series sorted by the labels
See Also
--------
DataFrame.sort_index: Sort DataFrame by the index
DataFrame.sort_values: Sort DataFrame by the value
Series.sort_values : Sort Series by the value
Examples
--------
>>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, 4])
>>> s.sort_index()
1 c
2 b
3 a
4 d
dtype: object
Sort Descending
>>> s.sort_index(ascending=False)
4 d
3 a
2 b
1 c
dtype: object
Sort Inplace
>>> s.sort_index(inplace=True)
>>> s
1 c
2 b
3 a
4 d
dtype: object
By default NaNs are put at the end, but use `na_position` to place
them at the beginning
>>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, np.nan])
>>> s.sort_index(na_position='first')
NaN d
1.0 c
2.0 b
3.0 a
dtype: object
Specify index level to sort
>>> arrays = [np.array(['qux', 'qux', 'foo', 'foo',
... 'baz', 'baz', 'bar', 'bar']),
... np.array(['two', 'one', 'two', 'one',
... 'two', 'one', 'two', 'one'])]
>>> s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8], index=arrays)
>>> s.sort_index(level=1)
bar one 8
baz one 6
foo one 4
qux one 2
bar two 7
baz two 5
foo two 3
qux two 1
dtype: int64
Does not sort by remaining levels when sorting by level
>>> s.sort_index(level=1, sort_remaining=False)
qux one 2
foo one 4
baz one 6
bar one 8
qux two 1
foo two 3
baz two 5
bar two 7
dtype: int64
"""
# TODO: this can be combined with DataFrame.sort_index impl as
# almost identical
inplace = validate_bool_kwarg(inplace, 'inplace')
axis = self._get_axis_number(axis)
index = self.index
if level is not None:
new_index, indexer = index.sortlevel(level, ascending=ascending,
sort_remaining=sort_remaining)
elif isinstance(index, MultiIndex):
from pandas.core.sorting import lexsort_indexer
labels = index._sort_levels_monotonic()
indexer = lexsort_indexer(labels._get_labels_for_sorting(),
orders=ascending,
na_position=na_position)
else:
from pandas.core.sorting import nargsort
# Check monotonic-ness before sort an index
# GH11080
if ((ascending and index.is_monotonic_increasing) or
(not ascending and index.is_monotonic_decreasing)):
if inplace:
return
else:
return self.copy()
indexer = nargsort(index, kind=kind, ascending=ascending,
na_position=na_position)
indexer = _ensure_platform_int(indexer)
new_index = index.take(indexer)
new_index = new_index._sort_levels_monotonic()
new_values = self._values.take(indexer)
result = self._constructor(new_values, index=new_index)
if inplace:
self._update_inplace(result)
else:
return result.__finalize__(self)
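The GH11080 fast path above skips the sort entirely when the index is already monotonic in the requested direction. A minimal sketch of that check (the name `already_sorted` is hypothetical):

```python
def already_sorted(index, ascending=True):
    # GH11080 fast-path sketch: if the index is already monotonic in the
    # requested direction, sort_index can skip the argsort and just
    # return a copy (or nothing, for inplace=True).
    pairs = list(zip(index, index[1:]))
    increasing = all(a <= b for a, b in pairs)
    decreasing = all(a >= b for a, b in pairs)
    return increasing if ascending else decreasing
```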
def argsort(self, axis=0, kind='quicksort', order=None):
"""
Overrides ndarray.argsort. Argsorts the values, omitting NA/null values,
and places the result in the same locations as the non-NA values
Parameters
----------
axis : int (can only be zero)
kind : {'mergesort', 'quicksort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See np.sort for more
information. 'mergesort' is the only stable algorithm
order : ignored
Returns
-------
argsorted : Series, with -1 indicating where NaN values are present
See also
--------
numpy.ndarray.argsort
"""
values = self._values
mask = isna(values)
if mask.any():
result = Series(-1, index=self.index, name=self.name,
dtype='int64')
notmask = ~mask
result[notmask] = np.argsort(values[notmask], kind=kind)
return self._constructor(result,
index=self.index).__finalize__(self)
else:
return self._constructor(
np.argsort(values, kind=kind), index=self.index,
dtype='int64').__finalize__(self)
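The masking in `argsort` above places `-1` at NA slots and the argsort of the compacted non-NA values at the remaining slots. A plain-Python sketch of that layout (the name `argsort_with_na` is hypothetical):

```python
import math

def argsort_with_na(values):
    # NA slots get -1; the remaining slots receive, in order, the
    # argsort of the compacted non-NA values, as in Series.argsort.
    mask = [isinstance(v, float) and math.isnan(v) for v in values]
    not_na = [v for v, m in zip(values, mask) if not m]
    ranks = sorted(range(len(not_na)), key=not_na.__getitem__)
    it = iter(ranks)
    return [-1 if m else next(it) for m in mask]
```

Note the ranks index into the *compacted* non-NA array, not the original positions, matching `result[notmask] = np.argsort(values[notmask], ...)`.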
def nlargest(self, n=5, keep='first'):
"""
Return the largest `n` elements.
Parameters
----------
n : int
Return this many descending sorted values
keep : {'first', 'last'}, default 'first'
Where there are duplicate values:
- ``first`` : take the first occurrence.
- ``last`` : take the last occurrence.
Returns
-------
top_n : Series
The n largest values in the Series, in sorted order
Notes
-----
Faster than ``.sort_values(ascending=False).head(n)`` for small `n`
relative to the size of the ``Series`` object.
See Also
--------
Series.nsmallest
Examples
--------
>>> import pandas as pd
>>> import numpy as np
>>> s = pd.Series(np.random.randn(10**6))
>>> s.nlargest(10) # only sorts up to the N requested
219921 4.644710
82124 4.608745
421689 4.564644
425277 4.447014
718691 4.414137
43154 4.403520
283187 4.313922
595519 4.273635
503969 4.250236
121637 4.240952
dtype: float64
"""
return algorithms.SelectNSeries(self, n=n, keep=keep).nlargest()
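The Notes section claims `nlargest` beats a full sort for small `n`; that comes from partial selection, which tracks only the top `n` candidates. A sketch using the standard-library heap as a stand-in for `SelectNSeries` (the name `nlargest_sketch` is hypothetical):

```python
import heapq

def nlargest_sketch(values, n=5):
    # Partial selection: keep only the n best candidates in a heap
    # instead of sorting the whole array, hence the speedup for
    # n << len(values).
    return heapq.nlargest(n, values)
```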
def nsmallest(self, n=5, keep='first'):
"""
Return the smallest `n` elements.
Parameters
----------
n : int
Return this many ascending sorted values
keep : {'first', 'last'}, default 'first'
Where there are duplicate values:
- ``first`` : take the first occurrence.
- ``last`` : take the last occurrence.
Returns
-------
bottom_n : Series
The n smallest values in the Series, in sorted order
Notes
-----
Faster than ``.sort_values().head(n)`` for small `n` relative to
the size of the ``Series`` object.
See Also
--------
Series.nlargest
Examples
--------
>>> import pandas as pd
>>> import numpy as np
>>> s = pd.Series(np.random.randn(10**6))
>>> s.nsmallest(10) # only sorts up to the N requested
288532 -4.954580
732345 -4.835960
64803 -4.812550
446457 -4.609998
501225 -4.483945
669476 -4.472935
973615 -4.401699
621279 -4.355126
773916 -4.347355
359919 -4.331927
dtype: float64
"""
return algorithms.SelectNSeries(self, n=n, keep=keep).nsmallest()
def sortlevel(self, level=0, ascending=True, sort_remaining=True):
"""Sort Series with MultiIndex by chosen level. Data will be
lexicographically sorted by the chosen level followed by the other
levels (in order).
.. deprecated:: 0.20.0
Use :meth:`Series.sort_index`
Parameters
----------
level : int or level name, default None
ascending : bool, default True
Returns
-------
sorted : Series
See Also
--------
Series.sort_index(level=...)
"""
warnings.warn("sortlevel is deprecated, use sort_index(level=...)",
FutureWarning, stacklevel=2)
return self.sort_index(level=level, ascending=ascending,
sort_remaining=sort_remaining)
def swaplevel(self, i=-2, j=-1, copy=True):
"""
Swap levels i and j in a MultiIndex
Parameters
----------
i, j : int, string (can be mixed)
Level of index to be swapped. Can pass level name as string.
Returns
-------
swapped : Series
.. versionchanged:: 0.18.1
The indexes ``i`` and ``j`` are now optional, and default to
the two innermost levels of the index.
"""
new_index = self.index.swaplevel(i, j)
return self._constructor(self._values, index=new_index,
copy=copy).__finalize__(self)
def reorder_levels(self, order):
"""
Rearrange index levels using input order. May not drop or duplicate
levels
Parameters
----------
order : list of int representing new level order.
(reference level by number or key)
Returns
-------
type of caller (new object)
"""
if not isinstance(self.index, MultiIndex): # pragma: no cover
raise Exception('Can only reorder levels on a hierarchical axis.')
result = self.copy()
result.index = result.index.reorder_levels(order)
return result
def unstack(self, level=-1, fill_value=None):
"""
Unstack, a.k.a. pivot, Series with MultiIndex to produce DataFrame.
The level involved will automatically get sorted.
Parameters
----------
level : int, string, or list of these, default last level
Level(s) to unstack, can pass level name
fill_value : replace NaN with this value if the unstack produces
missing values
.. versionadded:: 0.18.0
Examples
--------
>>> s = pd.Series([1, 2, 3, 4],
... index=pd.MultiIndex.from_product([['one', 'two'], ['a', 'b']]))
>>> s
one a 1
b 2
two a 3
b 4
dtype: int64
>>> s.unstack(level=-1)
a b
one 1 2
two 3 4
>>> s.unstack(level=0)
one two
a 1 3
b 2 4
Returns
-------
unstacked : DataFrame
"""
from pandas.core.reshape.reshape import unstack
return unstack(self, level, fill_value)
# ----------------------------------------------------------------------
# function application
def map(self, arg, na_action=None):
"""
Map values of Series using input correspondence (a dict, Series, or
function).
Parameters
----------
arg : function, dict, or Series
Mapping correspondence.
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
mapping correspondence.
Returns
-------
y : Series
Same index as caller.
Examples
--------
Map inputs to outputs (both of type `Series`):
>>> x = pd.Series([1,2,3], index=['one', 'two', 'three'])
>>> x
one 1
two 2
three 3
dtype: int64
>>> y = pd.Series(['foo', 'bar', 'baz'], index=[1,2,3])
>>> y
1 foo
2 bar
3 baz
>>> x.map(y)
one foo
two bar
three baz
If `arg` is a dictionary, return a new Series with values converted
according to the dictionary's mapping:
>>> z = {1: 'A', 2: 'B', 3: 'C'}
>>> x.map(z)
one A
two B
three C
Use na_action to control whether NA values are affected by the mapping
function.
>>> s = pd.Series([1, 2, 3, np.nan])
>>> s2 = s.map('this is a string {}'.format, na_action=None)
0 this is a string 1.0
1 this is a string 2.0
2 this is a string 3.0
3 this is a string nan
dtype: object
>>> s3 = s.map('this is a string {}'.format, na_action='ignore')
0 this is a string 1.0
1 this is a string 2.0
2 this is a string 3.0
3 NaN
dtype: object
See Also
--------
Series.apply : For applying more complex functions on a Series.
DataFrame.apply : Apply a function row-/column-wise.
DataFrame.applymap : Apply a function elementwise on a whole DataFrame.
Notes
-----
When `arg` is a dictionary, values in Series that are not in the
dictionary (as keys) are converted to ``NaN``. However, if the
dictionary is a ``dict`` subclass that defines ``__missing__`` (i.e.
provides a method for default values), then this default is used
rather than ``NaN``:
>>> from collections import Counter
>>> counter = Counter()
>>> counter['bar'] += 1
>>> y.map(counter)
1 0
2 1
3 0
dtype: int64
"""
new_values = super(Series, self)._map_values(
arg, na_action=na_action)
return self._constructor(new_values,
index=self.index).__finalize__(self)
def _gotitem(self, key, ndim, subset=None):
"""
sub-classes to define
return a sliced object
Parameters
----------
key : string / list of selections
ndim : 1,2
requested ndim of result
subset : object, default None
subset to act on
"""
return self
_agg_doc = dedent("""
Examples
--------
>>> s = Series(np.random.randn(10))
>>> s.agg('min')
-1.3018049988556679
>>> s.agg(['min', 'max'])
min -1.301805
max 1.127688
dtype: float64
See also
--------
pandas.Series.apply
pandas.Series.transform
""")
@Appender(_agg_doc)
@Appender(generic._shared_docs['aggregate'] % dict(
versionadded='.. versionadded:: 0.20.0',
**_shared_doc_kwargs))
def aggregate(self, func, axis=0, *args, **kwargs):
axis = self._get_axis_number(axis)
result, how = self._aggregate(func, *args, **kwargs)
if result is None:
# we can be called from an inner function which
# passes this meta-data
kwargs.pop('_axis', None)
kwargs.pop('_level', None)
# try a regular apply, this evaluates lambdas
# row-by-row; however if the lambda is expected a Series
# expression, e.g.: lambda x: x-x.quantile(0.25)
# this will fail, so we can try a vectorized evaluation
# we cannot FIRST try the vectorized evaluation, because
# then .agg and .apply would have different semantics if the
# operation is actually defined on the Series, e.g. str
try:
result = self.apply(func, *args, **kwargs)
except (ValueError, AttributeError, TypeError):
result = func(self, *args, **kwargs)
return result
agg = aggregate
def apply(self, func, convert_dtype=True, args=(), **kwds):
"""
Invoke function on values of Series. Can be ufunc (a NumPy function
that applies to the entire Series) or a Python function that only works
on single values
Parameters
----------
func : function
convert_dtype : boolean, default True
Try to find better dtype for elementwise function results. If
False, leave as dtype=object
args : tuple
Positional arguments to pass to function in addition to the value
Additional keyword arguments will be passed as keywords to the function
Returns
-------
y : Series or DataFrame if func returns a Series
See also
--------
Series.map: For element-wise operations
Series.agg: only perform aggregating type operations
Series.transform: only perform transforming type operations
Examples
--------
Create a series with typical summer temperatures for each city.
>>> import pandas as pd
>>> import numpy as np
>>> series = pd.Series([20, 21, 12], index=['London',
... 'New York','Helsinki'])
>>> series
London 20
New York 21
Helsinki 12
dtype: int64
Square the values by defining a function and passing it as an
argument to ``apply()``.
>>> def square(x):
... return x**2
>>> series.apply(square)
London 400
New York 441
Helsinki 144
dtype: int64
Square the values by passing an anonymous function as an
argument to ``apply()``.
>>> series.apply(lambda x: x**2)
London 400
New York 441
Helsinki 144
dtype: int64
Define a custom function that needs additional positional
arguments and pass these additional arguments using the
``args`` keyword.
>>> def subtract_custom_value(x, custom_value):
... return x-custom_value
>>> series.apply(subtract_custom_value, args=(5,))
London 15
New York 16
Helsinki 7
dtype: int64
Define a custom function that takes keyword arguments
and pass these arguments to ``apply``.
>>> def add_custom_values(x, **kwargs):
... for month in kwargs:
... x+=kwargs[month]
... return x
>>> series.apply(add_custom_values, june=30, july=20, august=25)
London 95
New York 96
Helsinki 87
dtype: int64
Use a function from the Numpy library.
>>> series.apply(np.log)
London 2.995732
New York 3.044522
Helsinki 2.484907
dtype: float64
"""
if len(self) == 0:
return self._constructor(dtype=self.dtype,
index=self.index).__finalize__(self)
# dispatch to agg
if isinstance(func, (list, dict)):
return self.aggregate(func, *args, **kwds)
# if we are a string, try to dispatch
if isinstance(func, compat.string_types):
return self._try_aggregate_string_function(func, *args, **kwds)
# handle ufuncs and lambdas
if kwds or args and not isinstance(func, np.ufunc):
f = lambda x: func(x, *args, **kwds)
else:
f = func
with np.errstate(all='ignore'):
if isinstance(f, np.ufunc):
return f(self)
# row-wise access
if is_extension_type(self.dtype):
mapped = self._values.map(f)
else:
values = self.astype(object).values
mapped = lib.map_infer(values, f, convert=convert_dtype)
if len(mapped) and isinstance(mapped[0], Series):
from pandas.core.frame import DataFrame
return DataFrame(mapped.tolist(), index=self.index)
else:
return self._constructor(mapped,
index=self.index).__finalize__(self)
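The body of `apply` above dispatches in a fixed order: list/dict funcs go to aggregation, strings are resolved by name, ufuncs run on the whole Series, and anything else is mapped elementwise. A minimal sketch of that ordering, with placeholder return labels standing in for the real dispatch targets (the name `apply_dispatch` is hypothetical):

```python
def apply_dispatch(func, values):
    # Dispatch order mirrors Series.apply: list/dict -> aggregate,
    # string -> named-function lookup, else elementwise application.
    if isinstance(func, (list, dict)):
        return 'aggregate'
    if isinstance(func, str):
        return 'string dispatch'
    return [func(x) for x in values]
```

The order matters: checking for list/dict and strings first keeps `s.apply(['min', 'max'])` and `s.apply('mean')` consistent with `s.agg(...)`.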
def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None,
filter_type=None, **kwds):
"""
perform a reduction operation
if we have an ndarray as a value, then simply perform the operation,
otherwise delegate to the object
"""
delegate = self._values
if isinstance(delegate, np.ndarray):
# Validate that 'axis' is consistent with Series's single axis.
if axis is not None:
self._get_axis_number(axis)
if numeric_only:
raise NotImplementedError('Series.{0} does not implement '
'numeric_only.'.format(name))
with np.errstate(all='ignore'):
return op(delegate, skipna=skipna, **kwds)
return delegate._reduce(op=op, name=name, axis=axis, skipna=skipna,
numeric_only=numeric_only,
filter_type=filter_type, **kwds)
def _reindex_indexer(self, new_index, indexer, copy):
if indexer is None:
if copy:
return self.copy()
return self
new_values = algorithms.take_1d(self._values, indexer,
allow_fill=True, fill_value=None)
return self._constructor(new_values, index=new_index)
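In `algorithms.take_1d` with `allow_fill=True`, a `-1` in the indexer marks a label that was not found in the source and maps to the fill value. A plain-Python sketch of that contract (the name `take_1d_sketch` is hypothetical):

```python
def take_1d_sketch(values, indexer, fill_value=None):
    # -1 marks a missing label and yields fill_value; any other entry
    # is a positional take from the source values.
    return [fill_value if i == -1 else values[i] for i in indexer]
```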
def _needs_reindex_multi(self, axes, method, level):
""" check if we do need a multi reindex; this is for compat with
higher dims
"""
return False
@Appender(generic._shared_docs['align'] % _shared_doc_kwargs)
def align(self, other, join='outer', axis=None, level=None, copy=True,
fill_value=None, method=None, limit=None, fill_axis=0,
broadcast_axis=None):
return super(Series, self).align(other, join=join, axis=axis,
level=level, copy=copy,
fill_value=fill_value, method=method,
limit=limit, fill_axis=fill_axis,
broadcast_axis=broadcast_axis)
def rename(self, index=None, **kwargs):
"""Alter Series index labels or name
Function / dict values must be unique (1-to-1). Labels not contained in
a dict / Series will be left as-is. Extra labels listed don't throw an
error.
Alternatively, change ``Series.name`` with a scalar value.
See the :ref:`user guide <basics.rename>` for more.
Parameters
----------
index : scalar, hashable sequence, dict-like or function, optional
dict-like or functions are transformations to apply to
the index.
Scalar or hashable sequence-like will alter the ``Series.name``
attribute.
copy : boolean, default True
Also copy underlying data
inplace : boolean, default False
Whether to return a new Series. If True then value of copy is
ignored.
level : int or level name, default None
In case of a MultiIndex, only rename labels in the specified
level.
Returns
-------
renamed : Series (new object)
See Also
--------
pandas.Series.rename_axis
Examples
--------
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
1 2
2 3
dtype: int64
>>> s.rename("my_name") # scalar, changes Series.name
0 1
1 2
2 3
Name: my_name, dtype: int64
>>> s.rename(lambda x: x ** 2) # function, changes labels
0 1
1 2
4 3
dtype: int64
>>> s.rename({1: 3, 2: 5}) # mapping, changes labels
0 1
3 2
5 3
dtype: int64
"""
kwargs['inplace'] = validate_bool_kwarg(kwargs.get('inplace', False),
'inplace')
non_mapping = is_scalar(index) or (is_list_like(index) and
not is_dict_like(index))
if non_mapping:
return self._set_name(index, inplace=kwargs.get('inplace'))
return super(Series, self).rename(index=index, **kwargs)
@Appender(generic._shared_docs['reindex'] % _shared_doc_kwargs)
def reindex(self, index=None, **kwargs):
return super(Series, self).reindex(index=index, **kwargs)
def drop(self, labels=None, axis=0, index=None, columns=None,
level=None, inplace=False, errors='raise'):
"""
Return Series with specified index labels removed.
Remove elements of a Series based on specifying the index labels.
When using a multi-index, labels on different levels can be removed
by specifying the level.
Parameters
----------
labels : single label or list-like
Index labels to drop.
axis : 0, default 0
Redundant for application on Series.
index, columns : None
Redundant for application on Series, but index can be used instead
of labels.
.. versionadded:: 0.21.0
level : int or level name, optional
For MultiIndex, level for which the labels will be removed.
inplace : bool, default False
If True, do operation inplace and return None.
errors : {'ignore', 'raise'}, default 'raise'
If 'ignore', suppress error and only existing labels are dropped.
Returns
-------
dropped : pandas.Series
See Also
--------
Series.reindex : Return only specified index labels of Series.
Series.dropna : Return series without null values.
Series.drop_duplicates : Return Series with duplicate values removed.
DataFrame.drop : Drop specified labels from rows or columns.
Raises
------
KeyError
If none of the labels are found in the index.
Examples
--------
>>> s = pd.Series(data=np.arange(3), index=['A','B','C'])
>>> s
A 0
B 1
C 2
dtype: int64
Drop labels B and C
>>> s.drop(labels=['B','C'])
A 0
dtype: int64
Drop 2nd level label in MultiIndex Series
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... labels=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
... [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> s = pd.Series([45, 200, 1.2, 30, 250, 1.5, 320, 1, 0.3],
... index=midx)
>>> s
lama speed 45.0
weight 200.0
length 1.2
cow speed 30.0
weight 250.0
length 1.5
falcon speed 320.0
weight 1.0
length 0.3
dtype: float64
>>> s.drop(labels='weight', level=1)
lama speed 45.0
length 1.2
cow speed 30.0
length 1.5
falcon speed 320.0
length 0.3
dtype: float64
"""
return super(Series, self).drop(labels=labels, axis=axis, index=index,
columns=columns, level=level,
inplace=inplace, errors=errors)
@Substitution(**_shared_doc_kwargs)
@Appender(generic.NDFrame.fillna.__doc__)
def fillna(self, value=None, method=None, axis=None, inplace=False,
limit=None, downcast=None, **kwargs):
return super(Series, self).fillna(value=value, method=method,
axis=axis, inplace=inplace,
limit=limit, downcast=downcast,
**kwargs)
@Appender(generic._shared_docs['replace'] % _shared_doc_kwargs)
def replace(self, to_replace=None, value=None, inplace=False, limit=None,
regex=False, method='pad'):
return super(Series, self).replace(to_replace=to_replace, value=value,
inplace=inplace, limit=limit,
regex=regex, method=method)
@Appender(generic._shared_docs['shift'] % _shared_doc_kwargs)
def shift(self, periods=1, freq=None, axis=0):
return super(Series, self).shift(periods=periods, freq=freq, axis=axis)
def reindex_axis(self, labels, axis=0, **kwargs):
"""Conform Series to new index with optional filling logic.
.. deprecated:: 0.21.0
Use ``Series.reindex`` instead.
"""
# for compatibility with higher dims
if axis != 0:
raise ValueError("cannot reindex series on non-zero axis!")
msg = ("'.reindex_axis' is deprecated and will be removed in a future "
"version. Use '.reindex' instead.")
warnings.warn(msg, FutureWarning, stacklevel=2)
return self.reindex(index=labels, **kwargs)
def memory_usage(self, index=True, deep=False):
"""
Return the memory usage of the Series.
The memory usage can optionally include the contribution of
the index and of elements of `object` dtype.
Parameters
----------
index : bool, default True
Specifies whether to include the memory usage of the Series index.
deep : bool, default False
If True, introspect the data deeply by interrogating
`object` dtypes for system-level memory consumption, and include
it in the returned value.
Returns
-------
int
Bytes of memory consumed.
See Also
--------
numpy.ndarray.nbytes : Total bytes consumed by the elements of the
array.
DataFrame.memory_usage : Bytes consumed by a DataFrame.
Examples
--------
>>> s = pd.Series(range(3))
>>> s.memory_usage()
104
Not including the index gives the size of the rest of the data, which
is necessarily smaller:
>>> s.memory_usage(index=False)
24
The memory footprint of `object` values is ignored by default:
>>> s = pd.Series(["a", "b"])
>>> s.values
array(['a', 'b'], dtype=object)
>>> s.memory_usage()
96
>>> s.memory_usage(deep=True)
212
"""
v = super(Series, self).memory_usage(deep=deep)
if index:
v += self.index.memory_usage(deep=deep)
return v
@Appender(generic._shared_docs['_take'])
def _take(self, indices, axis=0, is_copy=False):
indices = _ensure_platform_int(indices)
new_index = self.index.take(indices)
if is_categorical_dtype(self):
# https://github.com/pandas-dev/pandas/issues/20664
# TODO: remove when the default Categorical.take behavior changes
indices = maybe_convert_indices(indices, len(self._get_axis(axis)))
kwargs = {'allow_fill': False}
else:
kwargs = {}
new_values = self._values.take(indices, **kwargs)
result = (self._constructor(new_values, index=new_index,
fastpath=True).__finalize__(self))
# Maybe set copy if we didn't actually change the index.
if is_copy:
if not result._get_axis(axis).equals(self._get_axis(axis)):
result._set_is_copy(self)
return result
def isin(self, values):
"""
Check whether `values` are contained in Series.
Return a boolean Series showing whether each element in the Series
matches an element in the passed sequence of `values` exactly.
Parameters
----------
values : set or list-like
The sequence of values to test. Passing in a single string will
raise a ``TypeError``. Instead, turn a single string into a
list of one element.
.. versionadded:: 0.18.1
Support for values as a set.
Returns
-------
isin : Series (bool dtype)
Raises
------
TypeError
* If `values` is a string
See Also
--------
pandas.DataFrame.isin : equivalent method on DataFrame
Examples
--------
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama',
... 'hippo'], name='animal')
>>> s.isin(['cow', 'lama'])
0 True
1 True
2 True
3 False
4 True
5 False
Name: animal, dtype: bool
Passing a single string as ``s.isin('lama')`` will raise an error. Use
a list of one element instead:
>>> s.isin(['lama'])
0 True
1 False
2 True
3 False
4 True
5 False
Name: animal, dtype: bool
"""
result = algorithms.isin(self, values)
return self._constructor(result, index=self.index).__finalize__(self)
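The core of `algorithms.isin` is a hashed membership test: build a set of the test values once, then check each element in O(1). A minimal sketch (the name `isin_sketch` is hypothetical):

```python
def isin_sketch(values, test_values):
    # Hash the test values once, then do a constant-time membership
    # check per element -- the idea behind algorithms.isin.
    test_set = set(test_values)
    return [v in test_set for v in values]
```

This is also why passing a bare string raises in the real method: iterating a string would hash its characters, not the string itself.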
def between(self, left, right, inclusive=True):
"""
Return boolean Series equivalent to left <= series <= right.
This function returns a boolean vector containing `True` wherever the
corresponding Series element is between the boundary values `left` and
`right`. NA values are treated as `False`.
Parameters
----------
left : scalar
Left boundary.
right : scalar
Right boundary.
inclusive : bool, default True
Include boundaries.
Returns
-------
Series
Each element will be a boolean.
Notes
-----
This function is equivalent to ``(left <= ser) & (ser <= right)``
See Also
--------
pandas.Series.gt : Greater than of series and other
pandas.Series.lt : Less than of series and other
Examples
--------
>>> s = pd.Series([2, 0, 4, 8, np.nan])
Boundary values are included by default:
>>> s.between(1, 4)
0 True
1 False
2 True
3 False
4 False
dtype: bool
With `inclusive` set to ``False`` boundary values are excluded:
>>> s.between(1, 4, inclusive=False)
0 True
1 False
2 False
3 False
4 False
dtype: bool
`left` and `right` can be any scalar value:
>>> s = pd.Series(['Alice', 'Bob', 'Carol', 'Eve'])
>>> s.between('Anna', 'Daniel')
0 False
1 True
2 True
3 False
dtype: bool
"""
if inclusive:
lmask = self >= left
rmask = self <= right
else:
lmask = self > left
rmask = self < right
return lmask & rmask
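The two-mask combination above also explains the documented NA behavior: NaN compares `False` against both bounds, so NA entries come out `False`. A plain-Python sketch (the name `between_sketch` is hypothetical):

```python
def between_sketch(values, left, right, inclusive=True):
    # Same left-mask & right-mask combination as Series.between; NaN
    # fails both comparisons, so NA entries are False.
    if inclusive:
        return [left <= v <= right for v in values]
    return [left < v < right for v in values]
```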
@classmethod
def from_csv(cls, path, sep=',', parse_dates=True, header=None,
index_col=0, encoding=None, infer_datetime_format=False):
"""Read CSV file.
.. deprecated:: 0.21.0
Use :func:`pandas.read_csv` instead.
It is preferable to use the more powerful :func:`pandas.read_csv`
for most general purposes, but ``from_csv`` makes for an easy
roundtrip to and from a file (the exact counterpart of
``to_csv``), especially with a time Series.
This method only differs from :func:`pandas.read_csv` in some defaults:
- `index_col` is ``0`` instead of ``None`` (take first column as index
by default)
- `header` is ``None`` instead of ``0`` (the first row is not used as
the column names)
- `parse_dates` is ``True`` instead of ``False`` (try parsing the index
as datetime by default)
With :func:`pandas.read_csv`, the option ``squeeze=True`` can be used
to return a Series like ``from_csv``.
Parameters
----------
path : string file path or file handle / StringIO
sep : string, default ','
Field delimiter
parse_dates : boolean, default True
Parse dates. Different default from read_table
header : int, default None
Row to use as header (skip prior rows)
index_col : int or sequence, default 0
Column to use for index. If a sequence is given, a MultiIndex
is used. Different default from read_table
encoding : string, optional
a string representing the encoding to use if the contents are
non-ASCII, for Python versions prior to 3
infer_datetime_format : boolean, default False
If True and `parse_dates` is True for a column, try to infer the
datetime format based on the first datetime string. If the format
can be inferred, there often will be a large parsing speed-up.
See also
--------
pandas.read_csv
Returns
-------
y : Series
"""
# We're calling `DataFrame.from_csv` in the implementation,
# which will propagate a warning regarding `from_csv` deprecation.
from pandas.core.frame import DataFrame
df = DataFrame.from_csv(path, header=header, index_col=index_col,
sep=sep, parse_dates=parse_dates,
encoding=encoding,
infer_datetime_format=infer_datetime_format)
result = df.iloc[:, 0]
if header is None:
result.index.name = result.name = None
return result
def to_csv(self, path=None, index=True, sep=",", na_rep='',
float_format=None, header=False, index_label=None,
mode='w', encoding=None, compression=None, date_format=None,
decimal='.'):
"""
Write Series to a comma-separated values (csv) file
Parameters
----------
path : string or file handle, default None
File path or object, if None is provided the result is returned as
a string.
na_rep : string, default ''
Missing data representation
float_format : string, default None
Format string for floating point numbers
header : boolean, default False
Write out series name
index : boolean, default True
Write row names (index)
index_label : string or sequence, default None
Column label for index column(s) if desired. If None is given, and
`header` and `index` are True, then the index names are used. A
sequence should be given if the DataFrame uses MultiIndex.
mode : Python write mode, default 'w'
sep : character, default ","
Field delimiter for the output file.
encoding : string, optional
a string representing the encoding to use if the contents are
non-ASCII, for Python versions prior to 3
compression : string, optional
A string representing the compression to use in the output file.
Allowed values are 'gzip', 'bz2', 'zip', 'xz'. This input is only
used when the first argument is a filename.
date_format : string, default None
Format string for datetime objects.
decimal : string, default '.'
Character recognized as decimal separator. E.g. use ',' for
European data
"""
from pandas.core.frame import DataFrame
df = DataFrame(self)
# result is only a string if no path provided, otherwise None
result = df.to_csv(path, index=index, sep=sep, na_rep=na_rep,
float_format=float_format, header=header,
index_label=index_label, mode=mode,
encoding=encoding, compression=compression,
date_format=date_format, decimal=decimal)
if path is None:
return result
@Appender(generic._shared_docs['to_excel'] % _shared_doc_kwargs)
def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
float_format=None, columns=None, header=True, index=True,
index_label=None, startrow=0, startcol=0, engine=None,
merge_cells=True, encoding=None, inf_rep='inf', verbose=True):
df = self.to_frame()
df.to_excel(excel_writer=excel_writer, sheet_name=sheet_name,
na_rep=na_rep, float_format=float_format, columns=columns,
header=header, index=index, index_label=index_label,
startrow=startrow, startcol=startcol, engine=engine,
merge_cells=merge_cells, encoding=encoding,
inf_rep=inf_rep, verbose=verbose)
@Appender(generic._shared_docs['isna'] % _shared_doc_kwargs)
def isna(self):
return super(Series, self).isna()
@Appender(generic._shared_docs['isna'] % _shared_doc_kwargs)
def isnull(self):
return super(Series, self).isnull()
@Appender(generic._shared_docs['notna'] % _shared_doc_kwargs)
def notna(self):
return super(Series, self).notna()
@Appender(generic._shared_docs['notna'] % _shared_doc_kwargs)
def notnull(self):
return super(Series, self).notnull()
def dropna(self, axis=0, inplace=False, **kwargs):
"""
Return a new Series with missing values removed.
See the :ref:`User Guide <missing_data>` for more on which values are
considered missing, and how to work with missing data.
Parameters
----------
axis : {0 or 'index'}, default 0
There is only one axis to drop values from.
inplace : bool, default False
If True, do operation inplace and return None.
**kwargs
Not in use.
Returns
-------
Series
Series with NA entries dropped from it.
See Also
--------
Series.isna: Indicate missing values.
Series.notna : Indicate existing (non-missing) values.
Series.fillna : Replace missing values.
DataFrame.dropna : Drop rows or columns which contain NA values.
Index.dropna : Drop missing indices.
Examples
--------
>>> ser = pd.Series([1., 2., np.nan])
>>> ser
0 1.0
1 2.0
2 NaN
dtype: float64
Drop NA values from a Series.
>>> ser.dropna()
0 1.0
1 2.0
dtype: float64
Keep the Series with valid entries in the same variable.
>>> ser.dropna(inplace=True)
>>> ser
0 1.0
1 2.0
dtype: float64
Empty strings are not considered NA values. ``None`` is considered an
NA value.
>>> ser = pd.Series([np.NaN, 2, pd.NaT, '', None, 'I stay'])
>>> ser
0 NaN
1 2
2 NaT
3
4 None
5 I stay
dtype: object
>>> ser.dropna()
1 2
3
5 I stay
dtype: object
"""
inplace = validate_bool_kwarg(inplace, 'inplace')
kwargs.pop('how', None)
if kwargs:
raise TypeError('dropna() got an unexpected keyword '
'argument "{0}"'.format(list(kwargs.keys())[0]))
axis = self._get_axis_number(axis or 0)
if self._can_hold_na:
result = remove_na_arraylike(self)
if inplace:
self._update_inplace(result)
else:
return result
else:
if inplace:
# do nothing
pass
else:
return self.copy()
def valid(self, inplace=False, **kwargs):
"""Return Series without null values.
.. deprecated:: 0.23.0
Use :meth:`Series.dropna` instead.
"""
warnings.warn("Method .valid will be removed in a future version. "
"Use .dropna instead.", FutureWarning, stacklevel=2)
return self.dropna(inplace=inplace, **kwargs)
# ----------------------------------------------------------------------
# Time series-oriented methods
def to_timestamp(self, freq=None, how='start', copy=True):
"""
Cast to DatetimeIndex of timestamps, at *beginning* of period
Parameters
----------
freq : string, default frequency of PeriodIndex
Desired frequency
how : {'s', 'e', 'start', 'end'}
Convention for converting period to timestamp; start of period
vs. end
Returns
-------
ts : Series with DatetimeIndex
"""
new_values = self._values
if copy:
new_values = new_values.copy()
new_index = self.index.to_timestamp(freq=freq, how=how)
return self._constructor(new_values,
index=new_index).__finalize__(self)
def to_period(self, freq=None, copy=True):
"""
Convert Series from DatetimeIndex to PeriodIndex with desired
frequency (inferred from index if not passed)
Parameters
----------
freq : string, default
Returns
-------
ts : Series with PeriodIndex
"""
new_values = self._values
if copy:
new_values = new_values.copy()
new_index = self.index.to_period(freq=freq)
return self._constructor(new_values,
index=new_index).__finalize__(self)
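The two conversions above are index-level inverses of each other. A minimal round-trip sketch using only the public pandas API (not tied to the internal constructor plumbing here):

```python
import pandas as pd

# Round-trip sketch: DatetimeIndex -> PeriodIndex -> DatetimeIndex.
# Shown only to illustrate what the two methods above delegate to.
s = pd.Series([1, 2], index=pd.to_datetime(["2020-01-15", "2020-02-15"]))
p = s.to_period(freq="M")         # monthly periods: 2020-01, 2020-02
ts = p.to_timestamp(how="start")  # back to timestamps at period start
```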
# ----------------------------------------------------------------------
# Accessor Methods
# ----------------------------------------------------------------------
str = CachedAccessor("str", StringMethods)
dt = CachedAccessor("dt", CombinedDatetimelikeProperties)
cat = CachedAccessor("cat", CategoricalAccessor)
plot = CachedAccessor("plot", gfx.SeriesPlotMethods)
# ----------------------------------------------------------------------
# Add plotting methods to Series
hist = gfx.hist_series
Series._setup_axes(['index'], info_axis=0, stat_axis=0, aliases={'rows': 0},
docs={'index': 'The index (axis labels) of the Series.'})
Series._add_numeric_operations()
Series._add_series_only_operations()
Series._add_series_or_dataframe_operations()
# Add arithmetic!
ops.add_flex_arithmetic_methods(Series)
ops.add_special_arithmetic_methods(Series)
# -----------------------------------------------------------------------------
# Supplementary functions
def _sanitize_index(data, index, copy=False):
""" sanitize an index type to return an ndarray of the underlying, pass
thru a non-Index
"""
if index is None:
return data
if len(data) != len(index):
raise ValueError('Length of values does not match length of index')
if isinstance(data, ABCIndexClass) and not copy:
pass
elif isinstance(data, (PeriodIndex, DatetimeIndex)):
data = data._values
if copy:
data = data.copy()
elif isinstance(data, np.ndarray):
# coerce datetimelike types
if data.dtype.kind in ['M', 'm']:
data = _sanitize_array(data, index, copy=copy)
return data
def _sanitize_array(data, index, dtype=None, copy=False,
raise_cast_failure=False):
""" sanitize input data to an ndarray, copy if specified, coerce to the
dtype if specified
"""
if dtype is not None:
dtype = pandas_dtype(dtype)
if isinstance(data, ma.MaskedArray):
mask = ma.getmaskarray(data)
if mask.any():
data, fill_value = maybe_upcast(data, copy=True)
data[mask] = fill_value
else:
data = data.copy()
def _try_cast(arr, take_fast_path):
# perf shortcut as this is the most common case
if take_fast_path:
if maybe_castable(arr) and not copy and dtype is None:
return arr
try:
subarr = maybe_cast_to_datetime(arr, dtype)
# Take care in creating object arrays (but iterators are not
# supported):
if is_object_dtype(dtype) and (is_list_like(subarr) and
not (is_iterator(subarr) or
isinstance(subarr, np.ndarray))):
subarr = construct_1d_object_array_from_listlike(subarr)
elif not is_extension_type(subarr):
subarr = construct_1d_ndarray_preserving_na(subarr, dtype,
copy=copy)
except (ValueError, TypeError):
if is_categorical_dtype(dtype):
# We *do* allow casting to categorical, since we know
# that Categorical is the only array type for 'category'.
subarr = Categorical(arr, dtype.categories,
ordered=dtype.ordered)
elif is_extension_array_dtype(dtype):
# We don't allow casting to third party dtypes, since we don't
# know what array belongs to which type.
msg = ("Cannot cast data to extension dtype '{}'. "
"Pass the extension array directly.".format(dtype))
raise ValueError(msg)
elif dtype is not None and raise_cast_failure:
raise
else:
subarr = np.array(arr, dtype=object, copy=copy)
return subarr
# GH #846
if isinstance(data, (np.ndarray, Index, Series)):
if dtype is not None:
subarr = np.array(data, copy=False)
# possibility of nan -> garbage
if is_float_dtype(data.dtype) and is_integer_dtype(dtype):
if not isna(data).any():
subarr = _try_cast(data, True)
elif copy:
subarr = data.copy()
else:
subarr = _try_cast(data, True)
elif isinstance(data, Index):
# don't coerce Index types
# e.g. indexes can have different conversions (so don't fast path
# them)
# GH 6140
subarr = _sanitize_index(data, index, copy=copy)
else:
# we will try to copy by definition here
subarr = _try_cast(data, True)
elif isinstance(data, ExtensionArray):
subarr = data
if dtype is not None and not data.dtype.is_dtype(dtype):
msg = ("Cannot coerce extension array to dtype '{typ}'. "
"Do the coercion before passing to the constructor "
"instead.".format(typ=dtype))
raise ValueError(msg)
if copy:
subarr = data.copy()
return subarr
elif isinstance(data, (list, tuple)) and len(data) > 0:
if dtype is not None:
try:
subarr = _try_cast(data, False)
except Exception:
if raise_cast_failure: # pragma: no cover
raise
subarr = np.array(data, dtype=object, copy=copy)
subarr = lib.maybe_convert_objects(subarr)
else:
subarr = maybe_convert_platform(data)
subarr = maybe_cast_to_datetime(subarr, dtype)
elif isinstance(data, range):
# GH 16804
start, stop, step = get_range_parameters(data)
arr = np.arange(start, stop, step, dtype='int64')
subarr = _try_cast(arr, False)
else:
subarr = _try_cast(data, False)
# scalar like, GH
if getattr(subarr, 'ndim', 0) == 0:
if isinstance(data, list): # pragma: no cover
subarr = np.array(data, dtype=object)
elif index is not None:
value = data
# figure out the dtype from the value (upcast if necessary)
if dtype is None:
dtype, value = infer_dtype_from_scalar(value)
else:
# need to possibly convert the value here
value = maybe_cast_to_datetime(value, dtype)
subarr = construct_1d_arraylike_from_scalar(
value, len(index), dtype)
else:
return subarr.item()
# the result that we want
elif subarr.ndim == 1:
if index is not None:
# a 1-element ndarray
if len(subarr) != len(index) and len(subarr) == 1:
subarr = construct_1d_arraylike_from_scalar(
subarr[0], len(index), subarr.dtype)
elif subarr.ndim > 1:
if isinstance(data, np.ndarray):
raise Exception('Data must be 1-dimensional')
else:
subarr = com._asarray_tuplesafe(data, dtype=dtype)
# This is to prevent mixed-type Series getting all cast to
# NumPy string type, e.g. NaN --> '-1#IND'.
if issubclass(subarr.dtype.type, compat.string_types):
# GH 16605
# If not empty convert the data to dtype
# GH 19853: If data is a scalar, subarr has already the result
if not is_scalar(data):
if not np.all(isna(data)):
data = np.array(data, dtype=dtype, copy=False)
subarr = np.array(data, dtype=object, copy=copy)
return subarr
| mit |
ibis-project/ibis | ibis/backends/pandas/execution/window.py | 1 | 16879 | """Code for computing window functions with ibis and pandas."""
import functools
import operator
import re
from typing import Any, List, NoReturn, Optional, Union
import pandas as pd
import toolz
from pandas.core.groupby import SeriesGroupBy
import ibis.common.exceptions as com
import ibis.expr.operations as ops
import ibis.expr.window as win
from ibis.expr.scope import Scope
from ibis.expr.timecontext import (
construct_time_context_aware_series,
get_time_col,
)
from ibis.expr.typing import TimeContext
from .. import aggcontext as agg_ctx
from ..aggcontext import AggregationContext
from ..core import (
compute_time_context,
date_types,
execute,
integer_types,
simple_types,
timedelta_types,
timestamp_types,
)
from ..dispatch import execute_node, pre_execute
from ..execution import util
def _post_process_empty(
result: Any,
parent: pd.DataFrame,
order_by: List[str],
group_by: List[str],
timecontext: Optional[TimeContext],
) -> pd.Series:
# This is the post processing of a window with no groupby nor orderby.
# `result` could be a Series, DataFrame, or a scalar generated
# by the `agg` method of class `Window`. For a window without groupby or
# orderby, `agg` calls the pandas method directly. So if timecontext is
# present, we need to insert the 'time' column into the index for trimming
# the result. For cases when groupby or orderby is present, `agg` calls
# the ibis methods `window_agg_built_in` and `window_agg_udf`, where the
# time context is already inserted.
assert not order_by and not group_by
if isinstance(result, (pd.Series, pd.DataFrame)):
if timecontext:
result = construct_time_context_aware_series(result, parent)
return result
else:
# `result` is a scalar when a reduction operation is being
# applied over the window, since reduction operations are N->1
# in this case we do not need to trim result by timecontext,
# just expand reduction result to be a Series with `index`.
index = parent.index
result = pd.Series([result]).repeat(len(index))
result.index = index
return result
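The scalar branch above can be shown in isolation. This is a hypothetical standalone sketch of just the expansion step, not the ibis entry point:

```python
import pandas as pd

# Expanding a scalar reduction result to a Series aligned with the
# parent's index, as the else-branch above does.
parent = pd.DataFrame({"a": [10, 20, 30]}, index=["x", "y", "z"])
scalar = parent["a"].mean()  # a reduction collapses N rows to one value
expanded = pd.Series([scalar]).repeat(len(parent.index))
expanded.index = parent.index
```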
def _post_process_group_by(
series: pd.Series,
parent: pd.DataFrame,
order_by: List[str],
group_by: List[str],
timecontext: Optional[TimeContext],
) -> pd.Series:
assert not order_by and group_by
return series
def _post_process_order_by(
series,
parent: pd.DataFrame,
order_by: List[str],
group_by: List[str],
timecontext: Optional[TimeContext],
) -> pd.Series:
assert order_by and not group_by
indexed_parent = parent.set_index(order_by)
index = indexed_parent.index
names = index.names
if len(names) > 1:
series = series.reorder_levels(names)
series = series.iloc[index.argsort(kind='mergesort')]
return series
def _post_process_group_by_order_by(
series: pd.Series,
parent: pd.DataFrame,
order_by: List[str],
group_by: List[str],
timecontext: Optional[TimeContext],
) -> pd.Series:
indexed_parent = parent.set_index(group_by + order_by, append=True)
index = indexed_parent.index
# get the names of the levels that will be in the result
series_index_names = frozenset(series.index.names)
# get the levels common to series.index, in the order that they occur in
# the parent's index
reordered_levels = [
name for name in index.names if name in series_index_names
]
if len(reordered_levels) > 1:
series = series.reorder_levels(reordered_levels)
return series
@functools.singledispatch
def get_aggcontext(
window, *, scope, operand, parent, group_by, order_by, **kwargs,
) -> NoReturn:
raise NotImplementedError(
f"get_aggcontext is not implemented for {type(window).__name__}"
)
@get_aggcontext.register(win.Window)
def get_aggcontext_window(
window, *, scope, operand, parent, group_by, order_by, **kwargs,
) -> AggregationContext:
# no order by or group by: default summarization aggcontext
#
# if we're reducing and we have an order by expression then we need to
# expand or roll.
#
# otherwise we're transforming
output_type = operand.type()
if not group_by and not order_by:
aggcontext = agg_ctx.Summarize(parent=parent, output_type=output_type)
elif (
isinstance(
operand.op(), (ops.Reduction, ops.CumulativeOp, ops.Any, ops.All)
)
and order_by
):
# XXX(phillipc): What a horror show
preceding = window.preceding
if preceding is not None:
max_lookback = window.max_lookback
assert not isinstance(operand.op(), ops.CumulativeOp)
aggcontext = agg_ctx.Moving(
preceding,
max_lookback,
parent=parent,
group_by=group_by,
order_by=order_by,
output_type=output_type,
)
else:
# expanding window
aggcontext = agg_ctx.Cumulative(
parent=parent,
group_by=group_by,
order_by=order_by,
output_type=output_type,
)
else:
# groupby transform (window with a partition by clause in SQL parlance)
aggcontext = agg_ctx.Transform(
parent=parent,
group_by=group_by,
order_by=order_by,
output_type=output_type,
)
return aggcontext
def trim_window_result(
data: Union[pd.Series, pd.DataFrame], timecontext: Optional[TimeContext]
):
""" Trim data within time range defined by timecontext
This is a util function used in ``execute_window_op``, where time
context might be adjusted for calculation. Data must be trimmed
within the original time context before return.
`data` is a pd.Series with Multiindex for most cases, for multi
column udf result, `data` could be a pd.DataFrame
Params
------
data: pd.Series or pd.DataFrame
timecontext: Optional[TimeContext]
Returns:
------
a trimmed pd.Series or or pd.DataFrame with the same Multiindex
as data's
"""
# noop if timecontext is None
if not timecontext:
return data
assert isinstance(
data, (pd.Series, pd.DataFrame)
), 'window computed columns is not a pd.Series nor a pd.DataFrame'
# reset multiindex, convert Series into a DataFrame
df = data.reset_index()
# Filter the data, here we preserve the time index so that when user is
# computing a single column, the computation and the relevant time
# indexes are returned.
time_col = get_time_col()
if time_col not in df:
return data
subset = df.loc[df[time_col].between(*timecontext)]
# Get columns to set for index
if isinstance(data, pd.Series):
# if the Series doesn't have a name, reset_index will assign
# 0 as the column name for the value column
name = data.name if data.name else 0
index_columns = list(subset.columns.difference([name]))
else:
name = data.columns
index_columns = list(subset.columns.difference(name))
# set the correct index for return Series / DataFrame
indexed_subset = subset.set_index(index_columns)
return indexed_subset[name]
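The core of the trimming above is an inclusive `between` filter on the time column. An illustrative standalone version (column names and bounds are made up for the example):

```python
import pandas as pd

# Keep only rows whose 'time' value falls inside the (begin, end)
# context, mirroring df.loc[df[time_col].between(*timecontext)].
df = pd.DataFrame({
    "time": pd.to_datetime(["2020-01-01", "2020-01-05", "2020-01-10"]),
    "v": [1, 2, 3],
})
timecontext = (pd.Timestamp("2020-01-02"), pd.Timestamp("2020-01-10"))
subset = df.loc[df["time"].between(*timecontext)]  # inclusive on both ends
```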
@execute_node.register(ops.WindowOp, pd.Series, win.Window)
def execute_window_op(
op,
data,
window,
scope: Scope = None,
timecontext: Optional[TimeContext] = None,
aggcontext=None,
clients=None,
**kwargs,
):
operand = op.expr
# pre execute "manually" here because otherwise we wouldn't pickup
# relevant scope changes from the child operand since we're managing
# execution of that by hand
operand_op = operand.op()
adjusted_timecontext = None
if timecontext:
arg_timecontexts = compute_time_context(
op, timecontext=timecontext, clients=clients
)
# timecontext is the original time context required by parent node
# of this WindowOp, while adjusted_timecontext is the adjusted context
# of this Window, since we are doing a manual execution here, use
# adjusted_timecontext in later execution phases
adjusted_timecontext = arg_timecontexts[0]
pre_executed_scope = pre_execute(
operand_op,
*clients,
scope=scope,
timecontext=adjusted_timecontext,
aggcontext=aggcontext,
**kwargs,
)
scope = scope.merge_scope(pre_executed_scope)
(root,) = op.root_tables()
root_expr = root.to_expr()
data = execute(
root_expr,
scope=scope,
timecontext=adjusted_timecontext,
clients=clients,
aggcontext=aggcontext,
**kwargs,
)
following = window.following
order_by = window._order_by
if (
order_by
and following != 0
and not isinstance(operand_op, ops.ShiftBase)
):
raise com.OperationNotDefinedError(
'Window functions affected by following with order_by are not '
'implemented'
)
group_by = window._group_by
grouping_keys = [
key_op.name
if isinstance(key_op, ops.TableColumn)
else execute(
key,
scope=scope,
clients=clients,
timecontext=adjusted_timecontext,
aggcontext=aggcontext,
**kwargs,
)
for key, key_op in zip(
group_by, map(operator.methodcaller('op'), group_by)
)
]
order_by = window._order_by
if not order_by:
ordering_keys = []
if group_by:
if order_by:
(
sorted_df,
grouping_keys,
ordering_keys,
) = util.compute_sorted_frame(
data,
order_by,
group_by=group_by,
timecontext=adjusted_timecontext,
**kwargs,
)
source = sorted_df.groupby(grouping_keys, sort=True)
post_process = _post_process_group_by_order_by
else:
source = data.groupby(grouping_keys, sort=False)
post_process = _post_process_group_by
else:
if order_by:
source, grouping_keys, ordering_keys = util.compute_sorted_frame(
data, order_by, timecontext=adjusted_timecontext, **kwargs
)
post_process = _post_process_order_by
else:
source = data
post_process = _post_process_empty
# Here the groupby object should be added to the corresponding node in
# scope for execution; data will be overwritten with a groupby object,
# so we force an update regardless of time context
new_scope = scope.merge_scopes(
[
Scope({t: source}, adjusted_timecontext)
for t in operand.op().root_tables()
],
overwrite=True,
)
aggcontext = get_aggcontext(
window,
scope=scope,
operand=operand,
parent=source,
group_by=grouping_keys,
order_by=ordering_keys,
**kwargs,
)
result = execute(
operand,
scope=new_scope,
timecontext=adjusted_timecontext,
aggcontext=aggcontext,
clients=clients,
**kwargs,
)
result = post_process(
result, data, ordering_keys, grouping_keys, adjusted_timecontext,
)
assert len(data) == len(
result
), 'input data source and computed column do not have the same length'
# trim data to original time context
result = trim_window_result(result, timecontext)
return result
@execute_node.register(
(ops.CumulativeSum, ops.CumulativeMax, ops.CumulativeMin),
(pd.Series, SeriesGroupBy),
)
def execute_series_cumulative_sum_min_max(op, data, **kwargs):
typename = type(op).__name__
method_name = (
re.match(r"^Cumulative([A-Za-z_][A-Za-z0-9_]*)$", typename)
.group(1)
.lower()
)
method = getattr(data, "cum{}".format(method_name))
return method()
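The name-based dispatch above can be demonstrated on its own: strip the `Cumulative` prefix from a class name and call the matching pandas `cum*` method. The op class below is a stand-in, not the real ibis operation:

```python
import re
import pandas as pd

class CumulativeSum:  # stand-in for the ibis op class (assumption)
    pass

typename = CumulativeSum.__name__
method_name = (
    re.match(r"^Cumulative([A-Za-z_][A-Za-z0-9_]*)$", typename)
    .group(1)
    .lower()
)
# resolves to Series.cumsum for this class name
result = getattr(pd.Series([1, 2, 3]), "cum{}".format(method_name))()
```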
@execute_node.register(ops.CumulativeMean, (pd.Series, SeriesGroupBy))
def execute_series_cumulative_mean(op, data, **kwargs):
# TODO: Doesn't handle the case where we've grouped/sorted by. Handling
# this here would probably require a refactor.
return data.expanding().mean()
@execute_node.register(ops.CumulativeOp, (pd.Series, SeriesGroupBy))
def execute_series_cumulative_op(op, data, aggcontext=None, **kwargs):
assert aggcontext is not None, "aggcontext is none in {} operation".format(
type(op)
)
typename = type(op).__name__
match = re.match(r'^Cumulative([A-Za-z_][A-Za-z0-9_]*)$', typename)
if match is None:
raise ValueError('Unknown operation {}'.format(typename))
try:
(operation_name,) = match.groups()
except ValueError:
raise ValueError(
'More than one operation name found in {} class'.format(typename)
)
dtype = op.to_expr().type().to_pandas()
assert isinstance(aggcontext, agg_ctx.Cumulative), 'Got {}'.format(
type(aggcontext))
result = aggcontext.agg(data, operation_name.lower())
# all expanding window operations are required to be int64 or float64, so
# we need to cast back to preserve the type of the operation
try:
return result.astype(dtype)
except TypeError:
return result
def post_lead_lag(result, default):
if not pd.isnull(default):
return result.fillna(default)
return result
@execute_node.register(
(ops.Lead, ops.Lag),
(pd.Series, SeriesGroupBy),
integer_types + (type(None),),
simple_types + (type(None),),
)
def execute_series_lead_lag(op, data, offset, default, **kwargs):
func = toolz.identity if isinstance(op, ops.Lag) else operator.neg
result = data.shift(func(1 if offset is None else offset))
return post_lead_lag(result, default)
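Row-based lead and lag reduce to `Series.shift`: lag uses a positive offset, lead a negative one, and the default fills the resulting holes. A plain-pandas sketch of the behaviour:

```python
import pandas as pd

data = pd.Series([1, 2, 3, 4])
lag1 = data.shift(1).fillna(0)    # lag: value from one row earlier
lead1 = data.shift(-1).fillna(0)  # lead: value from one row later
```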
@execute_node.register(
(ops.Lead, ops.Lag),
(pd.Series, SeriesGroupBy),
timedelta_types,
date_types + timestamp_types + (str, type(None)),
)
def execute_series_lead_lag_timedelta(
op, data, offset, default, aggcontext=None, **kwargs
):
"""An implementation of shifting a column relative to another one that is
in units of time rather than rows.
"""
# lagging adds time (delayed), leading subtracts time (moved up)
func = operator.add if isinstance(op, ops.Lag) else operator.sub
group_by = aggcontext.group_by
order_by = aggcontext.order_by
# get the parent object from which `data` originated
parent = aggcontext.parent
# get the DataFrame from the parent object, handling the DataFrameGroupBy
# case
parent_df = getattr(parent, 'obj', parent)
# index our parent df by grouping and ordering keys
indexed_original_df = parent_df.set_index(group_by + order_by)
# perform the time shift
adjusted_parent_df = parent_df.assign(
**{k: func(parent_df[k], offset) for k in order_by}
)
# index the parent *after* adjustment
adjusted_indexed_parent = adjusted_parent_df.set_index(group_by + order_by)
# get the column we care about
result = adjusted_indexed_parent[getattr(data, 'obj', data).name]
# reindex the shifted data by the original frame's index
result = result.reindex(indexed_original_df.index)
# add a default if necessary
return post_lead_lag(result, default)
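A minimal sketch of the timedelta shift above, outside ibis: add the offset to the ordering column, re-index on it, then look the shifted values up at the original timestamps. Column names here are illustrative, not part of any API:

```python
import pandas as pd

df = pd.DataFrame({
    "t": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"]),
    "v": [1.0, 2.0, 3.0],
})
offset = pd.Timedelta(days=1)
original = df.set_index("t")["v"]
# lagging adds time: each value becomes visible one day later
shifted = df.assign(t=df["t"] + offset).set_index("t")["v"]
lagged = shifted.reindex(original.index)  # NaN where no earlier row exists
```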
@execute_node.register(ops.FirstValue, pd.Series)
def execute_series_first_value(op, data, **kwargs):
return data.values[0]
@execute_node.register(ops.FirstValue, SeriesGroupBy)
def execute_series_group_by_first_value(op, data, aggcontext=None, **kwargs):
return aggcontext.agg(data, 'first')
@execute_node.register(ops.LastValue, pd.Series)
def execute_series_last_value(op, data, **kwargs):
return data.values[-1]
@execute_node.register(ops.LastValue, SeriesGroupBy)
def execute_series_group_by_last_value(op, data, aggcontext=None, **kwargs):
return aggcontext.agg(data, 'last')
@execute_node.register(ops.MinRank, (pd.Series, SeriesGroupBy))
def execute_series_min_rank(op, data, **kwargs):
# TODO(phillipc): Handle ORDER BY
return data.rank(method='min', ascending=True).astype('int64') - 1
@execute_node.register(ops.DenseRank, (pd.Series, SeriesGroupBy))
def execute_series_dense_rank(op, data, **kwargs):
# TODO(phillipc): Handle ORDER BY
return data.rank(method='dense', ascending=True).astype('int64') - 1
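The two rank flavours above differ only in how ties are handled: `min` leaves gaps after tied groups, `dense` does not. Both are made zero-based by subtracting one, as in the handlers:

```python
import pandas as pd

data = pd.Series([10, 30, 30, 40])
min_rank = data.rank(method="min", ascending=True).astype("int64") - 1
dense_rank = data.rank(method="dense", ascending=True).astype("int64") - 1
# min leaves a gap after the tied 30s (..., 1, 1, 3); dense does not
```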
@execute_node.register(ops.PercentRank, (pd.Series, SeriesGroupBy))
def execute_series_percent_rank(op, data, **kwargs):
# TODO(phillipc): Handle ORDER BY
return data.rank(method='min', ascending=True, pct=True)
| apache-2.0 |
JPFrancoia/scikit-learn | sklearn/covariance/tests/test_covariance.py | 79 | 12193 | # Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Gael Varoquaux <gael.varoquaux@normalesup.org>
# Virgile Fritsch <virgile.fritsch@inria.fr>
#
# License: BSD 3 clause
import numpy as np
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_greater
from sklearn import datasets
from sklearn.covariance import empirical_covariance, EmpiricalCovariance, \
ShrunkCovariance, shrunk_covariance, \
LedoitWolf, ledoit_wolf, ledoit_wolf_shrinkage, OAS, oas
X = datasets.load_diabetes().data
X_1d = X[:, 0]
n_samples, n_features = X.shape
def test_covariance():
# Tests Covariance module on a simple dataset.
# test covariance fit from data
cov = EmpiricalCovariance()
cov.fit(X)
emp_cov = empirical_covariance(X)
assert_array_almost_equal(emp_cov, cov.covariance_, 4)
assert_almost_equal(cov.error_norm(emp_cov), 0)
assert_almost_equal(
cov.error_norm(emp_cov, norm='spectral'), 0)
assert_almost_equal(
cov.error_norm(emp_cov, norm='frobenius'), 0)
assert_almost_equal(
cov.error_norm(emp_cov, scaling=False), 0)
assert_almost_equal(
cov.error_norm(emp_cov, squared=False), 0)
assert_raises(NotImplementedError,
cov.error_norm, emp_cov, norm='foo')
# Mahalanobis distances computation test
mahal_dist = cov.mahalanobis(X)
assert_greater(np.amin(mahal_dist), 0)
# test with n_features = 1
X_1d = X[:, 0].reshape((-1, 1))
cov = EmpiricalCovariance()
cov.fit(X_1d)
assert_array_almost_equal(empirical_covariance(X_1d), cov.covariance_, 4)
assert_almost_equal(cov.error_norm(empirical_covariance(X_1d)), 0)
assert_almost_equal(
cov.error_norm(empirical_covariance(X_1d), norm='spectral'), 0)
# test with one sample
# Create X with 1 sample and 5 features
X_1sample = np.arange(5).reshape(1, 5)
cov = EmpiricalCovariance()
assert_warns(UserWarning, cov.fit, X_1sample)
assert_array_almost_equal(cov.covariance_,
np.zeros(shape=(5, 5), dtype=np.float64))
# test integer type
X_integer = np.asarray([[0, 1], [1, 0]])
result = np.asarray([[0.25, -0.25], [-0.25, 0.25]])
assert_array_almost_equal(empirical_covariance(X_integer), result)
# test centered case
cov = EmpiricalCovariance(assume_centered=True)
cov.fit(X)
assert_array_equal(cov.location_, np.zeros(X.shape[1]))
def test_shrunk_covariance():
# Tests ShrunkCovariance module on a simple dataset.
# compare shrunk covariance obtained from data and from MLE estimate
cov = ShrunkCovariance(shrinkage=0.5)
cov.fit(X)
assert_array_almost_equal(
shrunk_covariance(empirical_covariance(X), shrinkage=0.5),
cov.covariance_, 4)
# same test with shrinkage not provided
cov = ShrunkCovariance()
cov.fit(X)
assert_array_almost_equal(
shrunk_covariance(empirical_covariance(X)), cov.covariance_, 4)
# same test with shrinkage = 0 (<==> empirical_covariance)
cov = ShrunkCovariance(shrinkage=0.)
cov.fit(X)
assert_array_almost_equal(empirical_covariance(X), cov.covariance_, 4)
# test with n_features = 1
X_1d = X[:, 0].reshape((-1, 1))
cov = ShrunkCovariance(shrinkage=0.3)
cov.fit(X_1d)
assert_array_almost_equal(empirical_covariance(X_1d), cov.covariance_, 4)
# test shrinkage coeff on a simple data set (without saving precision)
cov = ShrunkCovariance(shrinkage=0.5, store_precision=False)
cov.fit(X)
assert(cov.precision_ is None)
def test_ledoit_wolf():
# Tests LedoitWolf module on a simple dataset.
# test shrinkage coeff on a simple data set
X_centered = X - X.mean(axis=0)
lw = LedoitWolf(assume_centered=True)
lw.fit(X_centered)
shrinkage_ = lw.shrinkage_
score_ = lw.score(X_centered)
assert_almost_equal(ledoit_wolf_shrinkage(X_centered,
assume_centered=True),
shrinkage_)
assert_almost_equal(ledoit_wolf_shrinkage(X_centered, assume_centered=True,
block_size=6),
shrinkage_)
# compare shrunk covariance obtained from data and from MLE estimate
lw_cov_from_mle, lw_shinkrage_from_mle = ledoit_wolf(X_centered,
assume_centered=True)
assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
assert_almost_equal(lw_shinkrage_from_mle, lw.shrinkage_)
# compare estimates given by LW and ShrunkCovariance
scov = ShrunkCovariance(shrinkage=lw.shrinkage_, assume_centered=True)
scov.fit(X_centered)
assert_array_almost_equal(scov.covariance_, lw.covariance_, 4)
# test with n_features = 1
X_1d = X[:, 0].reshape((-1, 1))
lw = LedoitWolf(assume_centered=True)
lw.fit(X_1d)
lw_cov_from_mle, lw_shinkrage_from_mle = ledoit_wolf(X_1d,
assume_centered=True)
assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
assert_almost_equal(lw_shinkrage_from_mle, lw.shrinkage_)
assert_array_almost_equal((X_1d ** 2).sum() / n_samples, lw.covariance_, 4)
# test shrinkage coeff on a simple data set (without saving precision)
lw = LedoitWolf(store_precision=False, assume_centered=True)
lw.fit(X_centered)
assert_almost_equal(lw.score(X_centered), score_, 4)
assert(lw.precision_ is None)
# Same tests without assuming centered data
# test shrinkage coeff on a simple data set
lw = LedoitWolf()
lw.fit(X)
assert_almost_equal(lw.shrinkage_, shrinkage_, 4)
assert_almost_equal(lw.shrinkage_, ledoit_wolf_shrinkage(X))
assert_almost_equal(lw.shrinkage_, ledoit_wolf(X)[1])
assert_almost_equal(lw.score(X), score_, 4)
# compare shrunk covariance obtained from data and from MLE estimate
lw_cov_from_mle, lw_shinkrage_from_mle = ledoit_wolf(X)
assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
assert_almost_equal(lw_shinkrage_from_mle, lw.shrinkage_)
# compare estimates given by LW and ShrunkCovariance
scov = ShrunkCovariance(shrinkage=lw.shrinkage_)
scov.fit(X)
assert_array_almost_equal(scov.covariance_, lw.covariance_, 4)
# test with n_features = 1
X_1d = X[:, 0].reshape((-1, 1))
lw = LedoitWolf()
lw.fit(X_1d)
lw_cov_from_mle, lw_shinkrage_from_mle = ledoit_wolf(X_1d)
assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
assert_almost_equal(lw_shinkrage_from_mle, lw.shrinkage_)
assert_array_almost_equal(empirical_covariance(X_1d), lw.covariance_, 4)
# test with one sample
# warning should be raised when using only 1 sample
X_1sample = np.arange(5).reshape(1, 5)
lw = LedoitWolf()
assert_warns(UserWarning, lw.fit, X_1sample)
assert_array_almost_equal(lw.covariance_,
np.zeros(shape=(5, 5), dtype=np.float64))
# test shrinkage coeff on a simple data set (without saving precision)
lw = LedoitWolf(store_precision=False)
lw.fit(X)
assert_almost_equal(lw.score(X), score_, 4)
assert(lw.precision_ is None)
def _naive_ledoit_wolf_shrinkage(X):
# A simple implementation of the formulas from Ledoit & Wolf
# The computation below achieves the following computations of the
# "O. Ledoit and M. Wolf, A Well-Conditioned Estimator for
# Large-Dimensional Covariance Matrices"
# beta and delta are given in the beginning of section 3.2
n_samples, n_features = X.shape
emp_cov = empirical_covariance(X, assume_centered=False)
mu = np.trace(emp_cov) / n_features
delta_ = emp_cov.copy()
delta_.flat[::n_features + 1] -= mu
delta = (delta_ ** 2).sum() / n_features
X2 = X ** 2
beta_ = 1. / (n_features * n_samples) \
* np.sum(np.dot(X2.T, X2) / n_samples - emp_cov ** 2)
beta = min(beta_, delta)
shrinkage = beta / delta
return shrinkage
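A numpy-only run of a close variant of the formula above, computed on explicitly centered random data. Because `beta = min(beta_, delta)`, the resulting shrinkage weight is a convex-combination coefficient in [0, 1]:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(50, 4))
n_samples, n_features = X.shape
Xc = X - X.mean(axis=0)               # center explicitly (assumption)
emp_cov = np.dot(Xc.T, Xc) / n_samples
mu = np.trace(emp_cov) / n_features
delta_ = emp_cov.copy()
delta_.flat[::n_features + 1] -= mu   # subtract mu from the diagonal
delta = (delta_ ** 2).sum() / n_features
X2 = Xc ** 2
beta_ = 1. / (n_features * n_samples) * np.sum(
    np.dot(X2.T, X2) / n_samples - emp_cov ** 2)
shrinkage = min(beta_, delta) / delta
```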
def test_ledoit_wolf_small():
# Compare our blocked implementation to the naive implementation
X_small = X[:, :4]
lw = LedoitWolf()
lw.fit(X_small)
shrinkage_ = lw.shrinkage_
assert_almost_equal(shrinkage_, _naive_ledoit_wolf_shrinkage(X_small))
def test_ledoit_wolf_large():
# test that ledoit_wolf doesn't error on data that is wider than block_size
rng = np.random.RandomState(0)
# use a number of features that is larger than the block-size
X = rng.normal(size=(10, 20))
lw = LedoitWolf(block_size=10).fit(X)
# check that covariance is about diagonal (random normal noise)
assert_almost_equal(lw.covariance_, np.eye(20), 0)
cov = lw.covariance_
# check that the result is consistent with not splitting data into blocks.
lw = LedoitWolf(block_size=25).fit(X)
assert_almost_equal(lw.covariance_, cov)
def test_oas():
# Tests OAS module on a simple dataset.
# test shrinkage coeff on a simple data set
X_centered = X - X.mean(axis=0)
oa = OAS(assume_centered=True)
oa.fit(X_centered)
shrinkage_ = oa.shrinkage_
score_ = oa.score(X_centered)
# compare shrunk covariance obtained from data and from MLE estimate
oa_cov_from_mle, oa_shinkrage_from_mle = oas(X_centered,
assume_centered=True)
assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
assert_almost_equal(oa_shinkrage_from_mle, oa.shrinkage_)
# compare estimates given by OAS and ShrunkCovariance
scov = ShrunkCovariance(shrinkage=oa.shrinkage_, assume_centered=True)
scov.fit(X_centered)
assert_array_almost_equal(scov.covariance_, oa.covariance_, 4)
# test with n_features = 1
X_1d = X[:, 0:1]
oa = OAS(assume_centered=True)
oa.fit(X_1d)
oa_cov_from_mle, oa_shinkrage_from_mle = oas(X_1d, assume_centered=True)
assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
assert_almost_equal(oa_shinkrage_from_mle, oa.shrinkage_)
assert_array_almost_equal((X_1d ** 2).sum() / n_samples, oa.covariance_, 4)
# test shrinkage coeff on a simple data set (without saving precision)
oa = OAS(store_precision=False, assume_centered=True)
oa.fit(X_centered)
assert_almost_equal(oa.score(X_centered), score_, 4)
assert(oa.precision_ is None)
# Same tests without assuming centered data--------------------------------
# test shrinkage coeff on a simple data set
oa = OAS()
oa.fit(X)
assert_almost_equal(oa.shrinkage_, shrinkage_, 4)
assert_almost_equal(oa.score(X), score_, 4)
# compare shrunk covariance obtained from data and from MLE estimate
oa_cov_from_mle, oa_shrinkage_from_mle = oas(X)
assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
assert_almost_equal(oa_shrinkage_from_mle, oa.shrinkage_)
# compare estimates given by OAS and ShrunkCovariance
scov = ShrunkCovariance(shrinkage=oa.shrinkage_)
scov.fit(X)
assert_array_almost_equal(scov.covariance_, oa.covariance_, 4)
# test with n_features = 1
X_1d = X[:, 0].reshape((-1, 1))
oa = OAS()
oa.fit(X_1d)
oa_cov_from_mle, oa_shrinkage_from_mle = oas(X_1d)
assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
assert_almost_equal(oa_shrinkage_from_mle, oa.shrinkage_)
assert_array_almost_equal(empirical_covariance(X_1d), oa.covariance_, 4)
# test with one sample
# warning should be raised when using only 1 sample
X_1sample = np.arange(5).reshape(1, 5)
oa = OAS()
assert_warns(UserWarning, oa.fit, X_1sample)
assert_array_almost_equal(oa.covariance_,
np.zeros(shape=(5, 5), dtype=np.float64))
# test shrinkage coeff on a simple data set (without saving precision)
oa = OAS(store_precision=False)
oa.fit(X)
assert_almost_equal(oa.score(X), score_, 4)
assert(oa.precision_ is None)
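The 1-D check above relies on a simple identity: with `assume_centered=True`, the MLE covariance of a single centered feature is just the mean of its squared values, which is what `(X_1d ** 2).sum() / n_samples` computes. A stdlib-only sketch of that identity (the sample values are illustrative):

```python
def mle_variance_centered(values):
    # MLE covariance of one feature, assuming a known mean of zero:
    # the average of the squared observations.
    return sum(v * v for v in values) / float(len(values))

x = [1.0, -2.0, 3.0, -2.0]  # already centered around zero
print(mle_variance_centered(x))  # 4.5
```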
| bsd-3-clause |
pap/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/backends/backend_qtagg.py | 73 | 4972 | """
Render to qt from agg
"""
from __future__ import division
import os, sys
import matplotlib
from matplotlib import verbose
from matplotlib.figure import Figure
from backend_agg import FigureCanvasAgg
from backend_qt import qt, FigureManagerQT, FigureCanvasQT,\
show, draw_if_interactive, backend_version, \
NavigationToolbar2QT
DEBUG = False
def new_figure_manager( num, *args, **kwargs ):
"""
Create a new figure manager instance
"""
if DEBUG: print 'backend_qtagg.new_figure_manager'
FigureClass = kwargs.pop('FigureClass', Figure)
thisFig = FigureClass( *args, **kwargs )
canvas = FigureCanvasQTAgg( thisFig )
return FigureManagerQTAgg( canvas, num )
class NavigationToolbar2QTAgg(NavigationToolbar2QT):
def _get_canvas(self, fig):
return FigureCanvasQTAgg(fig)
class FigureManagerQTAgg(FigureManagerQT):
def _get_toolbar(self, canvas, parent):
# must be inited after the window, drawingArea and figure
# attrs are set
if matplotlib.rcParams['toolbar']=='classic':
print "Classic toolbar is not yet supported"
elif matplotlib.rcParams['toolbar']=='toolbar2':
toolbar = NavigationToolbar2QTAgg(canvas, parent)
else:
toolbar = None
return toolbar
class FigureCanvasQTAgg( FigureCanvasAgg, FigureCanvasQT ):
"""
The canvas the figure renders into. Calls the draw and print fig
methods, creates the renderers, etc...
Public attribute
figure - A Figure instance
"""
def __init__( self, figure ):
if DEBUG: print 'FigureCanvasQtAgg: ', figure
FigureCanvasQT.__init__( self, figure )
FigureCanvasAgg.__init__( self, figure )
self.drawRect = False
self.rect = []
self.replot = True
self.pixmap = qt.QPixmap()
def resizeEvent( self, e ):
FigureCanvasQT.resizeEvent( self, e )
def drawRectangle( self, rect ):
self.rect = rect
self.drawRect = True
# False in repaint does not clear the image before repainting
self.repaint( False )
def paintEvent( self, e ):
"""
Draw to the Agg backend and then copy the image to the qt.drawable.
In Qt, all drawing should be done inside of here when a widget is
shown onscreen.
"""
FigureCanvasQT.paintEvent( self, e )
if DEBUG: print 'FigureCanvasQtAgg.paintEvent: ', self, \
self.get_width_height()
p = qt.QPainter( self )
# only replot data when needed
if type(self.replot) is bool: # might be a bbox for blitting
if self.replot:
FigureCanvasAgg.draw( self )
#stringBuffer = str( self.buffer_rgba(0,0) )
# matplotlib is in rgba byte order.
# qImage wants to put the bytes into argb format and
# is in a 4 byte unsigned int. little endian system is LSB first
# and expects the bytes in reverse order (bgra).
if ( qt.QImage.systemByteOrder() == qt.QImage.LittleEndian ):
stringBuffer = self.renderer._renderer.tostring_bgra()
else:
stringBuffer = self.renderer._renderer.tostring_argb()
qImage = qt.QImage( stringBuffer, self.renderer.width,
self.renderer.height, 32, None, 0,
qt.QImage.IgnoreEndian )
self.pixmap.convertFromImage( qImage, qt.QPixmap.Color )
p.drawPixmap( qt.QPoint( 0, 0 ), self.pixmap )
# draw the zoom rectangle to the QPainter
if ( self.drawRect ):
p.setPen( qt.QPen( qt.Qt.black, 1, qt.Qt.DotLine ) )
p.drawRect( self.rect[0], self.rect[1], self.rect[2], self.rect[3] )
# we are blitting here
else:
bbox = self.replot
l, b, r, t = bbox.extents
w = int(r) - int(l)
h = int(t) - int(b)
reg = self.copy_from_bbox(bbox)
stringBuffer = reg.to_string_argb()
qImage = qt.QImage(stringBuffer, w, h, 32, None, 0, qt.QImage.IgnoreEndian)
self.pixmap.convertFromImage(qImage, qt.QPixmap.Color)
p.drawPixmap(qt.QPoint(l, self.renderer.height-t), self.pixmap)
p.end()
self.replot = False
self.drawRect = False
def draw( self ):
"""
Draw the figure when xwindows is ready for the update
"""
if DEBUG: print "FigureCanvasQtAgg.draw", self
self.replot = True
FigureCanvasAgg.draw(self)
self.repaint(False)
def blit(self, bbox=None):
"""
Blit the region in bbox
"""
self.replot = bbox
self.repaint(False)
def print_figure(self, *args, **kwargs):
FigureCanvasAgg.print_figure(self, *args, **kwargs)
self.draw()
| agpl-3.0 |
Knight13/Exploring-Deep-Neural-Decision-Trees | Otto/NNDT_RF.py | 1 | 2651 | import numpy as np
import tensorflow as tf
import random
from neural_network_decision_tree import nn_decision_tree
from joblib import Parallel, delayed
"""train_data and test_data are list containg the X_train, y_train and X_test, y_test
obatined after splitting the data set using sklearn.model_selection.train_test_split"""
def random_forest(train_data, test_data, max_features, batch_size, epochs, *args, **kwargs):
# No. of trees: the total number of features divided by the max features per tree
num_trees = int(train_data[0].shape[1]/max_features)
error = []
for tree in xrange(num_trees):
features = []
# sample max_features random feature indices (with replacement) for this tree
for _ in xrange(max_features):
features.append(random.randrange(0, train_data[0].shape[1]))
col_idx = np.array(features)
X_train = train_data[0][:, col_idx]
y_train = train_data[1]
X_test = test_data[0][:, col_idx]
y_test = test_data[1]
num_cut = []
for f in xrange(max_features):
num_cut.append(1)
num_leaf = np.prod(np.array(num_cut) + 1)
num_class = y_train.shape[1]
seed = 1990
x_ph = tf.placeholder(tf.float32, [None, max_features])
y_ph = tf.placeholder(tf.float32, [None, num_class])
cut_points_list = [tf.Variable(tf.random_uniform([i])) for i in num_cut]
leaf_score = tf.Variable(tf.random_uniform([num_leaf, num_class]))
y_pred = nn_decision_tree(x_ph, cut_points_list, leaf_score, temperature=10)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_pred, labels=y_ph))
opt = tf.train.AdamOptimizer(0.1)
train_step = opt.minimize(loss)
sess = tf.InteractiveSession()
tf.set_random_seed(1990)
sess.run(tf.initialize_all_variables())
for epoch in range(epochs):
total_batch = int(X_train.shape[0]/batch_size)
for i in range(total_batch):
batch_mask = np.random.choice(X_train.shape[0], batch_size)
batch_x = X_train[batch_mask].reshape(-1, X_train.shape[1])
batch_y = y_train[batch_mask].reshape(-1, y_train.shape[1])
_, loss_e = sess.run([train_step, loss], feed_dict={x_ph: batch_x, y_ph: batch_y})
"""For each tree, the predicted values and the original y_values are stacked vertically
in two different numpy arrays after training each tree for 100 epochs"""
pred = np.vstack(np.array(y_pred.eval(feed_dict={x_ph: X_test}), dtype = np.float32))
orig = np.vstack(np.array(y_test, dtype = np.float32))
sess.close()
return (pred, orig)
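The `num_leaf = np.prod(np.array(num_cut) + 1)` line above determines the leaf count of each soft tree: one cut point per feature splits that axis into two bins, so a tree over `max_features` features has `2 ** max_features` leaves. A NumPy-free sketch of the same product, for illustration:

```python
from functools import reduce

def count_leaves(num_cut):
    # Each feature with c cut points contributes (c + 1) bins;
    # the leaf count is the product over all features.
    return reduce(lambda acc, c: acc * (c + 1), num_cut, 1)

print(count_leaves([1, 1, 1]))  # 8 leaves when max_features = 3
```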
| unlicense |
bavardage/statsmodels | statsmodels/examples/ex_emplike_1.py | 3 | 3620 | """
This is a basic tutorial on how to conduct basic empirical likelihood
inference for descriptive statistics. If matplotlib is installed
it also generates plots.
"""
import numpy as np
import statsmodels.api as sm
print 'Welcome to El'
np.random.seed(634) # No significance of the seed.
# Let's first generate some univariate data.
univariate = np.random.standard_normal(30)
# Now let's play with it
# Initiate an empirical likelihood descriptive statistics instance
eldescriptive = sm.emplike.DescStat(univariate)
# Empirical likelihood is (typically) a method of inference,
# not estimation. Therefore, there is no attribute eldescriptive.mean
# However, we can check the mean:
eldescriptive_mean = eldescriptive.endog.mean() #.42
#Let's conduct a hypothesis test to see if the mean is 0
print 'Hypothesis test results for the mean:'
print eldescriptive.test_mean(0)
# The first value is the -2 * log-likelihood ratio, which is distributed
# chi2. The second value is the p-value.
# Let's see what the variance is:
eldescriptive_var = eldescriptive.endog.var() # 1.01
#Let's test if the variance is 1:
print 'Hypothesis test results for the variance:'
print eldescriptive.test_var(1)
# Let's test if Skewness and Kurtosis are 0
print 'Hypothesis test results for Skewness:'
print eldescriptive.test_skew(0)
print 'Hypothesis test results for the Kurtosis:'
print eldescriptive.test_kurt(0)
# Note that the skewness and kurtosis tests take longer. This is because
# we have to optimize over the nuisance parameters (mean, variance).
# We can also test the skewness and kurtosis jointly
print ' Joint Skewness-Kurtosis test'
eldescriptive.test_joint_skew_kurt(0, 0)
# Let's try and get some confidence intervals
print 'Confidence interval for the mean'
print eldescriptive.ci_mean()
print 'Confidence interval for the variance'
print eldescriptive.ci_var()
print 'Confidence interval for skewness'
print eldescriptive.ci_skew()
print 'Confidence interval for kurtosis'
print eldescriptive.ci_kurt()
# if matplotlib is installed, we can get a contour plot for the mean
# and variance.
mean_variance_contour = eldescriptive.plot_contour(-.5, 1.2, .2, 2.5, .05, .05)
# This returns a figure instance. Just type mean_var_contour.show()
# to see the plot.
# Once you close the plot, we can start some multivariate analysis.
x1 = np.random.exponential(2, (30, 1))
x2 = 2 * x1 + np.random.chisquare(4, (30, 1))
mv_data = np.concatenate((x1, x2), axis=1)
mv_elmodel = sm.emplike.DescStat(mv_data)
# For multivariate data, the only methods are mv_test_mean,
# mv mean contour and ci_corr and test_corr.
# Let's test the hypothesis that x1 has a mean of 2 and x2 has a mean of 7
print 'Multivariate mean hypothesis test'
print mv_elmodel.mv_test_mean(np.array([2, 7]))
# Now let's get the confidence interval for correlation
print 'Correlation Coefficient CI'
print mv_elmodel.ci_corr()
# Note how this took much longer than previous functions. That is
# because the function is optimizing over 4 nuisance parameters.
# We can also do a hypothesis test for correlation
print 'Hypothesis test for correlation'
print mv_elmodel.test_corr(.7)
# Finally, let's create a contour plot for the means of the data
means_contour = mv_elmodel.mv_mean_contour(1, 3, 6,9, .15,.15, plot_dta=1)
# This also returns a fig so we can type mean_contour.show() to see the figure
# Sometimes, the data is very dispersed and we would like to see the confidence
# intervals without the plotted data. Let's see the difference when we set
# plot_dta=0
means_contour2 = mv_elmodel.mv_mean_contour(1, 3, 6,9, .05,.05, plot_dta=0)
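As noted above, `test_mean` and the other hypothesis tests return a -2 log-likelihood-ratio statistic that is asymptotically chi-squared with one degree of freedom, plus a p-value. For df = 1 the chi-squared survival function has a closed form via `math.erfc`, so the statistic-to-p-value conversion can be sketched without SciPy (this helper is an illustration, not part of the statsmodels API):

```python
import math

def chi2_sf_df1(stat):
    # P(chi2_1 > stat) = P(Z**2 > stat) = erfc(sqrt(stat / 2)) for Z ~ N(0, 1)
    return math.erfc(math.sqrt(stat / 2.0))

# The familiar 5% critical value for one degree of freedom:
print(round(chi2_sf_df1(3.841459), 3))
```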
| bsd-3-clause |
oche-jay/vEQ-benchmark | vEQ_ssim/vEQ_ssim.py | 1 | 17701 | '''
Created on 1 Jul 2015
@author: oche
'''
from __future__ import unicode_literals
import sys
import argparse
import os
import logging
import traceback
from util import validURLMatch, validYoutubeURLMatch
import subprocess
from subprocess import Popen
import re
from os.path import expanduser
from youtube_dl.utils import DownloadError
import pickle
import datetime
import time
import database.vEQ_database as DB
try:
from youtube_dl import YoutubeDL
except ImportError:
logging.error("Try installing for python Youtube-DL from PiP or similar")
try:
from pymediainfo import MediaInfo
except:
logging.warn("Using our own lib version of pymediainfo")
from util.pymediainfo import MediaInfo
ENV_DICT = os.environ
# PATH_TO_MEDIAINFO = "C:/MediaInfo" #Put this in a seperate PREFS file or something
PATH_TO_MEDIAINFO = "/usr/local/bin"
PATH_TO_FFMPEG = "/usr/local/bin"
PATH_TO_TINYSSIM = "/Users/oche/ffmpeg/tests"
PATH_TO_EPFL_VQMT = ""
ONLINE_VIDEO = False
LOCAL_VIDEO = False
video_url = ""
ENV_DICT["PATH"] = PATH_TO_TINYSSIM + os.pathsep + PATH_TO_FFMPEG + os.pathsep + PATH_TO_MEDIAINFO + os.pathsep + ENV_DICT["PATH"]
logging.getLogger().setLevel(logging.INFO)
def getLocalVideoInfo(video):
'''
Returns the codec, width and height of a video using MediaInfo
'''
try:
'''
if mediainfo isn't in the python environment path then this wont work.
'''
logging.info("Getting mediainfo for %s", video)
video_info = MediaInfo.parse(video)
for track in video_info.tracks:
if track.track_type == 'Video':
logging.info(" ".join(["Extracted info with MediaInfo: ", track.codec, str(track.width) , str(track.height)]) )
return track.codec, track.width, track.height
except OSError as oe:
logging.error("OS Error:", sys.exc_info())
sys.stderr.write("OS Error: Probably couldn't find MediaInfo \nIs it in ENV_DICT[\"PATH\"] ?\n")
sys.stderr.write(os.environ["PATH"]+"\n")
traceback.print_exc()
except:
logging.error("Unexpected error:", sys.exc_info())
traceback.print_exc()
sys.exit(1)
def scaledownYUV(video, orig_width=None, orig_height=None, target_width=None, target_height=None):
'''
Scale down a raw YUV file from its original resolution to the target resolution using ffmpeg.
'''
# first use the filename to check if a suitable scaledown version exists????
logging.debug("Scaling down YUV file")
fileName = os.path.basename(video)
filePath = os.path.dirname(video)
# replace runs of whitespace with a hyphen
fileName = re.sub(r"\s+", '-', fileName)
# then collapse runs of non-word characters (everything except dots) into an underscore
fileName = re.sub(r"[^.\w]+", '_', fileName)
fileName = 'scaled_' + str(target_width) +"x" + str(target_height) + "_" + fileName
outfile = os.path.join(filePath, fileName)
if os.path.exists((outfile)):
logging.warn("File already downloaded at : " + outfile)
return outfile
else:
# ffmpeg -s:v 1920x1080 -r 25 -i input.yuv -vf scale=960:540 -c:v rawvideo -pix_fmt yuv420p out.yuv
videosize_arg = str(orig_width)+"x"+str(orig_height)
codec_arg = 'rawvideo'
input_arg = video
scale_arg = "scale="+str(target_width) +':' + str(target_height)
command = ["ffmpeg", "-video_size", videosize_arg, "-i", input_arg, "-vf", scale_arg, "-codec:v", codec_arg, outfile]
for it in command:
print it,
print "\n"
p = Popen(command, env=ENV_DICT)
p.communicate(input)
return outfile
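The argument list assembled in `scaledownYUV` can be checked without invoking ffmpeg. A small sketch that mirrors the construction above (file names and sizes are illustrative):

```python
def build_scale_command(infile, outfile, orig_w, orig_h, target_w, target_h):
    # Raw YUV input carries no header, so ffmpeg needs an explicit -video_size;
    # the scale filter then carries the target geometry.
    return ["ffmpeg",
            "-video_size", "%dx%d" % (orig_w, orig_h),
            "-i", infile,
            "-vf", "scale=%d:%d" % (target_w, target_h),
            "-codec:v", "rawvideo",
            outfile]

cmd = build_scale_command("in.yuv", "out.yuv", 1920, 1080, 960, 540)
print(" ".join(cmd))
```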
def convertToYUV(video, **kwargs):
codec = kwargs.get('codec', None)
width = kwargs.get('width', None)
height = kwargs.get('height', None)
if codec is None or width is None or height is None:
try:
codec,width,height= getLocalVideoInfo(video)
except:
logging.error("Could not retrieve details from video\n using generic defaults")
codec = kwargs.get('codec', "codec")
width = kwargs.get('width', "width")
height = kwargs.get('height', "height")
try:
fileName = os.path.basename(video)
filePath = os.path.dirname(video)
# replace runs of whitespace with a hyphen
fileName = re.sub(r"\s+", '-', fileName)
# then collapse runs of non-word characters into an underscore and lowercase
fileName = re.sub(r"[\W]+", '_', fileName).lower()
fileName = "_".join([codec,(str(width) + "x" + str(height)),fileName])
outfile = os.path.join(filePath, fileName + '.yuv')
if os.path.exists((outfile)):
logging.warn("File already downloaded at : " + outfile)
else:
command = ["/usr/bin/ffmpeg", "-i", video, outfile ]
p = Popen(command, env=ENV_DICT)
p.communicate(input)
print "outfile: " + outfile
return outfile
except OSError as oe:
logging.error("OS Error:", sys.exc_info())
sys.stderr.write("OS Error: Probably couldn't find FFMPEG \nIs it in ENV_DICT[\"PATH\"] ?\n")
sys.stderr.write(os.environ["PATH"]+"\n")
traceback.print_exc()
except:
logging.error("Unexpected error:", sys.exc_info())
traceback.print_exc()
sys.exit(1)
def prepareFileName(filename):
logging.info("Preparing file name")
file_dir, base_file = os.path.split(filename)
logging.info("Original file name: %s " , filename)
# file_wo_ext = os.path.basename(base_file)
# print fileName
new_filename = re.sub(r"\s+", '-', base_file)
new_filename = re.sub(r"[^.\w]+", '_', new_filename).lower()
new_filename = os.path.join(file_dir, new_filename)
logging.info("New file name: %s " , new_filename)
return new_filename
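The two substitutions in `prepareFileName` — whitespace runs to a hyphen, then any remaining run of characters that is neither a word character nor a dot to an underscore — can be exercised in isolation. A stdlib sketch (the sample name is illustrative; note the hyphens introduced by the first step are themselves non-word characters, so they end up as underscores):

```python
import re

def sanitize(name):
    name = re.sub(r"\s+", '-', name)              # whitespace -> hyphen
    return re.sub(r"[^.\w]+", '_', name).lower()  # other non-word runs -> underscore

print(sanitize("My Video (720p).mp4"))  # my_video_720p_.mp4
```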
def downloadAndRenameVideo(video, video_download_folder, **kwargs):
"""
Downloads a video using youtube-dl and renames it to a consistent filename
"""
if not kwargs["quality"]:
logging.warn("No Quality level specified, will use youtube-dl best quality")
quality = kwargs.get('quality', "best")  # 18 for youtube is h264, mp4, 360p
youtube_dl_opts = {
'outtmpl':video_download_folder + '/%(resolution)s-%(format_id)s-%(title)s-%(id)s.%(ext)s',
'format':quality
}
with YoutubeDL(youtube_dl_opts) as ydl:
try:
info_dict = ydl.extract_info(video, download=False)
filename = ydl.prepare_filename(info_dict)
logging.info("YOUTUBE_DL Downloaded file will be saved as: %s", filename)
new_filename = prepareFileName(filename)
print new_filename
if os.path.exists((new_filename)):
logging.warn("File already downloaded at : " + new_filename)
else:
logging.info("File now being downloaded")
info_dict = ydl.extract_info(video, download=True)
logging.info("Download Complete: now renaming file")
os.rename(filename, new_filename)
logging.info("Video file now at : " + new_filename) #check if file has already been downloaded
except DownloadError as de:
logging.error("YDL-Download Error")
sys.exit(1)
except:
logging.error("YDL Error")
traceback.print_exc()
sys.exit(1)
video = new_filename
return video
def makeDownloadsFolder():
home = expanduser("~")
print home
video_download_folder = os.path.join(home, "vEQ-benchmark", "Downloads")
if not os.path.exists(video_download_folder):
os.makedirs(video_download_folder)
return video_download_folder
def tiny_ssim(testvideo_width, testvideo_height, test_yuv_file, reference_video_yuv):
commandx = ["tiny_ssim", reference_video_yuv, test_yuv_file, str(testvideo_width) + "x" + str(testvideo_height)]
# http://blog.endpoint.com/2015/01/getting-realtime-output-using-python.html
'''
Calling Popen with universal_newlines=True because tiny_ssim
outputs each line with a ^M (carriage return) terminator, which is
only recognised as a line break in universal-newlines mode.
In any case, this causes problems if not set.
'''
p = Popen(commandx, env=ENV_DICT, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
# out, err = p.communicate(input) //communicate() is potentially memory intensive
output_arr = []
while True:
output = p.stdout.readline()
if output == '' and p.poll() is not None:
break
if output:
output_arr.append(output)
logging.info(output.strip())
ssim_outfile = open("out_ssim.txt", "wb")
pickle.dump(output_arr, ssim_outfile)
ssim_arr = []
# Frame 1849 | PSNR Y:60.957 U:67.682 V:67.326 All:62.262 | SSIM Y:0.99984 U:0.99984 V:0.99982 All:0.99984 (37.96369)
# Frame 1850 | PSNR Y:60.920 U:67.682 V:67.326 All:62.228 | SSIM Y:0.99984 U:0.99984 V:0.99982 All:0.99984 (37.94860)
# last_val = 'Total 1851 frames | PSNR Y:45.039 U:52.712 V:52.995 All:46.454 | SSIM Y:0.98834 U:0.99602 V:0.99642 All:0.99096 (20.43960)\n'
last_val = output_arr[-1]
for it in output_arr:
ssim_arr.append(re.split(r"[^\d.]+", it))
tiny_ssim_results = re.split(r"[^\d.]+", last_val)
return tiny_ssim_results
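The `re.split(r"[^\d.]+", ...)` call above tokenises a tiny_ssim summary line into runs of digits and dots. Applied to the sample line quoted in the comments (the leading empty string comes from the non-numeric text before the frame count):

```python
import re

line = ('Total 1851 frames | PSNR Y:45.039 U:52.712 V:52.995 All:46.454 | '
        'SSIM Y:0.98834 U:0.99602 V:0.99642 All:0.99096 (20.43960)\n')
tokens = re.split(r"[^\d.]+", line)
# tokens[1] is the frame count, [2]..[5] the Y/U/V/All PSNR figures,
# [6]..[9] the Y/U/V/All SSIM figures.
print(tokens[2], tokens[5], tokens[6], tokens[9])
```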
def main(argv=None):
logging.getLogger().setLevel(logging.INFO)
parser = argparse.ArgumentParser(description="vEQ_ssim tool: A utilty tool for objective quality measurements")
parser.add_argument("video" , metavar="VIDEO", help="A local file or URL(Youtube, Vimeo etc.) for the video to be benchmarked")
parser.add_argument("-r", "--reference", metavar="reference video", dest="reference", help="A location or url for a video file (HD) to be used as a reference for the comparison")
parser.add_argument("-f", "--format", metavar="format", dest="format", help="The format of the Youtube DL file being tested for the comparison")
parser.add_argument("-l", "--databse-location", dest="db_loc", metavar ="location for database file or \'memory\'", help = "A absolute location for storing the database file ")
args = parser.parse_args()
video = args.video
test_format = args.format
reference_video = args.reference
#=====================================================================================================#
# DATABASE SETUP
#=====================================================================================================#
vEQdb = DB.vEQ_database()
db =vEQdb.getDB()
cursor = db.cursor()
cursor.execute("CREATE TABLE if NOT exists video_quality_info (%s);" % vEQdb.VIDEO_QUALITY_COLS)
#=====================================================================================================#
# DATABASE SETUP ENDS #
#=====================================================================================================#
if not validURLMatch(video) and not (os.access(video, os.R_OK)):
print('Error: %s file not readable' % video)
logging.error('Error: %s file not readable' % video)
sys.exit(1)
if validURLMatch(video):
ONLINE_VIDEO = True
logging.debug("Found online video")
if validYoutubeURLMatch(video):
logging.debug("Found Youtube video")
"""
try to download the video using youtube-dl
and then get info about the downloaded file
"""
video_download_folder = makeDownloadsFolder()
video_url = video
video = downloadAndRenameVideo(video, video_download_folder, quality=test_format)
else:
LOCAL_VIDEO = True
# should get here even if it was a Youtubeurl as the video file
# should have been downloaded from Youtube etc
if not validURLMatch(video) and (os.access(video, os.R_OK)):
logging.debug("Found regular video")
codec, testvideo_width, testvideo_height = getLocalVideoInfo(video)
test_yuv_file = convertToYUV(video, codec=codec, width=testvideo_width, height=testvideo_height)
if ONLINE_VIDEO: #youtube
# Now try to get the best quality video if an online, or a reference source file if local
if not reference_video:
logging.debug("Trying to get best quality video from remote server")
reference_video = downloadAndRenameVideo(video_url, video_download_folder, quality="bestvideo")
ref_codec, ref_width, ref_height = getLocalVideoInfo(reference_video)
logging.info("Format and size of best video available is: " + ref_codec +", " + str(ref_width) + "x" + str(ref_height))
logging.debug("Converting Reference (best quality video) to YUV")
reference_video_yuv = convertToYUV(reference_video)
if (ref_width != testvideo_width) or (ref_height != testvideo_height ):
logging.debug("Scaling best quality reference video from %sx%s to %sx%s", ref_width, ref_height, testvideo_width, testvideo_height )
#but first check if an appropriate scaled down HD version already exists
reference_video_yuv = scaledownYUV(reference_video_yuv,orig_width=ref_width, orig_height=ref_height, target_width=testvideo_width, target_height=testvideo_height)
else:
logging.debug("Trying to get best quality video from local server")
if not reference_video:
logging.error("No reference video found")
sys.exit()
# 0.98834
TESTING = True
if TESTING:
ssim_in = open( "out_ssim.txt", "rb" )
output_arr = pickle.load(ssim_in)
ssim_arr = []
last_val = output_arr[-1]
tiny_ssim_results = re.split(r"[^\d.]+", last_val)
else:
tiny_ssim_results = tiny_ssim(testvideo_width, testvideo_height, test_yuv_file, reference_video_yuv)
ypsnr = str(tiny_ssim_results[2])
apsnr = str(tiny_ssim_results[5])
yssim = str(tiny_ssim_results[6])
assim = str(tiny_ssim_results[9])  # token 9 is the All-SSIM figure in the summary line
#
# "id INTEGER PRIMARY KEY,"
# "timestamp REAL, "
# "video TEXT, "
# "url TEXT, "
# "reference_videoname TEXT"
# "metric1_ypsnr TEXT, "
# "metric2_apsnr TEXT, "
# "metric3_yssim TEXT, "
# "metric4_assim TEXT, "
# "metric5_other TEXT, "
# "metric6_other TEXT, "
# "metric7_other TEXT, "
# "metric8_other TEXT, "
# "metric9_other TEXT, "
# "metric10_other TEXT, "
timestamp = datetime.datetime.fromtimestamp(time.time()).strftime('%Y%m%d%H%M%S')
values = [timestamp, video, video_url, reference_video, ypsnr, apsnr, yssim, assim, 0, 0, 0, 0, 0, 0]  # 14 values; the id column is auto-filled
retcode = cursor.execute("INSERT INTO video_quality_info VALUES (null,?,?,?,?,?,?,?,?,?,?,?,?,?,?)", values)
db.commit()
print retcode
# reference_video = getSourceVideoInfo(source)
if __name__ == '__main__':
TESTING = False
if TESTING:
vEQdb = DB.vEQ_database(":memory:")
db =vEQdb.getDB()
cursor = db.cursor()
print vEQdb.VIDEO_QUALITY_COLS
cursor.execute("CREATE TABLE if NOT exists video_quality_info (%s);" % vEQdb.VIDEO_QUALITY_COLS)
ssim_in = open( "out_ssim.txt", "rb" )
output_arr = pickle.load(ssim_in)
ssim_arr = []
# Frame 1849 | PSNR Y:60.957 U:67.682 V:67.326 All:62.262 | SSIM Y:0.99984 U:0.99984 V:0.99982 All:0.99984 (37.96369)
# Frame 1850 | PSNR Y:60.920 U:67.682 V:67.326 All:62.228 | SSIM Y:0.99984 U:0.99984 V:0.99982 All:0.99984 (37.94860)
last_val = 'Total 1851 frames | PSNR Y:45.039 U:52.712 V:52.995 All:46.454 | SSIM Y:0.98834 U:0.99602 V:0.99642 All:0.99096 (20.43960)\n'
last_val = output_arr[-1]
for it in output_arr:
ssim_arr.append(re.split(r"[^\d.]+",it))
psnrs = zip(*ssim_arr)[2]  # column 2 of each split line is the per-frame Y-PSNR
print psnrs[0:-1][-1]
import matplotlib.pyplot as plt
#
# plt.plot(psnrs[0:-1])
# # plt.ylim(0.8,1)
# plt.show()
tiny_ssim_results = re.split(r"[^\d.]+", last_val)
print("AVG YPSNR: " + str(tiny_ssim_results[2]))
print("AVG ALL PSNR: " + str(tiny_ssim_results[5]))
print("AVG YSSIM: " + str(tiny_ssim_results[6]))
print("AVG All SSIM: " + str(tiny_ssim_results[9]))
timestamp = datetime.datetime.fromtimestamp(time.time()).strftime('%Y%m%d%H%M%S')
values = [timestamp, "video", video_url, "reference_video", "ypsnr", "apsnr", "yssim", "assim", 0, 0, 0, 0, 0, 0]  # 14 values; the id column is auto-filled
cursor = db.cursor()
cursor.execute("INSERT INTO video_quality_info VALUES (null,?,?,?,?,?,?,?,?,?,?,?,?,?,?)", values)
sys.exit()
else:
main()
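The parameterized INSERT used in `main` — a literal `null` so SQLite assigns the autoincrement id, followed by one `?` placeholder per column — can be demonstrated against an in-memory database with the stdlib `sqlite3` module. A sketch with a trimmed-down schema (column names are illustrative, not the full vEQ schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE video_quality_info ("
            "id INTEGER PRIMARY KEY, timestamp TEXT, video TEXT, yssim TEXT)")
# null lets SQLite fill the primary key; the remaining values are bound safely.
cur.execute("INSERT INTO video_quality_info VALUES (null, ?, ?, ?)",
            ("20150701120000", "clip.mp4", "0.98834"))
db.commit()
row = cur.execute("SELECT id, yssim FROM video_quality_info").fetchone()
print(row)  # (1, '0.98834')
```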
| gpl-2.0 |
jturney/psi4 | psi4/driver/qcdb/mpl.py | 7 | 54234 | #
# @BEGIN LICENSE
#
# Psi4: an open-source quantum chemistry software package
#
# Copyright (c) 2007-2021 The Psi4 Developers.
#
# The copyrights for code used from other parties are included in
# the corresponding files.
#
# This file is part of Psi4.
#
# Psi4 is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, version 3.
#
# Psi4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with Psi4; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# @END LICENSE
#
"""Module with matplotlib plotting routines. These are not hooked up to
any particular qcdb data structures but can be called with basic
arguments.
"""
import os
#import matplotlib
#matplotlib.use('Agg')
def expand_saveas(saveas, def_filename, def_path=os.path.abspath(os.curdir), def_prefix='', relpath=False):
"""Analyzes string *saveas* to see if it contains information on
path to save file, name to save file, both or neither (*saveas*
ends in '/' to indicate directory only) (able to expand '.'). A full
absolute filename is returned, lacking only file extension. Based on
analysis of missing parts of *saveas*, path information from *def_path*
and/or filename information from *def_prefix* + *def_filename* is
inserted. *def_prefix* is intended to be something like ``mplthread_``
to identify the type of figure.
"""
defname = def_prefix + def_filename.replace(' ', '_')
if saveas is None:
pth = def_path
fil = defname
else:
pth, fil = os.path.split(saveas)
pth = pth if pth != '' else def_path
fil = fil if fil != '' else defname
abspathfile = os.path.join(os.path.abspath(pth), fil)
if relpath:
return os.path.relpath(abspathfile, os.getcwd())
else:
return abspathfile
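The four cases the docstring describes (neither piece given, path only, filename only, or both) come down to `os.path.split` plus fallbacks. A condensed stdlib sketch of the same logic (default path and prefix are illustrative):

```python
import os

def expand(saveas, def_filename, def_path="/tmp/plots", def_prefix="mpl_"):
    defname = def_prefix + def_filename.replace(' ', '_')
    if saveas is None:
        pth, fil = def_path, defname
    else:
        pth, fil = os.path.split(saveas)
        pth = pth or def_path   # '' means saveas held no directory part
        fil = fil or defname    # '' means saveas ended in '/'
    return os.path.join(os.path.abspath(pth), fil)

print(expand(None, "my plot"))          # default path and default name
print(expand("out/", "my plot"))        # directory only: default name appended
print(expand("out/figure", "my plot"))  # both pieces supplied
```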
def segment_color(argcolor, saptcolor):
"""Find appropriate color expression between overall color directive
*argcolor* and particular color availibility *rxncolor*.
"""
import matplotlib
# validate any sapt color
if saptcolor is not None:
if saptcolor < 0.0 or saptcolor > 1.0:
saptcolor = None
if argcolor is None:
# no color argument, so take from the sapt color if available
if saptcolor is None:
clr = 'grey'
else:
clr = matplotlib.cm.jet(saptcolor)
elif argcolor == 'sapt':
# sapt color from rxn if available
if saptcolor is not None:
clr = matplotlib.cm.jet(saptcolor)
else:
clr = 'grey'
elif argcolor == 'rgb':
# HB/MX/DD sapt color from rxn if available
if saptcolor is not None:
if saptcolor < 0.333:
clr = 'blue'
elif saptcolor < 0.667:
clr = 'green'
else:
clr = 'red'
else:
clr = 'grey'
else:
# color argument is name of mpl color
clr = argcolor
return clr
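The 'rgb' branch above buckets the SAPT fraction into thirds (hydrogen-bonded / mixed / dispersion-dominated). A matplotlib-free sketch of just that mapping, including the range validation performed at the top of the function:

```python
def rgb_bucket(saptcolor):
    # Out-of-range or missing values fall back to grey, as in segment_color.
    if saptcolor is None or not (0.0 <= saptcolor <= 1.0):
        return 'grey'
    if saptcolor < 0.333:
        return 'blue'   # hydrogen-bonded
    if saptcolor < 0.667:
        return 'green'  # mixed
    return 'red'        # dispersion-dominated

print(rgb_bucket(0.2), rgb_bucket(0.5), rgb_bucket(0.9), rgb_bucket(None))
```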
def bars(data, title='', saveas=None, relpath=False, graphicsformat=['pdf'], view=True):
"""Generates a 'gray-bars' diagram between model chemistries with error
statistics in list *data*, which is supplied as part of the dictionary
for each participating bar/modelchem, along with *mc* keys in argument
*data*. The plot is labeled with *title* and each bar with *mc* key and
plotted at a fixed scale to facilitate comparison across projects.
"""
import hashlib
import matplotlib.pyplot as plt
# initialize plot, fix dimensions for consistent Illustrator import
fig, ax = plt.subplots(figsize=(12, 7))
plt.ylim([0, 4.86])
plt.xlim([0, 6])
plt.xticks([])
# label plot and tiers
ax.text(0.4, 4.6, title,
verticalalignment='bottom', horizontalalignment='left',
family='Times New Roman', weight='bold', fontsize=12)
widths = [0.15, 0.02, 0.02, 0.02] # TT, HB, MX, DD
xval = 0.1 # starting posn along x-axis
# plot bar sets
for bar in data:
if bar is not None:
lefts = [xval, xval + 0.025, xval + 0.065, xval + 0.105]
rect = ax.bar(lefts, bar['data'], widths, linewidth=0)
rect[0].set_color('grey')
rect[1].set_color('red')
rect[2].set_color('green')
rect[3].set_color('blue')
ax.text(xval + .08, 4.3, bar['mc'],
verticalalignment='center', horizontalalignment='right', rotation='vertical',
family='Times New Roman', fontsize=8)
xval += 0.20
# save and show
pltuid = title + '_' + hashlib.sha1((title + repr([bar['mc'] for bar in data if bar is not None])).encode()).hexdigest()
pltfile = expand_saveas(saveas, pltuid, def_prefix='bar_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight')
files_saved[ext.lower()] = savefile
if view:
plt.show()
plt.close()
return files_saved
def flat(data, color=None, title='', xlimit=4.0, xlines=[0.0, 0.3, 1.0], mae=None, mape=None, view=True,
saveas=None, relpath=False, graphicsformat=['pdf']):
"""Generates a slat diagram between model chemistries with errors in
single-item list *data*, which is supplied as part of the dictionary
for each participating reaction, along with *dbse* and *rxn* keys in
argument *data*. Limits of plot are *xlimit* from the zero-line. If
*color* is None, slats are black, if 'sapt', colors are taken from
sapt_colors module. Summary statistic *mae* is plotted on the
overbound side and relative statistic *mape* on the underbound side.
Saves a file with name *title* and plots to screen if *view*.
"""
import matplotlib.pyplot as plt
Nweft = 1
positions = range(-1, -1 * Nweft - 1, -1)
# initialize plot
fig, ax = plt.subplots(figsize=(12, 0.33))
plt.xlim([-xlimit, xlimit])
plt.ylim([-1 * Nweft - 1, 0])
plt.yticks([])
plt.xticks([])
# fig.patch.set_visible(False)
# ax.patch.set_visible(False)
ax.axis('off')
for xl in xlines:
plt.axvline(xl, color='grey', linewidth=4)
if xl != 0.0:
plt.axvline(-1 * xl, color='grey', linewidth=4)
# plot reaction errors and threads
for rxn in data:
xvals = rxn['data']
clr = segment_color(color, rxn['color'] if 'color' in rxn else None)
ax.plot(xvals, positions, '|', color=clr, markersize=13.0, mew=4)
# plot trimmings
if mae is not None:
plt.axvline(-1 * mae, color='black', linewidth=12)
if mape is not None: # equivalent to MAE for a 10 kcal/mol interaction energy
ax.plot(0.025 * mape, positions, 'o', color='black', markersize=15.0)
# save and show
pltuid = title # simple (not really unique) filename for LaTeX integration
pltfile = expand_saveas(saveas, pltuid, def_prefix='flat_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
        plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight',
                    pad_inches=0.0)
files_saved[ext.lower()] = savefile
if view:
plt.show()
    plt.close()
return files_saved
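# Example call to flat() above (hedged sketch; all field values are
# hypothetical, and the module helpers segment_color/expand_saveas are
# assumed in scope):
#     flat([{'sys': '1', 'data': [0.46], 'color': 0.69},
#           {'sys': '2', 'data': [-0.33], 'color': 0.21}],
#          color='sapt', title='demo_flat', mae=0.25, mape=20.1, view=False)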
#def mpl_distslat_multiplot_files(pltfile, dbid, dbname, xmin, xmax, mcdats, labels, titles):
# """Saves a plot with basename *pltfile* with a slat representation
# of the modelchems errors in *mcdat*. Plot is in PNG, PDF, & EPS
# and suitable for download, no mouseover properties. Both labeled
# and labelless (for pub) figures are constructed.
#
# """
# import matplotlib as mpl
# from matplotlib.axes import Subplot
# import sapt_colors
# from matplotlib.figure import Figure
#
# nplots = len(mcdats)
# fht = nplots * 0.8
# fig, axt = plt.subplots(figsize=(12.0, fht))
# plt.subplots_adjust(left=0.01, right=0.99, hspace=0.3)
#
# axt.set_xticks([])
# axt.set_yticks([])
# plt.axis('off')
#
# for item in range(nplots):
# mcdat = mcdats[item]
# label = labels[item]
# title = titles[item]
#
# erdat = np.array(mcdat)
# yvals = np.ones(len(mcdat))
# y = np.array([sapt_colors.sapt_colors[dbname][i] for i in label])
#
# ax = Subplot(fig, nplots, 1, item + 1)
# fig.add_subplot(ax)
# sc = ax.scatter(erdat, yvals, c=y, s=3000, marker="|", cmap=mpl.cm.jet, vmin=0, vmax=1)
#
# ax.set_yticks([])
# ax.set_xticks([])
# ax.set_frame_on(False)
# ax.set_xlim([xmin, xmax])
#
# # Write files with only slats
# plt.savefig('scratch/' + pltfile + '_plain' + '.png', transparent=True, format='PNG')
# plt.savefig('scratch/' + pltfile + '_plain' + '.pdf', transparent=True, format='PDF')
# plt.savefig('scratch/' + pltfile + '_plain' + '.eps', transparent=True, format='EPS')
#
# # Rewrite files with guides and labels
# for item in range(nplots):
# ax_again = fig.add_subplot(nplots, 1, item + 1)
# ax_again.set_title(titles[item], fontsize=8)
# ax_again.text(xmin + 0.3, 1.0, stats(np.array(mcdats[item])), fontsize=7, family='monospace', verticalalignment='center')
# ax_again.plot([0, 0], [0.9, 1.1], color='#cccc00', lw=2)
# ax_again.set_frame_on(False)
# ax_again.set_yticks([])
# ax_again.set_xticks([-12.0, -8.0, -4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0, 8.0, 12.0])
# ax_again.tick_params(axis='both', which='major', labelbottom='off', bottom='off')
# ax_again.set_xticks([-12.0, -8.0, -4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0, 8.0, 12.0])
# ax_again.tick_params(axis='both', which='major', labelbottom='on', bottom='off')
#
# plt.savefig('scratch/' + pltfile + '_trimd' + '.png', transparent=True, format='PNG')
# plt.savefig('scratch/' + pltfile + '_trimd' + '.pdf', transparent=True, format='PDF')
# plt.savefig('scratch/' + pltfile + '_trimd' + '.eps', transparent=True, format='EPS')
def valerr(data, color=None, title='', xtitle='', view=True,
           saveas=None, relpath=False, graphicsformat=['pdf']):
    """Generates a two-panel plot of reaction energies (upper) and errors
    (lower) from dictionary *data* of traces, each a list of reaction
    dictionaries bearing 'axis', 'mcdata', 'bmdata', and 'error' keys.
    Plot is titled *title* with x-axis label *xtitle*, saved according to
    *saveas* and *graphicsformat*, and shown on screen if *view*.

    """
import hashlib
from itertools import cycle
import matplotlib.pyplot as plt
    # two stacked panels sharing an x-axis: energies above, errors below
    fig = plt.figure(figsize=(4, 6))
    ax1 = fig.add_subplot(211)
    ax1.axhline(0.0, color='black')
    ax1.set_ylabel('Reaction Energy')
    ax1.set_title(title)
    ax2 = fig.add_subplot(212, sharex=ax1)
    ax2.axhline(0.0, color='#cccc00')
    ax2.set_ylabel('Energy Error')
    ax2.set_xlabel(xtitle)
xmin = 500.0
xmax = -500.0
vmin = 1.0
vmax = -1.0
emin = 1.0
emax = -1.0
linecycler = cycle(['-', '--', '-.', ':'])
# plot reaction errors and threads
for trace, tracedata in data.items():
vaxis = []
vmcdata = []
verror = []
for rxn in tracedata:
clr = segment_color(color, rxn['color'] if 'color' in rxn else None)
xmin = min(xmin, rxn['axis'])
xmax = max(xmax, rxn['axis'])
ax1.plot(rxn['axis'], rxn['mcdata'], '^', color=clr, markersize=6.0, mew=0, zorder=10)
vmcdata.append(rxn['mcdata'])
vaxis.append(rxn['axis'])
vmin = min(0, vmin, rxn['mcdata'])
vmax = max(0, vmax, rxn['mcdata'])
if rxn['bmdata'] is not None:
ax1.plot(rxn['axis'], rxn['bmdata'], 'o', color='black', markersize=6.0, zorder=1)
vmin = min(0, vmin, rxn['bmdata'])
vmax = max(0, vmax, rxn['bmdata'])
if rxn['error'][0] is not None:
ax2.plot(rxn['axis'], rxn['error'][0], 's', color=clr, mew=0, zorder=8)
emin = min(0, emin, rxn['error'][0])
emax = max(0, emax, rxn['error'][0])
verror.append(rxn['error'][0])
ls = next(linecycler)
ax1.plot(vaxis, vmcdata, ls, color='grey', label=trace, zorder=3)
ax2.plot(vaxis, verror, ls, color='grey', label=trace, zorder=4)
xbuf = max(0.05, abs(0.02 * xmax))
vbuf = max(0.1, abs(0.02 * vmax))
ebuf = max(0.01, abs(0.02 * emax))
plt.xlim([xmin - xbuf, xmax + xbuf])
ax1.set_ylim([vmin - vbuf, vmax + vbuf])
plt.legend(fontsize='x-small', frameon=False)
ax2.set_ylim([emin - ebuf, emax + ebuf])
# save and show
pltuid = title + '_' + hashlib.sha1(title.encode()).hexdigest()
pltfile = expand_saveas(saveas, pltuid, def_prefix='valerr_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight')
files_saved[ext.lower()] = savefile
if view:
plt.show()
    plt.close()
return files_saved
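# Sketch of the *data* mapping valerr() above expects (keys inferred from
# the loop body; the numeric values here are hypothetical):
#     valerr({'traceA': [{'axis': 1.0, 'mcdata': -2.1, 'bmdata': -2.3, 'error': [0.2]},
#                        {'axis': 2.0, 'mcdata': -1.0, 'bmdata': None, 'error': [None]}]},
#            title='demo', xtitle='R', view=False)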
def disthist(data, title='', xtitle='', xmin=None, xmax=None,
me=None, stde=None, view=True,
saveas=None, relpath=False, graphicsformat=['pdf']):
"""Saves a plot with name *saveas* with a histogram representation
of the reaction errors in *data*. Also plots a gaussian distribution
with mean *me* and standard deviation *stde*. Plot has x-range
*xmin* to *xmax*, x-axis label *xtitle* and overall title *title*.
"""
import hashlib
import numpy as np
import matplotlib.pyplot as plt
def gaussianpdf(u, v, x):
"""*u* is mean, *v* is variance, *x* is value, returns probability"""
return 1.0 / np.sqrt(2.0 * np.pi * v) * np.exp(-pow(x - u, 2) / 2.0 / v)
me = me if me is not None else np.mean(data)
stde = stde if stde is not None else np.std(data, ddof=1)
evenerr = max(abs(me - 4.0 * stde), abs(me + 4.0 * stde))
xmin = xmin if xmin is not None else -1 * evenerr
xmax = xmax if xmax is not None else evenerr
    # evaluate the reference gaussian on an even 41-point grid across the x-range
    pdfx = np.linspace(xmin, xmax, 41)
    pdfy = gaussianpdf(me, stde ** 2, pdfx)
fig, ax1 = plt.subplots(figsize=(16, 6))
plt.axvline(0.0, color='#cccc00')
ax1.set_xlim(xmin, xmax)
ax1.hist(data, bins=30, range=(xmin, xmax), color='#2d4065', alpha=0.7)
ax1.set_xlabel(xtitle)
ax1.set_ylabel('Count')
ax2 = ax1.twinx()
ax2.fill(pdfx, pdfy, color='k', alpha=0.2)
ax2.set_ylabel('Probability Density')
plt.title(title)
# save and show
pltuid = title + '_' + hashlib.sha1((title + str(me) + str(stde) + str(xmin) + str(xmax)).encode()).hexdigest()
pltfile = expand_saveas(saveas, pltuid, def_prefix='disthist_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight')
files_saved[ext.lower()] = savefile
if view:
plt.show()
plt.close()
return files_saved
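# Example for disthist() above (hedged; random normal errors stand in for
# real reaction error data):
#     import numpy as np
#     disthist(np.random.normal(0.0, 0.3, 500).tolist(),
#              title='demo_hist', xtitle='error', view=False)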
#def thread(data, labels, color=None, title='', xlimit=4.0, mae=None, mape=None):
# """Generates a tiered slat diagram between model chemistries with
# errors (or simply values) in list *data*, which is supplied as part of the
# dictionary for each participating reaction, along with *dbse* and *rxn* keys
# in argument *data*. The plot is labeled with *title* and each tier with
# an element of *labels* and plotted at *xlimit* from the zero-line. If
# *color* is None, slats are black, if 'sapt', colors are taken from *color*
# key in *data* [0, 1]. Summary statistics *mae* are plotted on the
# overbound side and relative statistics *mape* on the underbound side.
#
# """
# from random import random
# import matplotlib.pyplot as plt
#
# # initialize tiers/wefts
# Nweft = len(labels)
# lenS = 0.2
# gapT = 0.04
# positions = range(-1, -1 * Nweft - 1, -1)
# posnS = []
# for weft in range(Nweft):
# posnS.extend([positions[weft] + lenS, positions[weft] - lenS, None])
# posnT = []
# for weft in range(Nweft - 1):
# posnT.extend([positions[weft] - lenS - gapT, positions[weft + 1] + lenS + gapT, None])
#
# # initialize plot
# fht = Nweft * 0.8
# fig, ax = plt.subplots(figsize=(12, fht))
# plt.subplots_adjust(left=0.01, right=0.99, hspace=0.3)
# plt.xlim([-xlimit, xlimit])
# plt.ylim([-1 * Nweft - 1, 0])
# plt.yticks([])
#
# # label plot and tiers
# ax.text(-0.9 * xlimit, -0.25, title,
# verticalalignment='bottom', horizontalalignment='left',
# family='Times New Roman', weight='bold', fontsize=12)
# for weft in labels:
# ax.text(-0.9 * xlimit, -(1.2 + labels.index(weft)), weft,
# verticalalignment='bottom', horizontalalignment='left',
# family='Times New Roman', weight='bold', fontsize=18)
#
# # plot reaction errors and threads
# for rxn in data:
#
# # preparation
# xvals = rxn['data']
# clr = segment_color(color, rxn['color'] if 'color' in rxn else None)
# slat = []
# for weft in range(Nweft):
# slat.extend([xvals[weft], xvals[weft], None])
# thread = []
# for weft in range(Nweft - 1):
# thread.extend([xvals[weft], xvals[weft + 1], None])
#
# # plotting
# ax.plot(slat, posnS, color=clr, linewidth=1.0, solid_capstyle='round')
# ax.plot(thread, posnT, color=clr, linewidth=0.5, solid_capstyle='round',
# alpha=0.3)
#
# # labeling
# try:
# toplblposn = next(item for item in xvals if item is not None)
# botlblposn = next(item for item in reversed(xvals) if item is not None)
# except StopIteration:
# pass
# else:
# ax.text(toplblposn, -0.75 + 0.6 * random(), rxn['sys'],
# verticalalignment='bottom', horizontalalignment='center',
# family='Times New Roman', fontsize=8)
# ax.text(botlblposn, -1 * Nweft - 0.75 + 0.6 * random(), rxn['sys'],
# verticalalignment='bottom', horizontalalignment='center',
# family='Times New Roman', fontsize=8)
#
# # plot trimmings
# if mae is not None:
# ax.plot([-x for x in mae], positions, 's', color='black')
# if mape is not None: # equivalent to MAE for a 10 kcal/mol IE
# ax.plot([0.025 * x for x in mape], positions, 'o', color='black')
#
# plt.axvline(0, color='black')
# plt.show()
def threads(data, labels, color=None, title='', xlimit=4.0, mae=None, mape=None,
mousetext=None, mouselink=None, mouseimag=None, mousetitle=None, mousediv=None,
labeled=True, view=True,
saveas=None, relpath=False, graphicsformat=['pdf']):
    """Generates a tiered slat diagram between model chemistries with
    errors (or simply values) in list *data*, which is supplied as part of the
    dictionary for each participating reaction, along with *dbse* and *rxn* keys
    in argument *data*. The plot is labeled with *title* and each tier with
    an element of *labels* and plotted at *xlimit* from the zero-line. If
    *color* is None, slats are black; if 'sapt', colors are taken from the
    *color* key in *data* [0, 1]. Summary statistics *mae* are plotted on the
    overbound side and relative statistics *mape* on the underbound side.
    HTML code for mouseover is generated if *mousetext*, *mouselink*, or
    *mouseimag* is specified, based on a recipe of Andrew Dalke from
    http://www.dalkescientific.com/writings/diary/archive/2005/04/24/interactive_html.html

    """
import random
import hashlib
import matplotlib.pyplot as plt
import numpy as np # only needed for missing data with mouseiness
# initialize tiers/wefts
Nweft = len(labels)
lenS = 0.2
gapT = 0.04
positions = range(-1, -1 * Nweft - 1, -1)
posnS = []
for weft in range(Nweft):
posnS.extend([positions[weft] + lenS, positions[weft] - lenS, None])
posnT = []
for weft in range(Nweft - 1):
posnT.extend([positions[weft] - lenS - gapT, positions[weft + 1] + lenS + gapT, None])
posnM = []
# initialize plot
fht = Nweft * 0.8
#fig, ax = plt.subplots(figsize=(12, fht))
fig, ax = plt.subplots(figsize=(11, fht))
plt.subplots_adjust(left=0.01, right=0.99, hspace=0.3)
plt.xlim([-xlimit, xlimit])
plt.ylim([-1 * Nweft - 1, 0])
plt.yticks([])
ax.set_frame_on(False)
if labeled:
ax.set_xticks([-0.5 * xlimit, -0.25 * xlimit, 0.0, 0.25 * xlimit, 0.5 * xlimit])
else:
ax.set_xticks([])
for tick in ax.xaxis.get_major_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
# label plot and tiers
if labeled:
ax.text(-0.9 * xlimit, -0.25, title,
verticalalignment='bottom', horizontalalignment='left',
family='Times New Roman', weight='bold', fontsize=12)
for weft in labels:
ax.text(-0.9 * xlimit, -(1.2 + labels.index(weft)), weft,
verticalalignment='bottom', horizontalalignment='left',
family='Times New Roman', weight='bold', fontsize=18)
# plot reaction errors and threads
for rxn in data:
# preparation
xvals = rxn['data']
clr = segment_color(color, rxn['color'] if 'color' in rxn else None)
slat = []
for weft in range(Nweft):
slat.extend([xvals[weft], xvals[weft], None])
thread = []
for weft in range(Nweft - 1):
thread.extend([xvals[weft], xvals[weft + 1], None])
# plotting
if Nweft == 1:
ax.plot(slat, posnS, '|', color=clr, markersize=20.0, mew=1.5, solid_capstyle='round')
else:
ax.plot(slat, posnS, color=clr, linewidth=1.0, solid_capstyle='round')
ax.plot(thread, posnT, color=clr, linewidth=0.5, solid_capstyle='round', alpha=0.3)
# converting into screen coordinates for image map
# block not working for py3 or up-to-date mpl. better ways for html image map nowadays
#npxvals = [np.nan if val is None else val for val in xvals]
#xyscreen = ax.transData.transform(zip(npxvals, positions))
#xscreen, yscreen = zip(*xyscreen)
#posnM.extend(zip([rxn['db']] * Nweft, [rxn['sys']] * Nweft,
# npxvals, [rxn['show']] * Nweft, xscreen, yscreen))
# labeling
if not(mousetext or mouselink or mouseimag):
if labeled and len(data) < 200:
try:
toplblposn = next(item for item in xvals if item is not None)
botlblposn = next(item for item in reversed(xvals) if item is not None)
except StopIteration:
pass
else:
ax.text(toplblposn, -0.75 + 0.6 * random.random(), rxn['sys'],
verticalalignment='bottom', horizontalalignment='center',
family='Times New Roman', fontsize=8)
ax.text(botlblposn, -1 * Nweft - 0.75 + 0.6 * random.random(), rxn['sys'],
verticalalignment='bottom', horizontalalignment='center',
family='Times New Roman', fontsize=8)
# plot trimmings
if mae is not None:
ax.plot([-x for x in mae], positions, 's', color='black')
if labeled:
if mape is not None: # equivalent to MAE for a 10 kcal/mol IE
ax.plot([0.025 * x for x in mape], positions, 'o', color='black')
plt.axvline(0, color='#cccc00')
# save and show
pltuid = title + '_' + ('lbld' if labeled else 'bare') + '_' + hashlib.sha1((title + repr(labels) + repr(xlimit)).encode()).hexdigest()
pltfile = expand_saveas(saveas, pltuid, def_prefix='thread_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight')
files_saved[ext.lower()] = savefile
if view:
plt.show()
if not (mousetext or mouselink or mouseimag):
plt.close()
return files_saved, None
else:
dpi = 80
img_width = fig.get_figwidth() * dpi
img_height = fig.get_figheight() * dpi
htmlcode = """<SCRIPT>\n"""
htmlcode += """function mouseshow(db, rxn, val, show) {\n"""
if mousetext or mouselink:
htmlcode += """ var cid = document.getElementById("cid");\n"""
if mousetext:
htmlcode += """ cid.innerHTML = %s;\n""" % (mousetext)
if mouselink:
htmlcode += """ cid.href = %s;\n""" % (mouselink)
if mouseimag:
htmlcode += """ var cmpd_img = document.getElementById("cmpd_img");\n"""
htmlcode += """ cmpd_img.src = %s;\n""" % (mouseimag)
htmlcode += """}\n"""
htmlcode += """</SCRIPT>\n"""
if mousediv:
htmlcode += """%s\n""" % (mousediv[0])
if mousetitle:
htmlcode += """%s <BR>""" % (mousetitle)
htmlcode += """<h4>Mouseover</h4><a id="cid"></a><br>\n"""
if mouseimag:
htmlcode += """<div class="text-center">"""
htmlcode += """<IMG ID="cmpd_img" WIDTH="%d" HEIGHT="%d">\n""" % (200, 160)
htmlcode += """</div>"""
if mousediv:
htmlcode += """%s\n""" % (mousediv[1])
#htmlcode += """<IMG SRC="%s" ismap usemap="#points" WIDTH="%d" HEIGHT="%d">\n""" % \
# (pltfile + '.png', img_width, img_height)
htmlcode += """<IMG SRC="%s" ismap usemap="#points" WIDTH="%d">\n""" % \
(pltfile + '.png', img_width)
htmlcode += """<MAP name="points">\n"""
# generating html image map code
# points sorted to avoid overlapping map areas that can overwhelm html for SSI
# y=0 on top for html and on bottom for mpl, so flip the numbers
posnM.sort(key=lambda tup: tup[2])
posnM.sort(key=lambda tup: tup[3])
last = (0, 0)
for dbse, rxn, val, show, x, y in posnM:
            if val is None or (isinstance(val, float) and np.isnan(val)):
                continue
now = (int(x), int(y))
if now == last:
htmlcode += """<!-- map overlap! %s-%s %+.2f skipped -->\n""" % (dbse, rxn, val)
else:
htmlcode += """<AREA shape="rect" coords="%d,%d,%d,%d" onmouseover="javascript:mouseshow('%s', '%s', '%+.2f', '%s');">\n""" % \
(x - 2, img_height - y - 20,
x + 2, img_height - y + 20,
dbse, rxn, val, show)
last = now
htmlcode += """</MAP>\n"""
plt.close()
return files_saved, htmlcode
def ternary(sapt, title='', labeled=True, view=True,
saveas=None, relpath=False, graphicsformat=['pdf']):
    """Takes array of arrays *sapt* in form [elst, indc, disp] and builds formatted
    two-triangle ternary diagrams, either fully readable or dots-only depending
    on *labeled*. Saves in formats *graphicsformat*.

    """
import hashlib
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib.path import Path
import matplotlib.patches as patches
# initialize plot
fig, ax = plt.subplots(figsize=(6, 3.6))
plt.xlim([-0.75, 1.25])
plt.ylim([-0.18, 1.02])
plt.xticks([])
plt.yticks([])
ax.set_aspect('equal')
if labeled:
# form and color ternary triangles
codes = [Path.MOVETO, Path.LINETO, Path.LINETO, Path.CLOSEPOLY]
pathPos = Path([(0., 0.), (1., 0.), (0.5, 0.866), (0., 0.)], codes)
pathNeg = Path([(0., 0.), (-0.5, 0.866), (0.5, 0.866), (0., 0.)], codes)
ax.add_patch(patches.PathPatch(pathPos, facecolor='white', lw=2))
ax.add_patch(patches.PathPatch(pathNeg, facecolor='#fff5ee', lw=2))
# form and color HB/MX/DD dividing lines
ax.plot([0.667, 0.5], [0., 0.866], color='#eeb4b4', lw=0.5)
ax.plot([-0.333, 0.5], [0.577, 0.866], color='#eeb4b4', lw=0.5)
ax.plot([0.333, 0.5], [0., 0.866], color='#7ec0ee', lw=0.5)
ax.plot([-0.167, 0.5], [0.289, 0.866], color='#7ec0ee', lw=0.5)
# label corners
ax.text(1.0, -0.15, u'Elst (\u2212)',
verticalalignment='bottom', horizontalalignment='center',
family='Times New Roman', weight='bold', fontsize=18)
ax.text(0.5, 0.9, u'Ind (\u2212)',
verticalalignment='bottom', horizontalalignment='center',
family='Times New Roman', weight='bold', fontsize=18)
ax.text(0.0, -0.15, u'Disp (\u2212)',
verticalalignment='bottom', horizontalalignment='center',
family='Times New Roman', weight='bold', fontsize=18)
ax.text(-0.5, 0.9, u'Elst (+)',
verticalalignment='bottom', horizontalalignment='center',
family='Times New Roman', weight='bold', fontsize=18)
xvals = []
yvals = []
cvals = []
    for elst, indc, disp in sapt:
        # calc ternary posn and color
        total = abs(elst) + abs(indc) + abs(disp)
        Ftop = abs(indc) / total
        Fright = abs(elst) / total
        xdot = 0.5 * Ftop + Fright
        ydot = 0.866 * Ftop
        cdot = 0.5 + (xdot - 0.5) / (1. - Ftop)
        if elst > 0.:
            xdot = 0.5 * (Ftop - Fright)
            ydot = 0.866 * (Ftop + Fright)
        #print(elst, indc, disp, '', xdot, ydot, cdot)
xvals.append(xdot)
yvals.append(ydot)
cvals.append(cdot)
sc = ax.scatter(xvals, yvals, c=cvals, s=15, marker="o", \
cmap=mpl.cm.jet, edgecolor='none', vmin=0, vmax=1, zorder=10)
# remove figure outline
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# save and show
pltuid = title + '_' + ('lbld' if labeled else 'bare') + '_' + hashlib.sha1((title + repr(sapt)).encode()).hexdigest()
pltfile = expand_saveas(saveas, pltuid, def_prefix='tern_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
        plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight',
                    dpi=450, edgecolor='none', pad_inches=0.0)
files_saved[ext.lower()] = savefile
if view:
plt.show()
plt.close()
return files_saved
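# Example for ternary() above (hedged; the [elst, indc, disp] triples are
# hypothetical SAPT components, one attractive- and one repulsive-elst case):
#     ternary([[-5.2, -1.1, -2.4], [1.3, -0.8, -3.5]],
#             title='demo_tern', view=False)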
#def thread_mouseover_web(pltfile, dbid, dbname, xmin, xmax, mcdats, labels, titles):
# """Saves a plot with name *pltfile* with a slat representation of
# the modelchems errors in *mcdat*. Mouseover shows geometry and error
# from *labels* based on recipe of Andrew Dalke from
# http://www.dalkescientific.com/writings/diary/archive/2005/04/24/interactive_html.html
#
# """
# from matplotlib.backends.backend_agg import FigureCanvasAgg
# import matplotlib
# import sapt_colors
#
# cmpd_width = 200
# cmpd_height = 160
#
# nplots = len(mcdats)
# fht = nplots * 0.8
# fht = nplots * 0.8 * 1.4
# fig = matplotlib.figure.Figure(figsize=(12.0, fht))
# fig.subplots_adjust(left=0.01, right=0.99, hspace=0.3, top=0.8, bottom=0.2)
# img_width = fig.get_figwidth() * 80
# img_height = fig.get_figheight() * 80
#
# htmlcode = """
#<SCRIPT>
#function mouseandshow(name, id, db, dbname) {
# var cid = document.getElementById("cid");
# cid.innerHTML = name;
# cid.href = "fragmentviewer.py?name=" + id + "&dataset=" + db;
# var cmpd_img = document.getElementById("cmpd_img");
# cmpd_img.src = dbname + "/dimers/" + id + ".png";
#}
#</SCRIPT>
#
#Distribution of Fragment Errors in Interaction Energy (kcal/mol)<BR>
#Mouseover:<BR><a id="cid"></a><br>
#<IMG SRC="scratch/%s" ismap usemap="#points" WIDTH="%d" HEIGHT="%d">
#<IMG ID="cmpd_img" WIDTH="%d" HEIGHT="%d">
#<MAP name="points">
#""" % (pltfile, img_width, img_height, cmpd_width, cmpd_height)
#
# for item in range(nplots):
# print '<br><br><br><br><br><br>'
# mcdat = mcdats[item]
# label = labels[item]
# tttle = titles[item]
#
# erdat = np.array(mcdat)
# # No masked_array because interferes with html map
# #erdat = np.ma.masked_array(mcdat, mask=mask)
# yvals = np.ones(len(mcdat))
# y = np.array([sapt_colors.sapt_colors[dbname][i] for i in label])
#
# ax = fig.add_subplot(nplots, 1, item + 1)
# sc = ax.scatter(erdat, yvals, c=y, s=3000, marker="|", cmap=matplotlib.cm.jet, vmin=0, vmax=1)
# ax.set_title(tttle, fontsize=8)
# ax.set_yticks([])
# lp = ax.plot([0, 0], [0.9, 1.1], color='#cccc00', lw=2)
# ax.set_ylim([0.95, 1.05])
# ax.text(xmin + 0.3, 1.0, stats(erdat), fontsize=7, family='monospace', verticalalignment='center')
# if item + 1 == nplots:
# ax.set_xticks([-12.0, -8.0, -4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0, 8.0, 12.0])
# for tick in ax.xaxis.get_major_ticks():
# tick.tick1line.set_markersize(0)
# tick.tick2line.set_markersize(0)
# else:
# ax.set_xticks([])
# ax.set_frame_on(False)
# ax.set_xlim([xmin, xmax])
#
# # Convert the data set points into screen space coordinates
# #xyscreencoords = ax.transData.transform(zip(erdat, yvals))
# xyscreencoords = ax.transData.transform(zip(erdat, yvals))
# xcoords, ycoords = zip(*xyscreencoords)
#
# # HTML image coordinates have y=0 on the top. Matplotlib
# # has y=0 on the bottom. We'll need to flip the numbers
# for cid, x, y, er in zip(label, xcoords, ycoords, erdat):
# htmlcode += """<AREA shape="rect" coords="%d,%d,%d,%d" onmouseover="javascript:mouseandshow('%s %+.2f', '%s', %s, '%s');">\n""" % \
# (x - 2, img_height - y - 20, x + 2, img_height - y + 20, cid, er, cid, dbid, dbname)
#
# htmlcode += "</MAP>\n"
# canvas = FigureCanvasAgg(fig)
# canvas.print_figure('scratch/' + title, dpi=80, transparent=True)
#
# #plt.savefig('mplflat_' + title + '.pdf', bbox_inches='tight', transparent=True, format='PDF')
# #plt.savefig(os.environ['HOME'] + os.sep + 'mplflat_' + title + '.pdf', bbox_inches='tight', transparent=T rue, format='PDF')
#
# return htmlcode
def composition_tile(db, aa1, aa2):
"""Takes dictionary *db* of label, error pairs and amino acids *aa1*
and *aa2* and returns a square array of all errors for that amino
acid pair, buffered by zeros.
"""
import re
import numpy as np
    bfdbpattern = re.compile(r"\d\d\d([A-Z][A-Z][A-Z])-\d\d\d([A-Z][A-Z][A-Z])-\d")
tiles = []
for key, val in db.items():
        bfdbname = bfdbpattern.match(key)
        if bfdbname is None:
            # skip labels that aren't BFDb-style
            continue
        if (bfdbname.group(1) == aa1 and bfdbname.group(2) == aa2) or \
           (bfdbname.group(2) == aa1 and bfdbname.group(1) == aa2):
tiles.append(val)
if not tiles:
# fill in background when no data. only sensible for neutral center colormaps
tiles = [0]
dim = int(np.ceil(np.sqrt(len(tiles))))
pad = dim * dim - len(tiles)
tiles += [0] * pad
return np.reshape(np.array(tiles), (dim, dim))
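# Worked example for composition_tile() above (hypothetical BFDb-style keys):
# the ALA/GLY pair matches two entries, so dim = ceil(sqrt(2)) = 2 and the
# 2x2 tile holds 0.2 and -0.1 padded with two zeros:
#     composition_tile({'001ALA-002GLY-1': 0.2, '003GLY-004ALA-1': -0.1,
#                       '005PHE-006TRP-1': 1.5}, 'ALA', 'GLY')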
def iowa(mcdat, mclbl, title='', xtitle='', xlimit=2.0, view=True,
saveas=None, relpath=False, graphicsformat=['pdf']):
    """Saves a plot (named from *saveas* and *title*) with an Iowa
    representation of the model chemistry errors in *mcdat* for
    BBI/SSI-style labels *mclbl*.

    """
import numpy as np
import hashlib
import matplotlib
import matplotlib.pyplot as plt
aa = ['ARG', 'HIE', 'LYS', 'ASP', 'GLU', 'SER', 'THR', 'ASN', 'GLN', 'CYS', 'MET', 'GLY', 'ALA', 'VAL', 'ILE', 'LEU', 'PRO', 'PHE', 'TYR', 'TRP']
#aa = ['ILE', 'LEU', 'ASP', 'GLU', 'PHE']
err = dict(zip(mclbl, mcdat))
# handle for frame, overall axis
fig, axt = plt.subplots(figsize=(6, 6))
#axt.set_xticks([]) # for quick nolabel, whiteback
#axt.set_yticks([]) # for quick nolabel, whiteback
axt.set_xticks(np.arange(len(aa)) + 0.3, minor=False)
axt.set_yticks(np.arange(len(aa)) + 0.3, minor=False)
axt.invert_yaxis()
axt.xaxis.tick_top() # comment for quick nolabel, whiteback
axt.set_xticklabels(aa, minor=False, rotation=60, size='small') # comment for quick nolabel, whiteback
axt.set_yticklabels(aa, minor=False, size='small') # comment for quick nolabel, whiteback
axt.xaxis.set_tick_params(width=0, length=0)
axt.yaxis.set_tick_params(width=0, length=0)
#axt.set_title('%s' % (title), fontsize=16, verticalalignment='bottom')
#axt.text(10.0, -1.5, title, horizontalalignment='center', fontsize=16)
# nill spacing between 20x20 heatmaps
plt.subplots_adjust(hspace=0.001, wspace=0.001)
index = 1
for aa1 in aa:
for aa2 in aa:
cb = composition_tile(err, aa1, aa2)
            ax = fig.add_subplot(len(aa), len(aa), index)
heatmap = ax.pcolor(cb, vmin=-xlimit, vmax=xlimit, cmap=plt.cm.PRGn)
ax.set_xticks([])
ax.set_yticks([])
index += 1
#plt.title(title)
axt.axvline(x=4.8, linewidth=5, color='k')
axt.axvline(x=8.75, linewidth=5, color='k')
axt.axvline(x=11.6, linewidth=5, color='k')
axt.axhline(y=4.8, linewidth=5, color='k')
axt.axhline(y=8.75, linewidth=5, color='k')
axt.axhline(y=11.6, linewidth=5, color='k')
axt.set_zorder(100)
# save and show
pltuid = title + '_' + hashlib.sha1((title + str(xlimit)).encode()).hexdigest()
pltfile = expand_saveas(saveas, pltuid, def_prefix='iowa_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight')
#plt.savefig(savefile, transparent=False, format=ext, bbox_inches='tight') # for quick nolabel, whiteback
files_saved[ext.lower()] = savefile
if view:
plt.show()
plt.close()
return files_saved
def liliowa(mcdat, title='', xlimit=2.0, view=True,
saveas=None, relpath=False, graphicsformat=['pdf']):
    """Saves a plot with a square heatmap representation of the values in
    *mcdat*, zero-padded and colored on a symmetric scale *xlimit* about zero.

    """
import numpy as np
import hashlib
import matplotlib
import matplotlib.pyplot as plt
# handle for frame, overall axis
fig, axt = plt.subplots(figsize=(1, 1))
axt.set_xticks([])
axt.set_yticks([])
axt.invert_yaxis()
axt.xaxis.set_tick_params(width=0, length=0)
axt.yaxis.set_tick_params(width=0, length=0)
axt.set_aspect('equal')
# remove figure outline
axt.spines['top'].set_visible(False)
axt.spines['right'].set_visible(False)
axt.spines['bottom'].set_visible(False)
axt.spines['left'].set_visible(False)
    tiles = list(mcdat)  # copy so the padding below doesn't mutate the caller's list
    dim = int(np.ceil(np.sqrt(len(tiles))))
    pad = dim * dim - len(tiles)
    tiles += [0] * pad
cb = np.reshape(np.array(tiles), (dim, dim))
heatmap = axt.pcolor(cb, vmin=-xlimit, vmax=xlimit, cmap=plt.cm.PRGn)
# save and show
pltuid = title + '_' + hashlib.sha1((title + str(xlimit)).encode()).hexdigest()
pltfile = expand_saveas(saveas, pltuid, def_prefix='liliowa_', relpath=relpath)
files_saved = {}
for ext in graphicsformat:
savefile = pltfile + '.' + ext.lower()
        plt.savefig(savefile, transparent=True, format=ext, bbox_inches='tight',
                    pad_inches=0.0)
files_saved[ext.lower()] = savefile
if view:
plt.show()
plt.close()
return files_saved
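# Example for liliowa() above (hedged; 14 values zero-pad up to a 4x4 tile):
#     liliowa([0.1, -0.4, 0.9, -1.3, 0.2, 0.0, 0.5, -0.2,
#              1.1, -0.7, 0.3, 0.6, -0.1, 0.4], title='demo_lili', view=False)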
if __name__ == "__main__":
merge_dats = [
{'show':'a', 'db':'HSG', 'sys':'1', 'data':[0.3508, 0.1234, 0.0364, 0.0731, 0.0388]},
{'show':'b', 'db':'HSG', 'sys':'3', 'data':[0.2036, -0.0736, -0.1650, -0.1380, -0.1806]},
#{'show':'', 'db':'S22', 'sys':'14', 'data':[np.nan, -3.2144, np.nan, np.nan, np.nan]},
{'show':'c', 'db':'S22', 'sys':'14', 'data':[None, -3.2144, None, None, None]},
{'show':'d', 'db':'S22', 'sys':'15', 'data':[-1.5090, -2.5263, -2.9452, -2.8633, -3.1059]},
{'show':'e', 'db':'S22', 'sys':'22', 'data':[0.3046, -0.2632, -0.5070, -0.4925, -0.6359]}]
threads(merge_dats, labels=['d', 't', 'dt', 'q', 'tq'], color='sapt',
title='MP2-CPa[]z', mae=[0.25, 0.5, 0.5, 0.3, 1.0], mape=[20.1, 25, 15, 5.5, 3.6])
more_dats = [
{'mc':'MP2-CP-adz', 'data':[1.0, 0.8, 1.4, 1.6]},
{'mc':'MP2-CP-adtz', 'data':[0.6, 0.2, 0.4, 0.6]},
None,
{'mc':'MP2-CP-adzagain', 'data':[1.0, 0.8, 1.4, 1.6]}]
bars(more_dats, title='asdf')
single_dats = [
{'dbse':'HSG', 'sys':'1', 'data':[0.3508]},
{'dbse':'HSG', 'sys':'3', 'data':[0.2036]},
{'dbse':'S22', 'sys':'14', 'data':[None]},
{'dbse':'S22', 'sys':'15', 'data':[-1.5090]},
{'dbse':'S22', 'sys':'22', 'data':[0.3046]}]
#flat(single_dats, color='sapt', title='fg_MP2_adz', mae=0.25, mape=20.1)
flat([{'sys': '1', 'color': 0.6933450559423702, 'data': [0.45730000000000004]}, {'sys': '2', 'color': 0.7627027688599753, 'data': [0.6231999999999998]}, {'sys': '3', 'color': 0.7579958735528617, 'data': [2.7624999999999993]}, {'sys': '4', 'color': 0.7560883254421639, 'data': [2.108600000000001]}, {'sys': '5', 'color': 0.7515161912065955, 'data': [2.2304999999999993]}, {'sys': '6', 'color': 0.7235223893438876, 'data': [1.3782000000000014]}, {'sys': '7', 'color': 0.7120099024225569, 'data': [1.9519000000000002]}, {'sys': '8', 'color': 0.13721565059144678, 'data': [0.13670000000000004]}, {'sys': '9', 'color': 0.3087395095814767, 'data': [0.2966]}, {'sys': '10', 'color': 0.25493207637105103, 'data': [-0.020199999999999996]}, {'sys': '11', 'color': 0.24093814608979347, 'data': [-1.5949999999999998]}, {'sys': '12', 'color': 0.3304746631959777, 'data': [-1.7422000000000004]}, {'sys': '13', 'color': 0.4156050644764822, 'data': [0.0011999999999989797]}, {'sys': '14', 'color': 0.2667207259626991, 'data': [-2.6083999999999996]}, {'sys': '15', 'color': 0.3767053567641695, 'data': [-1.5090000000000003]}, {'sys': '16', 'color': 0.5572641509433963, 'data': [0.10749999999999993]}, {'sys': '17', 'color': 0.4788598239641578, 'data': [0.29669999999999996]}, {'sys': '18', 'color': 0.3799031371351281, 'data': [0.10209999999999964]}, {'sys': '19', 'color': 0.5053227185999078, 'data': [0.16610000000000014]}, {'sys': '20', 'color': 0.2967660584483015, 'data': [-0.37739999999999974]}, {'sys': '21', 'color': 0.38836460733750316, 'data': [-0.4712000000000005]}, {'sys': '22', 'color': 0.5585849893078809, 'data': [0.30460000000000065]}, {'sys': 'BzBz_PD36-1.8', 'color': 0.1383351040559965, 'data': [-1.1921]}, {'sys': 'BzBz_PD34-2.0', 'color': 0.23086034843049832, 'data': [-1.367]}, {'sys': 'BzBz_T-5.2', 'color': 0.254318060864096, 'data': [-0.32230000000000025]}, {'sys': 'BzBz_T-5.1', 'color': 0.26598486566733337, 'data': [-0.3428]}, {'sys': 'BzBz_T-5.0', 'color': 0.28011258347610224, 'data': 
[-0.36060000000000025]}, {'sys': 'PyPy_S2-3.9', 'color': 0.14520332101084785, 'data': [-0.9853000000000001]}, {'sys': 'PyPy_S2-3.8', 'color': 0.1690757103699542, 'data': [-1.0932]}, {'sys': 'PyPy_S2-3.5', 'color': 0.25615734567417053, 'data': [-1.4617]}, {'sys': 'PyPy_S2-3.7', 'color': 0.19566550224566906, 'data': [-1.2103999999999995]}, {'sys': 'PyPy_S2-3.6', 'color': 0.22476748600170826, 'data': [-1.3333]}, {'sys': 'BzBz_PD32-2.0', 'color': 0.31605681987208084, 'data': [-1.6637]}, {'sys': 'BzBz_T-4.8', 'color': 0.31533827331543723, 'data': [-0.38759999999999994]}, {'sys': 'BzBz_T-4.9', 'color': 0.2966146678069063, 'data': [-0.3759999999999999]}, {'sys': 'BzH2S-3.6', 'color': 0.38284814928043304, 'data': [-0.1886000000000001]}, {'sys': 'BzBz_PD32-1.7', 'color': 0.3128835191478639, 'data': [-1.8703999999999998]}, {'sys': 'BzMe-3.8', 'color': 0.24117892478245323, 'data': [-0.034399999999999986]}, {'sys': 'BzMe-3.9', 'color': 0.22230903086047088, 'data': [-0.046499999999999986]}, {'sys': 'BzH2S-3.7', 'color': 0.36724255203373696, 'data': [-0.21039999999999992]}, {'sys': 'BzMe-3.6', 'color': 0.284901522674611, 'data': [0.007099999999999884]}, {'sys': 'BzMe-3.7', 'color': 0.2621086166558813, 'data': [-0.01770000000000005]}, {'sys': 'BzBz_PD32-1.9', 'color': 0.314711251903219, 'data': [-1.7353999999999998]}, {'sys': 'BzBz_PD32-1.8', 'color': 0.3136181753200793, 'data': [-1.8039999999999998]}, {'sys': 'BzH2S-3.8', 'color': 0.3542001591399945, 'data': [-0.22230000000000016]}, {'sys': 'BzBz_PD36-1.9', 'color': 0.14128552184232473, 'data': [-1.1517]}, {'sys': 'BzBz_S-3.7', 'color': 0.08862098445220466, 'data': [-1.3414]}, {'sys': 'BzH2S-4.0', 'color': 0.33637540012259076, 'data': [-0.2265999999999999]}, {'sys': 'BzBz_PD36-1.5', 'color': 0.13203548045236127, 'data': [-1.3035]}, {'sys': 'BzBz_S-3.8', 'color': 0.0335358832178858, 'data': [-1.2022]}, {'sys': 'BzBz_S-3.9', 'color': 0.021704594689389095, 'data': [-1.0747]}, {'sys': 'PyPy_T3-5.1', 'color': 0.3207725129126432, 
'data': [-0.2958000000000003]}, {'sys': 'PyPy_T3-5.0', 'color': 0.3254925304351165, 'data': [-0.30710000000000015]}, {'sys': 'BzBz_PD36-1.7', 'color': 0.13577087141986593, 'data': [-1.2333000000000003]}, {'sys': 'PyPy_T3-4.8', 'color': 0.3443704059902452, 'data': [-0.32010000000000005]}, {'sys': 'PyPy_T3-4.9', 'color': 0.3333442013628509, 'data': [-0.3158999999999996]}, {'sys': 'PyPy_T3-4.7', 'color': 0.35854000505665756, 'data': [-0.31530000000000014]}, {'sys': 'BzBz_PD36-1.6', 'color': 0.13364651314909243, 'data': [-1.2705000000000002]}, {'sys': 'BzMe-4.0', 'color': 0.20560117919562013, 'data': [-0.05389999999999984]}, {'sys': 'MeMe-3.6', 'color': 0.16934865900383142, 'data': [0.18420000000000003]}, {'sys': 'MeMe-3.7', 'color': 0.1422332591197123, 'data': [0.14680000000000004]}, {'sys': 'MeMe-3.4', 'color': 0.23032794290360467, 'data': [0.29279999999999995]}, {'sys': 'MeMe-3.5', 'color': 0.19879551978386897, 'data': [0.23260000000000003]}, {'sys': 'MeMe-3.8', 'color': 0.11744404936205816, 'data': [0.11680000000000001]}, {'sys': 'BzBz_PD34-1.7', 'color': 0.22537382457222138, 'data': [-1.5286999999999997]}, {'sys': 'BzBz_PD34-1.6', 'color': 0.22434088042760192, 'data': [-1.5754000000000001]}, {'sys': 'BzBz_PD32-2.2', 'color': 0.3189891685300601, 'data': [-1.5093999999999999]}, {'sys': 'BzBz_S-4.1', 'color': 0.10884135031532088, 'data': [-0.8547000000000002]}, {'sys': 'BzBz_S-4.0', 'color': 0.06911476296747143, 'data': [-0.9590000000000001]}, {'sys': 'BzBz_PD34-1.8', 'color': 0.22685419834431494, 'data': [-1.476]}, {'sys': 'BzBz_PD34-1.9', 'color': 0.2287079261672095, 'data': [-1.4223999999999997]}, {'sys': 'BzH2S-3.9', 'color': 0.3439077006047999, 'data': [-0.22739999999999982]}, {'sys': 'FaNNFaNN-4.1', 'color': 0.7512716174974567, 'data': [1.7188999999999997]}, {'sys': 'FaNNFaNN-4.0', 'color': 0.7531388297328865, 'data': [1.9555000000000007]}, {'sys': 'FaNNFaNN-4.3', 'color': 0.7478064149182957, 'data': [1.2514000000000003]}, {'sys': 'FaNNFaNN-4.2', 'color': 
0.7493794908838113, 'data': [1.4758000000000013]}, {'sys': 'FaOOFaON-4.0', 'color': 0.7589275618320565, 'data': [2.0586]}, {'sys': 'FaOOFaON-3.7', 'color': 0.7619465815742713, 'data': [3.3492999999999995]}, {'sys': 'FaOOFaON-3.9', 'color': 0.7593958895631474, 'data': [2.4471000000000007]}, {'sys': 'FaOOFaON-3.8', 'color': 0.7605108059280967, 'data': [2.8793999999999986]}, {'sys': 'FaONFaON-4.1', 'color': 0.7577459277014137, 'data': [1.8697999999999997]}, {'sys': 'FaOOFaON-3.6', 'color': 0.7633298028299997, 'data': [3.847599999999998]}, {'sys': 'FaNNFaNN-3.9', 'color': 0.7548200901251662, 'data': [2.2089]}, {'sys': 'FaONFaON-3.8', 'color': 0.7582294603551467, 'data': [2.967699999999999]}, {'sys': 'FaONFaON-3.9', 'color': 0.7575285282217349, 'data': [2.578900000000001]}, {'sys': 'FaONFaON-4.2', 'color': 0.7594549221042256, 'data': [1.5579999999999998]}, {'sys': 'FaOOFaNN-3.6', 'color': 0.7661655616885379, 'data': [3.701599999999999]}, {'sys': 'FaOOFaNN-3.7', 'color': 0.7671068376007428, 'data': [3.156500000000001]}, {'sys': 'FaOOFaNN-3.8', 'color': 0.766947626251711, 'data': [2.720700000000001]}, {'sys': 'FaONFaNN-3.9', 'color': 0.7569836601896789, 'data': [2.4281000000000006]}, {'sys': 'FaONFaNN-3.8', 'color': 0.758024548462959, 'data': [2.7561999999999998]}, {'sys': 'FaOOFaOO-3.6', 'color': 0.7623422640217077, 'data': [3.851800000000001]}, {'sys': 'FaOOFaOO-3.7', 'color': 0.7597430792159379, 'data': [3.2754999999999974]}, {'sys': 'FaOOFaOO-3.4', 'color': 0.7672554950739594, 'data': [5.193299999999999]}, {'sys': 'FaOOFaOO-3.5', 'color': 0.764908813123865, 'data': [4.491900000000001]}, {'sys': 'FaONFaNN-4.2', 'color': 0.7549212942233738, 'data': [1.534699999999999]}, {'sys': 'FaONFaNN-4.0', 'color': 0.7559404310956357, 'data': [2.1133000000000024]}, {'sys': 'FaONFaNN-4.1', 'color': 0.7551574698775625, 'data': [1.813900000000002]}, {'sys': 'FaONFaON-4.0', 'color': 0.7572064604483282, 'data': [2.2113999999999994]}, {'sys': 'FaOOFaOO-3.8', 'color': 0.7573810956831686, 
'data': [2.7634000000000007]}, {'sys': '1', 'color': 0.2784121805328983, 'data': [0.3508]}, {'sys': '2', 'color': 0.22013842798900166, 'data': [-0.034600000000000186]}, {'sys': '3', 'color': 0.12832496088281312, 'data': [0.20360000000000023]}, {'sys': '4', 'color': 0.6993695033529733, 'data': [1.9092000000000002]}, {'sys': '5', 'color': 0.7371192790053749, 'data': [1.656600000000001]}, {'sys': '6', 'color': 0.5367033190796172, 'data': [0.27970000000000006]}, {'sys': '7', 'color': 0.3014220615964802, 'data': [0.32289999999999974]}, {'sys': '8', 'color': 0.01605867807629261, 'data': [0.12199999999999994]}, {'sys': '9', 'color': 0.6106300539083558, 'data': [0.3075999999999999]}, {'sys': '10', 'color': 0.6146680031333968, 'data': [0.6436000000000002]}, {'sys': '11', 'color': 0.6139747851721759, 'data': [0.4551999999999996]}, {'sys': '12', 'color': 0.32122739401126593, 'data': [0.44260000000000005]}, {'sys': '13', 'color': 0.24678148099136055, 'data': [-0.11789999999999967]}, {'sys': '14', 'color': 0.23700950710597016, 'data': [0.42689999999999995]}, {'sys': '15', 'color': 0.23103396678138563, 'data': [0.3266]}, {'sys': '16', 'color': 0.1922070769654413, 'data': [0.0696000000000001]}, {'sys': '17', 'color': 0.19082151944747366, 'data': [0.11159999999999992]}, {'sys': '18', 'color': 0.2886200282444196, 'data': [0.4114]}, {'sys': '19', 'color': 0.23560171133945224, 'data': [-0.1392]}, {'sys': '20', 'color': 0.3268270751294533, 'data': [0.5593]}, {'sys': '21', 'color': 0.7324460869158442, 'data': [0.6806000000000001]}],
color='sapt', title='MP2-CP-adz', mae=1.21356003247, mape=24.6665886087, xlimit=4.0)
lin_dats = [-0.5, -0.4, -0.3, 0, 0.5, 0.8, 5]
lin_labs = ['008ILE-012LEU-1', '012LEU-085ASP-1', '004GLU-063LEU-2',
'011ILE-014PHE-1', '027GLU-031LEU-1', '038PHE-041ILE-1', '199LEU-202GLU-1']
iowa(lin_dats, lin_labs, title='ttl', xlimit=0.5)
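A minimal sanity check for the `iowa()` demo above, assuming only that the data and label lists must stay aligned (whatever validation `iowa` itself performs is not shown here):

```python
lin_dats = [-0.5, -0.4, -0.3, 0, 0.5, 0.8, 5]
lin_labs = ['008ILE-012LEU-1', '012LEU-085ASP-1', '004GLU-063LEU-2',
            '011ILE-014PHE-1', '027GLU-031LEU-1', '038PHE-041ILE-1', '199LEU-202GLU-1']

# one datum per residue-pair label
assert len(lin_dats) == len(lin_labs)
# with xlimit=0.5, values such as 5 lie beyond the plotted window
assert max(abs(d) for d in lin_dats) > 0.5
```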
# row-major 5x5 grid; entries below the diagonal are zero
figs = [0.22, 0.41, 0.14, 0.08, 0.47,
0, 0.38, 0.22, 0.10, 0.20,
0, 0, 0.13, 0.07, 0.25,
0, 0, 0, 0.06, 0.22,
0, 0, 0, 0, 0.69]
liliowa(figs, saveas='SSI-default-MP2-CP-aqz', xlimit=1.0)
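The `figs` list above reads as a row-flattened 5x5 matrix with zeros strictly below the diagonal; a quick structural check in plain Python (no assumption about how `liliowa` consumes it):

```python
figs = [0.22, 0.41, 0.14, 0.08, 0.47,
        0, 0.38, 0.22, 0.10, 0.20,
        0, 0, 0.13, 0.07, 0.25,
        0, 0, 0, 0.06, 0.22,
        0, 0, 0, 0, 0.69]

# rebuild the 5x5 rows from the flat list
mat = [figs[i * 5:(i + 1) * 5] for i in range(5)]
# entries strictly below the diagonal are zero, i.e. upper triangular
assert all(mat[i][j] == 0 for i in range(5) for j in range(i))
# every entry fits within the xlimit=1.0 passed to liliowa above
assert max(figs) <= 1.0
```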
disthist(lin_dats)
valerrdata = [{'color': 0.14255710779686612, 'db': 'NBC1', 'sys': 'BzBz_S-3.6', 'error': [0.027999999999999803], 'mcdata': -1.231, 'bmdata': -1.259, 'axis': 3.6}, {'color': 0.08862098445220466, 'db': 'NBC1', 'sys': 'BzBz_S-3.7', 'error': [0.02300000000000013], 'mcdata': -1.535, 'bmdata': -1.558, 'axis': 3.7}, {'color': 0.246634626511043, 'db': 'NBC1', 'sys': 'BzBz_S-3.4', 'error': [0.04200000000000001], 'mcdata': 0.189, 'bmdata': 0.147, 'axis': 3.4}, {'color': 0.19526236766857613, 'db': 'NBC1', 'sys': 'BzBz_S-3.5', 'error': [0.03500000000000003], 'mcdata': -0.689, 'bmdata': -0.724, 'axis': 3.5}, {'color': 0.3443039102164425, 'db': 'NBC1', 'sys': 'BzBz_S-3.2', 'error': [0.05999999999999961], 'mcdata': 3.522, 'bmdata': 3.462, 'axis': 3.2}, {'color': 0.29638827303466814, 'db': 'NBC1', 'sys': 'BzBz_S-3.3', 'error': [0.050999999999999934], 'mcdata': 1.535, 'bmdata': 1.484, 'axis': 3.3}, {'color': 0.42859228971962615, 'db': 'NBC1', 'sys': 'BzBz_S-6.0', 'error': [0.0020000000000000018], 'mcdata': -0.099, 'bmdata': -0.101, 'axis': 6.0}, {'color': 0.30970751839224836, 'db': 'NBC1', 'sys': 'BzBz_S-5.0', 'error': [0.0040000000000000036], 'mcdata': -0.542, 'bmdata': -0.546, 'axis': 5.0}, {'color': 0.3750832778147902, 'db': 'NBC1', 'sys': 'BzBz_S-5.5', 'error': [0.0030000000000000027], 'mcdata': -0.248, 'bmdata': -0.251, 'axis': 5.5}, {'color': 0.0335358832178858, 'db': 'NBC1', 'sys': 'BzBz_S-3.8', 'error': [0.019000000000000128], 'mcdata': -1.674, 'bmdata': -1.693, 'axis': 3.8}, {'color': 0.021704594689389095, 'db': 'NBC1', 'sys': 'BzBz_S-3.9', 'error': [0.016000000000000014], 'mcdata': -1.701, 'bmdata': -1.717, 'axis': 3.9}, {'color': 0.22096255119953187, 'db': 'NBC1', 'sys': 'BzBz_S-4.5', 'error': [0.008000000000000007], 'mcdata': -1.058, 'bmdata': -1.066, 'axis': 4.5}, {'color': 0.10884135031532088, 'db': 'NBC1', 'sys': 'BzBz_S-4.1', 'error': [0.01200000000000001], 'mcdata': -1.565, 'bmdata': -1.577, 'axis': 4.1}, {'color': 0.06911476296747143, 'db': 'NBC1', 'sys': 
'BzBz_S-4.0', 'error': [0.014000000000000012], 'mcdata': -1.655, 'bmdata': -1.669, 'axis': 4.0}, {'color': 0.14275218373289067, 'db': 'NBC1', 'sys': 'BzBz_S-4.2', 'error': [0.01100000000000012], 'mcdata': -1.448, 'bmdata': -1.459, 'axis': 4.2}, {'color': 0.4740372133275638, 'db': 'NBC1', 'sys': 'BzBz_S-6.5', 'error': [0.0010000000000000009], 'mcdata': -0.028, 'bmdata': -0.029, 'axis': 6.5}, {'color': 0.6672504378283713, 'db': 'NBC1', 'sys': 'BzBz_S-10.0', 'error': [0.0], 'mcdata': 0.018, 'bmdata': 0.018, 'axis': 10.0}]
valerr({'cat': valerrdata},
color='sapt', xtitle='Rang', title='aggh', graphicsformat=['png'])
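In the `valerrdata` records above, each `error` entry appears to equal `mcdata - bmdata` (a hypothesis from inspecting the numbers, not a documented contract of `valerr`); a consistency check over two records copied verbatim:

```python
# two records copied from valerrdata above (extra keys omitted)
records = [
    {'sys': 'BzBz_S-3.6', 'error': [0.027999999999999803],
     'mcdata': -1.231, 'bmdata': -1.259},
    {'sys': 'BzBz_S-10.0', 'error': [0.0],
     'mcdata': 0.018, 'bmdata': 0.018},
]

# error ~ model-chemistry datum minus benchmark datum
for rec in records:
    assert abs(rec['error'][0] - (rec['mcdata'] - rec['bmdata'])) < 1e-9
```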