And note we can use NUTS directly because there's no need to infer any discrete parameters. | mcmc = MCMC(
NUTS(model_with_known_low),
**MCMC_KWARGS,
)
mcmc.run(MCMC_RNG, num_observations, true_x)
mcmc.print_summary() | notebooks/source/truncated_distributions.ipynb | pyro-ppl/numpyro | apache-2.0 |
Removing the truncation | model_without_truncation = numpyro.handlers.condition(
truncated_poisson_model,
{"low": 0},
)
pred = Predictive(model_without_truncation, posterior_samples=mcmc.get_samples())
pred_samples = pred(PRED_RNG, num_observations)
thinned_samples = pred_samples["x"][::500]
discrete_distplot(thinned_samples.copy()); | notebooks/source/truncated_distributions.ipynb | pyro-ppl/numpyro | apache-2.0 |
Simple Exponential Smoothing
Let's use Simple Exponential Smoothing to forecast the oil data below. | ax = oildata.plot()
ax.set_xlabel("Year")
ax.set_ylabel("Oil (millions of tonnes)")
print("Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.") | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Here we run three variants of simple exponential smoothing:
1. In fit1 we do not use automatic optimization but instead explicitly provide the model with the parameter $\alpha=0.2$
2. In fit2, as above, we explicitly choose $\alpha=0.6$
3. In fit3 we allow statsmodels to automatically find an optimized $\alpha$ value for us. This is the recommended approach. | fit1 = SimpleExpSmoothing(oildata, initialization_method="heuristic").fit(
smoothing_level=0.2, optimized=False
)
fcast1 = fit1.forecast(3).rename(r"$\alpha=0.2$")
fit2 = SimpleExpSmoothing(oildata, initialization_method="heuristic").fit(
smoothing_level=0.6, optimized=False
)
fcast2 = fit2.forecast(3).rename(r"$\alpha=0.6$")
fit3 = SimpleExpSmoothing(oildata, initialization_method="estimated").fit()
fcast3 = fit3.forecast(3).rename(r"$\alpha=%s$" % fit3.model.params["smoothing_level"])
plt.figure(figsize=(12, 8))
plt.plot(oildata, marker="o", color="black")
plt.plot(fit1.fittedvalues, marker="o", color="blue")
(line1,) = plt.plot(fcast1, marker="o", color="blue")
plt.plot(fit2.fittedvalues, marker="o", color="red")
(line2,) = plt.plot(fcast2, marker="o", color="red")
plt.plot(fit3.fittedvalues, marker="o", color="green")
(line3,) = plt.plot(fcast3, marker="o", color="green")
plt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name]) | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
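The recursion behind simple exponential smoothing is compact enough to sketch directly. The following is a minimal NumPy illustration of how $\alpha$ trades off recent observations against the running level; it is not the statsmodels implementation (which also estimates the initial level), and the naive initialization is an assumption made for brevity:

```python
import numpy as np

def ses_fitted(y, alpha, l0=None):
    """Simple exponential smoothing: l_t = alpha*y_t + (1-alpha)*l_{t-1}.

    Returns the one-step-ahead fitted values and the last smoothed level;
    the forecast at any horizon is flat at that last level.
    """
    y = np.asarray(y, dtype=float)
    level = y[0] if l0 is None else l0   # crude initialization
    fitted = np.empty_like(y)
    for t, obs in enumerate(y):
        fitted[t] = level                # forecast made before seeing y_t
        level = alpha * obs + (1 - alpha) * level
    return fitted, level

fitted, last_level = ses_fitted([10.0, 12.0, 11.0, 13.0], alpha=0.2)
```

With a small $\alpha$ such as 0.2 the fitted series reacts slowly to new observations; with $\alpha$ close to 1 it tracks the data almost exactly.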
Holt's Method
Let's take a look at another example.
This time we use air pollution data and Holt's method.
We will fit three examples again.
1. In fit1 we again choose not to use the optimizer and provide explicit values for $\alpha=0.8$ and $\beta=0.2$
2. In fit2 we do the same as in fit1 but use an exponential trend rather than Holt's additive trend.
3. In fit3 we use a damped version of Holt's additive model but allow the damping parameter $\phi$ to be optimized while fixing the values for $\alpha=0.8$ and $\beta=0.2$ | fit1 = Holt(air, initialization_method="estimated").fit(
smoothing_level=0.8, smoothing_trend=0.2, optimized=False
)
fcast1 = fit1.forecast(5).rename("Holt's linear trend")
fit2 = Holt(air, exponential=True, initialization_method="estimated").fit(
smoothing_level=0.8, smoothing_trend=0.2, optimized=False
)
fcast2 = fit2.forecast(5).rename("Exponential trend")
fit3 = Holt(air, damped_trend=True, initialization_method="estimated").fit(
smoothing_level=0.8, smoothing_trend=0.2
)
fcast3 = fit3.forecast(5).rename("Additive damped trend")
plt.figure(figsize=(12, 8))
plt.plot(air, marker="o", color="black")
plt.plot(fit1.fittedvalues, color="blue")
(line1,) = plt.plot(fcast1, marker="o", color="blue")
plt.plot(fit2.fittedvalues, color="red")
(line2,) = plt.plot(fcast2, marker="o", color="red")
plt.plot(fit3.fittedvalues, color="green")
(line3,) = plt.plot(fcast3, marker="o", color="green")
plt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name]) | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
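Holt's method extends the simple exponential smoothing level with a trend state. A bare-bones NumPy sketch of the additive-trend updates and the $h$-step forecast (an illustration under a naive initialization, not the statsmodels code):

```python
import numpy as np

def holt_forecast(y, alpha, beta, h):
    """Holt's additive (linear) trend method, a minimal sketch:
    l_t = alpha*y_t + (1 - alpha)*(l_{t-1} + b_{t-1})
    b_t = beta*(l_t - l_{t-1}) + (1 - beta)*b_{t-1}
    h-step-ahead forecast: l_T + h*b_T
    """
    y = np.asarray(y, dtype=float)
    level, trend = y[0], y[1] - y[0]     # crude initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + np.arange(1, h + 1) * trend

# On perfectly linear data, the forecast continues the line exactly.
fc = holt_forecast([1.0, 2.0, 3.0, 4.0], alpha=0.8, beta=0.2, h=3)
```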
Seasonally adjusted data
Let's look at some seasonally adjusted livestock data. We fit five exponential smoothing models.
The below table allows us to compare results when we use exponential versus additive and damped versus non-damped.
Note: in fit4 the parameter $\phi$ is not optimized but held fixed at $\phi=0.98$ | fit1 = SimpleExpSmoothing(livestock2, initialization_method="estimated").fit()
fit2 = Holt(livestock2, initialization_method="estimated").fit()
fit3 = Holt(livestock2, exponential=True, initialization_method="estimated").fit()
fit4 = Holt(livestock2, damped_trend=True, initialization_method="estimated").fit(
damping_trend=0.98
)
fit5 = Holt(
livestock2, exponential=True, damped_trend=True, initialization_method="estimated"
).fit()
params = [
"smoothing_level",
"smoothing_trend",
"damping_trend",
"initial_level",
"initial_trend",
]
results = pd.DataFrame(
index=[r"$\alpha$", r"$\beta$", r"$\phi$", r"$l_0$", "$b_0$", "SSE"],
columns=["SES", "Holt's", "Exponential", "Additive", "Multiplicative"],
)
results["SES"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Holt's"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Exponential"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Additive"] = [fit4.params[p] for p in params] + [fit4.sse]
results["Multiplicative"] = [fit5.params[p] for p in params] + [fit5.sse]
results | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
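The damping used in fit4 and fit5 changes only the forecast function: the $h$-step trend contribution $h\,b_T$ is replaced by $(\phi + \phi^2 + \dots + \phi^h)\,b_T$, so long-horizon forecasts level off instead of growing linearly. A small sketch of that formula (the helper name is hypothetical):

```python
import numpy as np

def damped_forecast(level, trend, phi, h):
    """h-step forecasts for an additive damped trend:
    y_hat_{T+h} = l_T + (phi + phi**2 + ... + phi**h) * b_T
    """
    damp = np.cumsum(phi ** np.arange(1, h + 1))  # phi + phi^2 + ... + phi^h
    return level + damp * trend

fc = damped_forecast(level=100.0, trend=2.0, phi=0.98, h=3)
```

Note how each successive increment is smaller than the last, which is the flattening visible in damped-trend forecasts.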
Plots of Seasonally Adjusted Data
The following plots allow us to evaluate the level and slope/trend components of the above table's fits. | for fit in [fit2, fit4]:
pd.DataFrame(np.c_[fit.level, fit.trend]).rename(
columns={0: "level", 1: "slope"}
).plot(subplots=True)
plt.show()
print(
"Figure 7.4: Level and slope components for Holt’s linear trend method and the additive damped trend method."
) | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Comparison
Here we plot a comparison of Simple Exponential Smoothing and Holt's methods for various additive, exponential and damped combinations. All of the models' parameters will be optimized by statsmodels. | fit1 = SimpleExpSmoothing(livestock2, initialization_method="estimated").fit()
fcast1 = fit1.forecast(9).rename("SES")
fit2 = Holt(livestock2, initialization_method="estimated").fit()
fcast2 = fit2.forecast(9).rename("Holt's")
fit3 = Holt(livestock2, exponential=True, initialization_method="estimated").fit()
fcast3 = fit3.forecast(9).rename("Exponential")
fit4 = Holt(livestock2, damped_trend=True, initialization_method="estimated").fit(
damping_trend=0.98
)
fcast4 = fit4.forecast(9).rename("Additive Damped")
fit5 = Holt(
livestock2, exponential=True, damped_trend=True, initialization_method="estimated"
).fit()
fcast5 = fit5.forecast(9).rename("Multiplicative Damped")
ax = livestock2.plot(color="black", marker="o", figsize=(12, 8))
livestock3.plot(ax=ax, color="black", marker="o", legend=False)
fcast1.plot(ax=ax, color="red", legend=True)
fcast2.plot(ax=ax, color="green", legend=True)
fcast3.plot(ax=ax, color="blue", legend=True)
fcast4.plot(ax=ax, color="cyan", legend=True)
fcast5.plot(ax=ax, color="magenta", legend=True)
ax.set_ylabel("Livestock, sheep in Asia (millions)")
plt.show()
print(
"Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods."
) | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Holt-Winters Seasonal
Finally we are able to run full Holt-Winters seasonal exponential smoothing, including a trend component and a seasonal component.
statsmodels allows for all the combinations, as shown in the examples below:
1. fit1 additive trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.
1. fit2 additive trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.
1. fit3 additive damped trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.
1. fit4 additive damped trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.
The plot shows the results and forecast for fit1 and fit2.
The table allows us to compare the results and parameterizations. | fit1 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="add",
use_boxcox=True,
initialization_method="estimated",
).fit()
fit2 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
use_boxcox=True,
initialization_method="estimated",
).fit()
fit3 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="add",
damped_trend=True,
use_boxcox=True,
initialization_method="estimated",
).fit()
fit4 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
damped_trend=True,
use_boxcox=True,
initialization_method="estimated",
).fit()
results = pd.DataFrame(
index=[r"$\alpha$", r"$\beta$", r"$\phi$", r"$\gamma$", r"$l_0$", "$b_0$", "SSE"]
)
params = [
"smoothing_level",
"smoothing_trend",
"damping_trend",
"smoothing_seasonal",
"initial_level",
"initial_trend",
]
results["Additive"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Multiplicative"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Additive Dam"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Multiplica Dam"] = [fit4.params[p] for p in params] + [fit4.sse]
ax = aust.plot(
figsize=(10, 6),
marker="o",
color="black",
title="Forecasts from Holt-Winters' multiplicative method",
)
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit1.fittedvalues.plot(ax=ax, style="--", color="red")
fit2.fittedvalues.plot(ax=ax, style="--", color="green")
fit1.forecast(8).rename("Holt-Winters (add-add-seasonal)").plot(
ax=ax, style="--", marker="o", color="red", legend=True
)
fit2.forecast(8).rename("Holt-Winters (add-mul-seasonal)").plot(
ax=ax, style="--", marker="o", color="green", legend=True
)
plt.show()
print(
"Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality."
)
results | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
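For reference, the additive Holt-Winters recursions behind fit1 can be sketched from scratch. This is a simplified illustration with naive initialization, no parameter optimization and no Box-Cox step, not the statsmodels implementation:

```python
import numpy as np

def holt_winters_additive(y, m, alpha, beta, gamma, h):
    """Additive Holt-Winters, a bare-bones sketch:
    l_t = alpha*(y_t - s_{t-m}) + (1 - alpha)*(l_{t-1} + b_{t-1})
    b_t = beta*(l_t - l_{t-1}) + (1 - beta)*b_{t-1}
    s_t = gamma*(y_t - l_t) + (1 - gamma)*s_{t-m}
    """
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()                              # naive initialization
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)                      # one state per period
    for t in range(m, len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t - m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season.append(gamma * (y[t] - level) + (1 - gamma) * season[t - m])
    n = len(y)
    # The forecast reuses the last m seasonal states, wrapping if h > m.
    return np.array([level + (k + 1) * trend + season[n - m + (k % m)]
                     for k in range(h)])

# On a purely periodic series the next season is reproduced exactly.
fc = holt_winters_additive([1.0, 2.0, 3.0, 4.0] * 3, m=4,
                           alpha=0.5, beta=0.1, gamma=0.3, h=4)
```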
The Internals
It is possible to get at the internals of the Exponential Smoothing models.
Here we show some tables that allow you to view side by side the original values $y_t$, the level $l_t$, the trend $b_t$, the season $s_t$ and the fitted values $\hat{y}_t$. Note that these values are only meaningful in the space of your original data if the fit is performed without a Box-Cox transformation. | fit1 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="add",
initialization_method="estimated",
).fit()
fit2 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
initialization_method="estimated",
).fit()
df = pd.DataFrame(
np.c_[aust, fit1.level, fit1.trend, fit1.season, fit1.fittedvalues],
columns=[r"$y_t$", r"$l_t$", r"$b_t$", r"$s_t$", r"$\hat{y}_t$"],
index=aust.index,
)
df.append(fit1.forecast(8).rename(r"$\hat{y}_t$").to_frame(), sort=True)
df = pd.DataFrame(
np.c_[aust, fit2.level, fit2.trend, fit2.season, fit2.fittedvalues],
columns=[r"$y_t$", r"$l_t$", r"$b_t$", r"$s_t$", r"$\hat{y}_t$"],
index=aust.index,
)
df.append(fit2.forecast(8).rename(r"$\hat{y}_t$").to_frame(), sort=True) | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
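The Box-Cox caveat above can be made concrete: with use_boxcox=True the smoother operates on transformed data, and fitted values must be mapped back through the inverse transform to live in the original space. A sketch of the transform pair (hypothetical helper names, not the statsmodels API):

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, log(y) for lam == 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Inverse transform, mapping values back to the original scale."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

z = boxcox([1.0, 2.0, 4.0], 0.5)
```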
Finally, let's look at the levels, slopes/trends and seasonal components of the models. | states1 = pd.DataFrame(
np.c_[fit1.level, fit1.trend, fit1.season],
columns=["level", "slope", "seasonal"],
index=aust.index,
)
states2 = pd.DataFrame(
np.c_[fit2.level, fit2.trend, fit2.season],
columns=["level", "slope", "seasonal"],
index=aust.index,
)
fig, [[ax1, ax4], [ax2, ax5], [ax3, ax6]] = plt.subplots(3, 2, figsize=(12, 8))
states1[["level"]].plot(ax=ax1)
states1[["slope"]].plot(ax=ax2)
states1[["seasonal"]].plot(ax=ax3)
states2[["level"]].plot(ax=ax4)
states2[["slope"]].plot(ax=ax5)
states2[["seasonal"]].plot(ax=ax6)
plt.show() | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Simulations and Confidence Intervals
By using a state space formulation, we can perform simulations of future values. The mathematical details are described in Hyndman and Athanasopoulos [2] and in the documentation of HoltWintersResults.simulate.
Similar to the example in [2], we use the model with additive trend, multiplicative seasonality, and multiplicative error. We simulate up to 8 steps into the future, and perform 100 simulations. As can be seen in the figure below, the simulations match the forecast values quite well.
[2] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice, 2nd edition. OTexts, 2018. | fit = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
initialization_method="estimated",
).fit()
simulations = fit.simulate(8, repetitions=100, error="mul")
ax = aust.plot(
figsize=(10, 6),
marker="o",
color="black",
title="Forecasts and simulations from Holt-Winters' multiplicative method",
)
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit.fittedvalues.plot(ax=ax, style="--", color="green")
simulations.plot(ax=ax, style="-", alpha=0.05, color="grey", legend=False)
fit.forecast(8).rename("Holt-Winters (add-mul-seasonal)").plot(
ax=ax, style="--", marker="o", color="green", legend=True
)
plt.show() | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
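Although this section's title mentions confidence intervals, none are computed above. One simple way to get approximate prediction intervals is to take pointwise percentiles across the simulated repetitions. A sketch using a synthetic stand-in for the (steps × repetitions) array of paths that simulate returns:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for fit.simulate(8, repetitions=100, ...): each column is
# one simulated future path, each row one forecast horizon.
paths = 40.0 + np.cumsum(rng.normal(0.5, 1.0, size=(8, 100)), axis=0)

# Pointwise 95% prediction interval across repetitions at each horizon.
lower = np.percentile(paths, 2.5, axis=1)
median = np.percentile(paths, 50.0, axis=1)
upper = np.percentile(paths, 97.5, axis=1)
```

The resulting lower/upper arrays can be drawn with ax.fill_between to shade the interval around the forecast line.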
Simulations can also be started at different points in time, and there are multiple options for choosing the random noise. | fit = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
initialization_method="estimated",
).fit()
simulations = fit.simulate(
16, anchor="2009-01-01", repetitions=100, error="mul", random_errors="bootstrap"
)
ax = aust.plot(
figsize=(10, 6),
marker="o",
color="black",
title="Forecasts and simulations from Holt-Winters' multiplicative method",
)
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit.fittedvalues.plot(ax=ax, style="--", color="green")
simulations.plot(ax=ax, style="-", alpha=0.05, color="grey", legend=False)
fit.forecast(8).rename("Holt-Winters (add-mul-seasonal)").plot(
ax=ax, style="--", marker="o", color="green", legend=True
)
plt.show() | v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Get status information related to your form-based request | # To be completed | dkrz_forms/Templates/Retrieve_Form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Contact the DKRZ data managers for form-related issues | # to be completed | dkrz_forms/Templates/Retrieve_Form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Here we import the NumPy and pandas data libraries with their standard abbreviations, plus HoloViews with its standard abbreviation hv. The line reading hv.extension('bokeh') loads and activates the bokeh plotting backend, so all visualizations will be generated using Bokeh. We will see how to use matplotlib instead of bokeh later in the tutorial Customizing Visual Appearance.
What are elements?
In short, elements are HoloViews' most basic, core primitives. All the various types of hv.Element accept semantic metadata that allows their input data to be given an automatic, visual representation. Most importantly, element objects always preserve the raw data they are supplied.
In this notebook we will explore a number of different element types and examine some of the ways that elements can supplement the supplied data with useful semantic data. To choose your own types to use in the exercises, you can browse them all in the reference gallery.
Creating elements
All basic elements accept their data as a single, mandatory positional argument which may be supplied in a number of different formats, some of which we will now examine. A handful of annotation elements are exceptions to this rule, namely Arrow, Text, Bounds, Box and Ellipse, as they require additional positional arguments.
A simple curve
To start with a simple example, we will sample a quadratic function $y=100-x^2$ at 21 different values of $x$ and wrap that data in a HoloViews element: | xs = [i for i in range(-10,11)]
ys = [100-(x**2) for x in xs]
simple_curve = hv.Curve((xs,ys))
simple_curve | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Here we supplied two lists of values as a tuple to [hv.Curve](http://build.holoviews.org/reference/elements/bokeh/Curve.html), assigned the result to the attribute simple_curve, and let Jupyter display the object using its default visual representation. As you can see, that default visual representation is a Bokeh plot, which is automatically generated by HoloViews when Jupyter requests it. But simple_curve itself is just a wrapper around your data, not a plot, and you can choose other representations that are not plots. For instance, printing the object will give you a purely textual representation instead: | print(simple_curve) | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
The textual representation indicates that this object is a continuous mapping from x to y, which is how HoloViews knew to render it as a continuous curve. You can also access the full original data if you wish: | #simple_curve.data | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
If you uncomment that line, you should see the original data values, though in some cases like this one the data has been converted to a better format (a Pandas dataframe instead of Python lists).
There are a number of similar elements to Curve such as Area and Scatter, which you can try out for yourself in the exercises. | # Exercise: Try switching hv.Curve with hv.Area and hv.Scatter
# Optional:
# Look at the .data attribute of the elements you created to see the raw data (as a pandas DataFrame)
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Annotating the curve
Wrapping your data (xs and ys) here as a HoloViews element is sufficient to make it visualizable, but there are many other aspects of the data that we can capture to convey more about its meaning to HoloViews. For instance, we might want to specify what the x-axis and y-axis actually correspond to, in the real world. Perhaps this parabola is the trajectory of a ball thrown into the air, in which case we could declare the object as: | trajectory = hv.Curve((xs,ys), kdims=['distance'], vdims=['height'])
trajectory | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Here we have added semantic information about our data to the Curve element. Specifically, we told HoloViews that the kdim or key dimension of our data corresponds to the real-world independent variable ('distance'), and the vdim or value dimension 'height' is the real-world dependent variable. Even though the additional information we provided is about the data, not directly about the plot, HoloViews is designed to reveal the properties of your data accurately, and so the axes now update to show what these dimensions represent. | # Exercise: Take a look at trajectory.vdims
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Casting between elements
The type of an element is a declaration of important facts about your data, which gives HoloViews the appropriate hint required to generate a suitable visual representation from it. For instance, calling it a Curve is a declaration from the user that the data consists of samples from an underlying continuous function, which is why HoloViews plots it as a connected object. If we convert to an hv.Scatter object instead, the same set of data will show up as separated points, because "Scatter" does not make an assumption that the data is meant to be continuous: | hv.Scatter(simple_curve) | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Casting the same data between different Element types in this way is often useful as a way to see your data differently, particularly if you are not certain of a single best way to interpret the data. Casting preserves your declared metadata as much as possible, propagating your declarations from the original object to the new one. | # How do you predict the representation for hv.Scatter(trajectory) will differ from
# hv.Scatter(simple_curve) above? Try it!
# Also try casting the trajectory to an area then back to a curve.
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Turning arrays into elements
The curve above was constructed from a list of x-values and a list of y-values. Next we will create an element using an entirely different datatype, namely a NumPy array: | x = np.linspace(0, 10, 500)
y = np.linspace(0, 10, 500)
xx, yy = np.meshgrid(x, y)
arr = np.sin(xx)*np.cos(yy)
image = hv.Image(arr) | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
As above, we know that this data was sampled from a continuous function, but this time the data is mapping from two key dimensions, so we declare it as an [hv.Image](http://build.holoviews.org/reference/elements/bokeh/Image.html) object. As you might expect, an Image object is visualized as an image by default: | image
# Exercise: Try visualizing different two-dimensional arrays.
# You can try a new function entirely or simple modifications of the existing one
# E.g., explore the effect of squaring and cubing the sine and cosine terms
# Optional: Try supplying appropriate labels for the x- and y- axes
# Hint: The x,y positions are how you *index* (or key) the array *values* (so x and y are both kdims)
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Selecting columns from tables to make elements
In addition to basic Python datatypes and xarray and NumPy array types, HoloViews elements can be passed tabular data in the form of pandas DataFrames: | economic_data = pd.read_csv('../data/macro.csv')
economic_data.tail() | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Let's build an element that helps us understand how the percentage growth in US GDP varies over time. As our dataframe contains GDP growth data for lots of countries, let us select the United States from the table and create a Curve element from it: | US_data = economic_data[economic_data['country'] == 'United States'] # Select data for the US only
US_data.tail()
growth_curve = hv.Curve(US_data, kdims=['year'], vdims=['growth'])
growth_curve | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
In this case, declaring the kdims and vdims does not simply declare the axis labels, it allows HoloViews to discover which columns of the data should be used from the dataframe for each of the axes. | # Exercise: Plot the unemployment (unem) over year
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Dimension labels
In this example, the simplistic axis labels are starting to get rather limiting. Changing the kdims and vdims is no longer trivial either, as they need to match the column names in the dataframe. Is the only solution to rename the columns in our dataframe to something more descriptive but more awkward to type?
Luckily, no. The recommendation is that you continue to use short, programmer and pandas-friendly, tab-completeable column names as these are also the most convenient dimension names to use with HoloViews.
What you should do instead is set the dimension labels, using the fact that dimensions are full, rich objects behind the scenes: | gdp_growth = growth_curve.redim.label(growth='GDP growth')
gdp_growth | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
With the redim method, we have associated a dimension label with the growth dimension, resulting in a new element called gdp_growth (you can check for yourself that growth_curve is unchanged). Let's look at what the new dimension contains: | gdp_growth.vdims
# Exercise: Use redim.label to give the year dimension a better label
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
The redim utility lets you easily change other dimension parameters, and as an example let's give our GDP growth dimension the appropriate unit: | gdp_growth.redim.unit(growth='%')
# Exercise: Use redim.unit to give the year dimension a better unit
# For instance, relabel to 'Time' then give the unit as 'year'
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Composing elements together
Viewing a single element at a time often conveys very little information for the space used. In this section, we introduce the two composition operators + and * to build Layout and Overlay objects.
Layouts
Earlier on we were casting a parabola to different element types. Viewing the different types was awkward, wasting lots of vertical space in the notebook. What we will often want to do is view these elements side by side: | layout = trajectory + hv.Scatter(trajectory) + hv.Area(trajectory) + hv.Spikes(trajectory)
layout.cols(2) | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
What we have created with the + operator is an hv.Layout object (with a hint that a two-column layout is desired): | print(layout) | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Now let us build a new layout by selecting elements from layout: | layout.Curve.I + layout.Spikes.I | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
We see that a Layout lets us pick component elements via two levels of tab-completable attribute access. Note that by default the type of the element defines the first level of access and the second level of access automatically uses Roman numerals (because Python identifiers cannot start with numbers).
These two levels correspond to another type of semantic declaration that applies to the elements directly (rather than their dimensions), called group and label. Specifically, group allows you to declare what kind of thing this object is, while label allows you to label which specific object it is. What you put in those declarations, if anything, will form the title of the plot: | cannonball = trajectory.relabel('Cannonball', group='Trajectory')
integral = hv.Area(trajectory).relabel('Filled', group='Trajectory')
labelled_layout = cannonball + integral
labelled_layout
# Exercise: Try out the tab-completion of labelled_layout to build a new layout swapping the position of these elements
# Optional: Try using two levels of dictionary-style access to grab the cannonball trajectory
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Overlays
Layout places objects side by side, allowing it to collect (almost!) any HoloViews objects that you want to indicate are related. Another operator * allows you to overlay elements into a single plot, if they live in the same space (with matching dimensions and similar ranges over those dimensions). The result of * is an Overlay: | trajectory * hv.Spikes(trajectory) | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
The indexing system of Overlay is identical to that of Layout. | # Exercise: Make an overlay of the Spikes object from layout on top of the filled trajectory area of labelled_layout
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
One thing that is specific to Overlays is the use of color cycles to automatically differentiate between elements of the same type and group: | tennis_ball = cannonball.clone((xs, 0.5*np.array(ys)), label='Tennis Ball')
cannonball + tennis_ball + (cannonball * tennis_ball) | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Here we use the clone method to make a shallower tennis-ball trajectory: the clone method creates a new object that preserves semantic metadata while allowing overrides (in this case we override the input data and the label).
As you can see, HoloViews can determine that the two overlaid curves will be distinguished by color, and so it also provides a legend so that the mapping from color to data is clear. | # Optional Exercise:
# 1. Create a thrown_ball curve with half the height of tennis_ball by cloning it and assigning the label 'Thrown ball'
# 2. Add thrown_ball to the overlay
| notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Slicing and selecting
HoloViews elements can be easily sliced using array-style syntax or using the .select method. The following example shows how we can slice the cannonball trajectory into its ascending and descending components: | full_trajectory = cannonball.redim.label(distance='Horizontal distance', height='Vertical height')
ascending = full_trajectory[-10:1].relabel('ascending')
descending = cannonball.select(distance=(0,11.)).relabel('descending')
ascending * descending | notebooks/01-introduction-to-elements.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Next, load the spectroscopy data that we are going to analyse using hoggorm. After the data has been loaded into the pandas data frame, we'll display it in the notebook. | # Load data
# Insert code for reading data from other folder in repository instead of directly from same repository.
data_df = pd.read_csv('gasoline_NIR.txt', header=None, sep=r'\s+') | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
Let's have a look at the dimensions of the data frame. | np.shape(data_df) | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
The nipalsPCA class in hoggorm accepts only numpy arrays with numerical values and not pandas data frames. Therefore, the pandas data frame holding the imported data needs to be "taken apart" into three parts:
* a numpy array holding the numeric values
* a Python list holding variable (column) names
* a Python list holding object (row) names.
The array with values will be used as input for the nipalsPCA class for analysis. The Python lists holding the variable and row names will be used later in the plotting function from the hoggormPlot package when visualising the results of the analysis. Below is the code needed to access the data values; the variable and object names can be extracted in the same way from the data frame's columns and index. | # Get the values from the data frame
data = data_df.values | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
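The variable and object names mentioned above can be pulled from the data frame in the same step. A small sketch on a toy data frame (a stand-in for data_df; the names and values here are illustrative only):

```python
import pandas as pd

# Toy stand-in for data_df (the real notebook reads gasoline_NIR.txt).
df = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
                  index=['obj1', 'obj2'],
                  columns=['var1', 'var2'])

values = df.values              # numeric array, input for nipalsPCA
var_names = list(df.columns)    # variable (column) names for hoggormPlot
obj_names = list(df.index)      # object (row) names for hoggormPlot
```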
Apply PCA to our data
Now, let's run PCA on the data using the nipalsPCA class. The documentation provides a description of the input parameters. Using input parameter arrX we define which numpy array we would like to analyse. By setting input parameter Xstand=False we make sure that the variables are only mean centered, not scaled to unit variance. This is the default setting and doesn't actually need to be expressed explicitly. Setting parameter cvType=["loo"] we make sure that we compute the PCA model using full cross validation. "loo" means "Leave One Out". By setting parameter numComp=5 we ask for five principal components (PC) to be computed. | model = ho.nipalsPCA(arrX=data, Xstand=False, cvType=["loo"], numComp=5) | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
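For intuition, PCA on mean-centered data spans the same subspace regardless of whether it is computed by NIPALS or by an SVD. The following NumPy sketch (not hoggorm's implementation, and without the cross-validation step) shows how scores and loadings relate to the centered data matrix; the random matrix stands in for the spectra:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 6))                # stand-in for the data array

Xc = X - X.mean(axis=0)                     # mean centering (Xstand=False)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

num_comp = 2
scores = U[:, :num_comp] * s[:num_comp]     # analogous to model.X_scores()
loadings = Vt[:num_comp].T                  # analogous to model.X_loadings()
explained = s**2 / np.sum(s**2)             # variance explained per component
```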
That's it, the PCA model has been computed. Now we would like to inspect the results by visualising them. We can do this using the tailor-made plotting function for PCA from the separate hoggormPlot package. If we wish to plot the results for component 1 and component 2, we can do this by setting the input argument comp=[1, 2]. The input argument plots=[1, 6] lets the user define which plots are to be plotted. If this list for example contains value 1, the function will generate the scores plot for the model. If the list contains value 6, the function will generate an explained variance plot. The hoggormPlot documentation provides a description of input parameters. | hop.plot(model, comp=[1, 2],
plots=[1, 6]) | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
It is also possible to generate the same plots one by one with specific plot functions as shown below. | hop.loadings(model, line=True) | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
Accessing numerical results
Now that we have visualised the PCA results, we may also want to access the numerical results. Below are some examples. For a complete list of accessible results, please see this part of the documentation. | # Get scores and store in numpy array
scores = model.X_scores()
# Get scores and store in pandas dataframe with row and column names
scores_df = pd.DataFrame(model.X_scores())
#scores_df.index = data_objNames
scores_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_scores().shape[1])]
scores_df
help(ho.nipalsPCA.X_scores)
# Dimension of the scores
np.shape(model.X_scores()) | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
We see that the numpy array holds the scores for five components as required when computing the PCA model. | # Get loadings and store in numpy array
loadings = model.X_loadings()
# Get loadings and store in pandas dataframe with row and column names
loadings_df = pd.DataFrame(model.X_loadings())
#loadings_df.index = data_varNames
loadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]
loadings_df
help(ho.nipalsPCA.X_loadings)
np.shape(model.X_loadings())
# Get loadings and store in numpy array
loadings = model.X_corrLoadings()
# Get loadings and store in pandas dataframe with row and column names
loadings_df = pd.DataFrame(model.X_corrLoadings())
#loadings_df.index = data_varNames
loadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_corrLoadings().shape[1])]
loadings_df
help(ho.nipalsPCA.X_corrLoadings)
# Get calibrated explained variance of each component
calExplVar = model.X_calExplVar()
# Get calibrated explained variance and store in pandas dataframe with row and column names
calExplVar_df = pd.DataFrame(model.X_calExplVar())
calExplVar_df.columns = ['calibrated explained variance']
calExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]
calExplVar_df
help(ho.nipalsPCA.X_calExplVar)
# Get cumulative calibrated explained variance
cumCalExplVar = model.X_cumCalExplVar()
# Get cumulative calibrated explained variance and store in pandas dataframe with row and column names
cumCalExplVar_df = pd.DataFrame(model.X_cumCalExplVar())
cumCalExplVar_df.columns = ['cumulative calibrated explained variance']
cumCalExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumCalExplVar_df
help(ho.nipalsPCA.X_cumCalExplVar)
# Get cumulative calibrated explained variance for each variable
cumCalExplVar_ind = model.X_cumCalExplVar_indVar()
# Get cumulative calibrated explained variance for each variable and store in pandas dataframe with row and column names
cumCalExplVar_ind_df = pd.DataFrame(model.X_cumCalExplVar_indVar())
#cumCalExplVar_ind_df.columns = data_varNames
cumCalExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumCalExplVar_ind_df
help(ho.nipalsPCA.X_cumCalExplVar_indVar)
# Get calibrated predicted X for a given number of components
# Predicted X from calibration using 1 component
X_from_1_component = model.X_predCal()[1]
# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names
X_from_1_component_df = pd.DataFrame(model.X_predCal()[1])
#X_from_1_component_df.index = data_objNames
#X_from_1_component_df.columns = data_varNames
X_from_1_component_df
# Get predicted X for a given number of components
# Predicted X from calibration using 4 components
X_from_4_component = model.X_predCal()[4]
# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names
X_from_4_component_df = pd.DataFrame(model.X_predCal()[4])
#X_from_4_component_df.index = data_objNames
#X_from_4_component_df.columns = data_varNames
X_from_4_component_df
help(ho.nipalsPCA.X_predCal)
# Get validated explained variance of each component
valExplVar = model.X_valExplVar()
# Get calibrated explained variance and store in pandas dataframe with row and column names
valExplVar_df = pd.DataFrame(model.X_valExplVar())
valExplVar_df.columns = ['validated explained variance']
valExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]
valExplVar_df
help(ho.nipalsPCA.X_valExplVar)
# Get cumulative validated explained variance
cumValExplVar = model.X_cumValExplVar()
# Get cumulative validated explained variance and store in pandas dataframe with row and column names
cumValExplVar_df = pd.DataFrame(model.X_cumValExplVar())
cumValExplVar_df.columns = ['cumulative validated explained variance']
cumValExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumValExplVar_df
help(ho.nipalsPCA.X_cumValExplVar)
# Get cumulative validated explained variance for each variable
cumCalExplVar_ind = model.X_cumCalExplVar_indVar()
# Get cumulative validated explained variance for each variable and store in pandas dataframe with row and column names
cumValExplVar_ind_df = pd.DataFrame(model.X_cumValExplVar_indVar())
#cumValExplVar_ind_df.columns = data_varNames
cumValExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumValExplVar_ind_df
help(ho.nipalsPCA.X_cumValExplVar_indVar)
# Get validated predicted X for a given number of components
# Predicted X from validation using 1 component
X_from_1_component_val = model.X_predVal()[1]
# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names
X_from_1_component_val_df = pd.DataFrame(model.X_predVal()[1])
#X_from_1_component_val_df.index = data_objNames
#X_from_1_component_val_df.columns = data_varNames
X_from_1_component_val_df
# Get validated predicted X for a given number of components
# Predicted X from validation using 3 components
X_from_3_component_val = model.X_predVal()[3]
# Predicted X from calibration using 3 components stored in pandas data frame with row and columns names
X_from_3_component_val_df = pd.DataFrame(model.X_predVal()[3])
#X_from_3_component_val_df.index = data_objNames
#X_from_3_component_val_df.columns = data_varNames
X_from_3_component_val_df
help(ho.nipalsPCA.X_predVal)
# Get predicted scores for new measurements (objects) of X
# First pretend that we acquired new X data by using part of the existing data and overlaying some noise
import numpy.random as npr
new_data = data[0:4, :] + npr.rand(4, np.shape(data)[1])
np.shape(new_data)
# Now insert the new data into the existing model and compute scores for two components (numComp=2)
pred_scores = model.X_scores_predict(new_data, numComp=2)
# Same as above, but results stored in a pandas dataframe with row names and column names
pred_scores_df = pd.DataFrame(model.X_scores_predict(new_data, numComp=2))
pred_scores_df.columns = ['PC{0}'.format(x) for x in range(2)]
pred_scores_df.index = ['new object {0}'.format(x) for x in range(np.shape(new_data)[0])]
pred_scores_df
help(ho.nipalsPCA.X_scores_predict) | examples/PCA/PCA_on_spectroscopy_data.ipynb | olivertomic/hoggorm | bsd-2-clause |
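The projection logic behind score prediction can be cross-checked with plain numpy. This is only a sketch under the usual PCA conventions (mean-centering, SVD-based scores and loadings) with made-up matrices, not the NIR data:

```python
import numpy as np

# PCA via SVD on mean-centered data: scores T = U*S, loadings P = V.
# New observations are centered with the training mean and projected onto P.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:2].T                   # loadings for the first 2 components
new = rng.normal(size=(3, 5))  # pretend these are new measurements
new_scores = (new - mean) @ P  # analogous to X_scores_predict(new, numComp=2)
print(new_scores.shape)        # (3, 2)
```

Keeping all components instead of two would reconstruct the centered data exactly, which is the relationship that ties the scores, loadings and explained-variance outputs above together.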
<div style="float: right; color: red;">Please, rename this file to <code style="color:red">HW6.ipynb</code> and save it in <code style="color:red">MSA8010F16/HW6</code>
</div>
Homework 6: Preprocessing Data
We use a data set from the UCI Machine Learning Repository
https://archive.ics.uci.edu/ml/datasets/Bank+Marketing
to experiment with a Decision Tree classifier http://www.saedsayad.com/decision_tree.htm
Scikit-Learn: http://scikit-learn.org/stable/modules/tree.html#tree
Book slides:
- http://131.96.197.204/~pmolnar/mlbook/BookSlides_4A_Information-based_Learning.pdf
- http://131.96.197.204/~pmolnar/mlbook/BookSlides_4B_Information-based_Learning.pdf
Bank Marketing Data Set
The data is related with direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict if the client will subscribe a term deposit (variable y).
Data Set Information:
The data is related with direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to assess if the product (bank term deposit) would be ('yes') or not ('no') subscribed.
There are four datasets:
1) bank-additional-full.csv with all examples (41188) and 20 inputs, ordered by date (from May 2008 to November 2010), very close to the data analyzed in [Moro et al., 2014]
2) bank-additional.csv with 10% of the examples (4119), randomly selected from 1), and 20 inputs.
3) bank-full.csv with all examples and 17 inputs, ordered by date (older version of this dataset with less inputs).
4) bank.csv with 10% of the examples and 17 inputs, randomly selected from 3 (older version of this dataset with less inputs).
The smallest datasets are provided to test more computationally demanding machine learning algorithms (e.g., SVM).
The classification goal is to predict if the client will subscribe (yes/no) a term deposit (variable y).
Attribute Information:
Input variables:
- bank client data:
1 age (numeric)
2 job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')
3 marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)
4 education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')
5 default: has credit in default? (categorical: 'no','yes','unknown')
6 housing: has housing loan? (categorical: 'no','yes','unknown')
7 loan: has personal loan? (categorical: 'no','yes','unknown')
- related with the last contact of the current campaign:
8 contact: contact communication type (categorical: 'cellular','telephone')
9 month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')
10 day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')
11 duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.
- other attributes:
12 campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
13 pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
14 previous: number of contacts performed before this campaign and for this client (numeric)
15 poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')
- social and economic context attributes
16 emp.var.rate: employment variation rate - quarterly indicator (numeric)
17 cons.price.idx: consumer price index - monthly indicator (numeric)
18 cons.conf.idx: consumer confidence index - monthly indicator (numeric)
19 euribor3m: euribor 3 month rate - daily indicator (numeric)
20 nr.employed: number of employees - quarterly indicator (numeric)
Output variable (desired target):
21 y - has the client subscribed a term deposit? (binary: 'yes','no') | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
DATAFILE = '/home/data/archive.ics.uci.edu/BankMarketing/bank.csv'
###DATAFILE = 'data/bank.csv' ### using locally
df = pd.read_csv(DATAFILE, sep=';')
list(df.columns) | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
Step 1: Investigate Data Set
We have a number of categorical variables: What's their cardinality? How are the levels distributed?
What's the distribution on numeric values? Do we see any correlations?
Let's first look at columns (i.e. variables) with continuous values. We can get a sense of the distribution from aggregate functions like mean, standard deviation, quantiles, as well as minimum and maximum values.
The Pandas method describe creates a table view of those metrics. (The method can also be used to identify numeric features in the data frame.) | ### use sets and '-' difference operation 'A-B'. Also there is a symmetric difference '^'
all_features = set(df.columns)-set(['y'])
num_features = set(df.describe().columns)
cat_features = all_features-num_features
print("All features: ", ", ".join(all_features), "\nNumerical features: ", ", ".join(num_features), "\nCategorical features: ", ", ".join(cat_features))
set(df.columns)-set(df.describe().columns)-set('y')
### Describe Columns
help(pd.DataFrame.describe)
### Let's get the description of the numeric data for each of the target values separately.
### We need to rename the columns before we can properly join the tables. The column names may look strange...
desc_yes = df[df.y=='yes'].describe().rename(columns=lambda c: "%s|A"%c)
desc_no = df[df.y=='no'].describe().rename(columns=lambda c: "%s|B"%c)
### ...but this way we can get them in the desired order...
desc = desc_yes.join(desc_no).reindex(sorted(desc_yes.columns), axis=1)
### ...because we're changing them anyway:
#desc.set_axis(1, [sorted(list(num_features)*2), ['yes', 'no']*len(num_features)])
#desc | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
Let's look at the distribution of numerical features... | %matplotlib inline
fig = plt.figure(figsize=(32, 8))
for i in range(len(num_features)):
f = list(num_features)[i]
plt.subplot(2, 4, i+1)
hst = plt.hist(df[f], alpha=0.5)
plt.title(f)
plt.suptitle('Distribution of Numeric Values', fontsize=20)
None | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
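The correlations question raised at the start of this step can be answered with DataFrame.corr(); here is a minimal sketch on made-up numeric columns (on the bank data the analogous call would be df[list(num_features)].corr()):

```python
import pandas as pd

# Pairwise Pearson correlations; made-up columns standing in for the bank data.
demo = pd.DataFrame({'age': [25, 35, 45, 55], 'balance': [100, 300, 500, 700]})
corr = demo.corr()
print(corr.loc['age', 'balance'])  # ~1.0 (perfectly linear by construction)
```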
Now, let's look at the categorical variables and their distribution... | for f in cat_features:
tab = df[f].value_counts()
print('%s:\t%s' % (f, ', '.join([ ("%s(%d)" %(tab.index[i], tab.values[i])) for i in range(len(tab))]) )) | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
Results in a data frame: | mat = pd.DataFrame(
[ df[f].value_counts() for f in list(cat_features) ],
index=list(cat_features)
).stack()
pd.DataFrame(mat.values, index=mat.index) | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
Step 2: Prepare for ML algorithm
The ML algorithms in Scikit-Learn use Matrices (with numeric values). We need to convert our data-frame into a feature matrix X and a target vector y.
Many algorithms also require the features to be in the same range. Decision-trees don't bother because they don't perform any operations across features.
Use the pd.DataFrame.to_numpy method (the modern replacement for the deprecated as_matrix) to convert a DataFrame into a matrix. | help(pd.DataFrame.to_numpy)
## We copy our original dataframe into a new one, and then perform replacements on categorical levels.
## We may also keep track of our replacement
level_substitution = {}
def levels2index(levels):
dct = {}
for i in range(len(levels)):
dct[levels[i]] = i
return dct
df_num = df.copy()
for c in cat_features:
level_substitution[c] = levels2index(df[c].unique())
df_num[c].replace(level_substitution[c], inplace=True)
## same for target
df_num.y.replace({'no':0, 'yes':1}, inplace=True)
df_num
level_substitution | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
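As an aside, pandas can do the same level-to-integer mapping in a single call with factorize; this is a sketch of the idea, not part of the original assignment:

```python
import pandas as pd

# factorize returns integer codes plus the levels in order of first appearance,
# mirroring what levels2index/replace do above.
s = pd.Series(['married', 'single', 'married', 'divorced'])
codes, levels = pd.factorize(s)
print(list(codes))   # [0, 1, 0, 2]
print(list(levels))  # ['married', 'single', 'divorced']
```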
Step 3: Training
Now that we have our DataFrame prepared, we can create the feature matrix X and target vector y:
1. split data into training and test sets
2. fit the model | X = df_num[list(all_features)].as_matrix()
y = df_num.y.as_matrix()
X, y
### Scikit-learn provides us with a nice function to split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4, random_state=42)
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)
score_train = clf.score(X_train, y_train)
score_test = clf.score(X_test, y_test)
print('Ratio of correctly classified samples for:\n\tTraining-set:\t%f\n\tTest-set:\t%f'%(score_train, score_test)) | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
score returns the mean accuracy on the given test data and labels. In multi-label classification this is the subset accuracy, which is a harsh metric since it requires the entire label set of each sample to be predicted correctly. For binary classification it is simply the percentage of correctly classified samples.
The score should be close to 1, though a single number does not tell the whole story...
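For a binary target, that score is just the fraction of matching labels, which is easy to verify by hand on a toy example:

```python
import numpy as np

# Mean accuracy on a toy binary example: fraction of positions where
# the prediction equals the true label.
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])
accuracy = (y_true == y_pred).mean()
print(accuracy)  # 0.8
```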
Step 4: Evaluate Model
predict $\hat y$ for your model on test set
calculate confusion matrix and derive measures
visualize if suitable
Let's see what we got. We can actually print the entire decision tree and trace for each sample ... though you may need to use the viz-wall for that. | import sklearn.tree
import pydot_ng as pdot
dot_data = sklearn.tree.export_graphviz(clf, out_file=None, feature_names = list(all_features), class_names=['no', 'yes'])
graph = pdot.graph_from_dot_data(dot_data)
#--- we can save the graph into a file ... preferrably vector graphics
#graph.write_svg('mydt.svg')
graph.write_pdf('/home/pmolnar/public_html/mydt.pdf')
#--- or display right here
from IPython.display import HTML
HTML(str(graph.create_svg().decode('utf-8'))) | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0
Now, we use our classifier and predict on the test set. (In order to get the ŷ character, type 'y\hat' followed by the TAB-key.) | ŷ = clf.predict(X_test)
## a function that produces the confusion matrix: 1. parameter y=actual target, 2. parameter ŷ=predicted
def binary_confusion_matrix(y,ŷ):
TP = ((y+ŷ)== 2).sum()
TN = ((y+ŷ)== 0).sum()
FP = ((y-ŷ)== -1).sum()
FN = ((y-ŷ)== 1).sum()
return pd.DataFrame( [[TP, FP], [FN, TN]], index=[['Prediction', 'Prediction'],['Yes', 'No']], columns=[['Actual', 'Actual'],['Yes', 'No']])
cm = binary_confusion_matrix(y_test, ŷ)
cm
### Scikit-Learn can do that too ... so so nice though
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, ŷ)
cm
### Here are some metrics
from sklearn.metrics import classification_report
print(classification_report(y_test, ŷ))
### http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
import itertools
np.set_printoptions(precision=2)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
%matplotlib inline
fig = plt.figure()
plot_confusion_matrix(cm, classes=['No', 'Yes'], normalize=True, title='Normalized confusion matrix')
plt.show() | DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb | squishbug/DataScienceProgramming | cc0-1.0 |
Sample Dataset <a name="section10"></a>
The file sample_frame.csv - shown below - contains synthetic data of 100 clusters classified by region (East, North, South and West). Clusters represent a group of households. In the file, each cluster has an associated number of households (number_households) and a status variable indicating whether the cluster is in scope or not.
This synthetic data represents a simplified version of enumeration areas (EAs) frames found in many countries and used by major household survey programs such as the Demographic and Health Surveys (DHS), the Population-based HIV Impact Assessment (PHIA) surveys and the Multiple Cluster Indicator Surveys (MICS). | psu_frame_cls = PSUFrame()
psu_frame_cls.load_data()
psu_frame = psu_frame_cls.data
psu_frame.head(25) | docs/source/tutorial/psu_selection.ipynb | survey-methods/samplics | mit |
Often, sampling frames are not available for the sampling units of interest. For example, most countries do not have a list of all households or people living in the country. Even if such frames exist, it may not be operationally and financially feasible to directly select sampling units without any form of clustering.
Hence, stage sampling is a common strategy used by large household national surveys for selecting samples of households and people. At the first stage, geographic or administrative clusters of households are selected. At the second stage, a frame of households is created from the selected clusters and a sample of households is selected. At the third stage (if applicable), a sample of people is selected from the households in the sample. This is a high level description of the process; usually implementations are much less straightforward and may require many adjustments to address complexities.
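As an illustration of how overall selection probabilities come about in such a design (hypothetical numbers, not from the frame above), the stage probabilities simply multiply:

```python
# Stage 1 (pps): 3 clusters sampled; this cluster holds 500 of the
# stratum's 10,000 households. Stage 2: 25 of its 500 households sampled.
p_cluster = 3 * 500 / 10_000
p_household = 25 / 500
p_overall = p_cluster * p_household  # overall inclusion probability of a household
print(round(p_overall, 6))           # 0.0075
```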
PSU Probability of Selection <a name="section11"></a>
At the first stage, we use the probability proportional to size (pps) method to select a random sample of clusters. The measure of size is the number of households (number_households) as provided in the psu sampling frame. The sample is stratified by region. Under stratified pps, the probability of selection is obtained as follows: \begin{equation} p_{hi} = \frac{n_h M_{hi}}{\sum_{i=1}^{N_h}{M_{hi}}} \end{equation} where $p_{hi}$ is the probability of selection for unit $i$ from stratum $h$, $M_{hi}$ is the measure of size (mos), and $n_h$ and $N_h$ are the sample size and the total number of clusters in stratum $h$, respectively.
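The formula can be checked directly with numpy for one hypothetical stratum (made-up measures of size, not the frame above):

```python
import numpy as np

# p_hi = n_h * M_hi / sum_i(M_hi) for a single stratum with n_h = 2.
mos = np.array([100, 300, 200, 400])  # M_hi: households per cluster
n_h = 2
p = n_h * mos / mos.sum()
print(p)  # [0.2 0.6 0.4 0.8]
```

Note that the probabilities sum to $n_h$, the number of clusters sampled from the stratum.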
Important. The pps method is used in many surveys, not just multistage household surveys. For example, establishments in business surveys can greatly vary in size; hence pps methods are often used to select samples. Similarly, facility-based surveys can benefit from pps methods when frames with measures of size are available.
PSU Sample size
For a stratified sampling design, the sample size is provided using a Python dictionary. Python dictionaries allow us to pair the strata with the sample sizes. Let's say that we want to select 3 clusters from stratum East, 2 from West, 2 from North and 3 from South. The snippet of code below demonstrates how to create the Python dictionary. Note that it is important to correctly spell out the keys of the dictionary which corresponds to the values of the variable stratum (in our case it's region). | psu_sample_size = {"East":3, "West": 2, "North": 2, "South": 3}
print(f"\nThe sample size per domain is: {psu_sample_size}\n") | docs/source/tutorial/psu_selection.ipynb | survey-methods/samplics | mit |
The function array_to_dict() converts an array to a dictionary by pairing the values of the array with their frequencies. We can use this function to calculate the number of clusters per stratum and store the result in a Python dictionary. Then, we modify the values of the dictionary to create the sample size dictionary.
If some of the clusters are certainties then an exception will be raised. Hence, the user will have to handle the certainties manually. Better handling of certainties is planned for future versions of the library samplics. | from samplics import array_to_dict
frame_size = array_to_dict(psu_frame["region"])
print(f"\nThe number of clusters per stratum is: {frame_size}")
psu_sample_size = frame_size.copy()
psu_sample_size["East"] = 3
psu_sample_size["North"] = 2
psu_sample_size["South"] = 3
psu_sample_size["West"] = 2
print(f"\nThe sample size per stratum is: {psu_sample_size}\n")
stage1_design = SampleSelection(method="pps-sys", stratification=True, with_replacement=False)
psu_frame["psu_prob"] = stage1_design.inclusion_probs(
psu_frame["cluster"],
psu_sample_size,
psu_frame["region"],
psu_frame["number_households_census"],
)
nb_obs = 15
print(f"\nFirst {nb_obs} observations of the PSU frame \n")
psu_frame.head(nb_obs) | docs/source/tutorial/psu_selection.ipynb | survey-methods/samplics | mit |
PSU Selection <a name="section12"></a>
In this section, we select a sample of PSUs using pps methods. In the section above, we calculated the probabilities of selection. That step is not necessary when using samplics. We can use the method select() to calculate the probability of selection and select the sample in one run. As shown below, the select() method returns a tuple of three arrays.
* The first array indicates the selected units (i.e. psu_sample = 1 if selected, and 0 if not selected).
* The second array provides the number of hits, useful when the sample is selected with replacement.
* The third array is the probability of selection.
NB: np.random.seed() fixes the random seed to allow us to reproduce the random selection. | np.random.seed(23)
psu_frame["psu_sample"], psu_frame["psu_hits"], psu_frame["psu_probs"] = stage1_design.select(
psu_frame["cluster"],
psu_sample_size,
psu_frame["region"],
psu_frame["number_households_census"]
)
nb_obs = 15
print(f"\nFirst {nb_obs} observations of the PSU frame with the sampling information \n")
psu_frame.head(nb_obs) | docs/source/tutorial/psu_selection.ipynb | survey-methods/samplics | mit |
The default setting sample_only=False returns the entire frame. We can easily reduce the output data to the sample by filtering, i.e. psu_sample == 1. However, if we are only interested in the sample, we could use sample_only=True when calling select(). This will reduce the output data to the sampled units, and to_dataframe=True will convert the data to a pandas dataframe (pd.DataFrame). Note that the columns in the dataframe will be reduced to the minimum. | np.random.seed(23)
psu_sample = stage1_design.select(
psu_frame["cluster"],
psu_sample_size,
psu_frame["region"],
psu_frame["number_households_census"],
to_dataframe = True,
sample_only = True
)
print("\nPSU sample without the non-sampled units\n")
psu_sample | docs/source/tutorial/psu_selection.ipynb | survey-methods/samplics | mit |
The systematic selection method can be implemented with or without replacement. The other samplics algorithms for selecting samples with unequal probabilities of selection are the Brewer, Hanurav-Vijayan (hv), Murphy, and Rao-Sampford (rs) methods. As shown below, all these sampling techniques can be specified when instantiating a SampleSelection class; then call select() to draw samples.
```python
SampleSelection(method="pps-sys", with_replacement=True)
SampleSelection(method="pps-sys", with_replacement=False)
SampleSelection(method="pps-brewer", with_replacement=False)
SampleSelection(method="pps-hv", with_replacement=False)  # Hanurav-Vijayan method
SampleSelection(method="pps-murphy", with_replacement=False)
SampleSelection(method="pps-rs", with_replacement=False)  # Rao-Sampford method
```
For example, if we wanted to select the sample using the Rao-Sampford method, we could use the following snippet of code. | np.random.seed(23)
stage1_sampford = SampleSelection(method="pps-rs", stratification=True, with_replacement=False)
psu_sample_sampford = stage1_sampford.select(
psu_frame["cluster"],
psu_sample_size,
psu_frame["region"],
psu_frame["number_households_census"],
to_dataframe=True,
sample_only=False
)
psu_sample_sampford | docs/source/tutorial/psu_selection.ipynb | survey-methods/samplics | mit |
Sorting the rows by any of the dictionary fields is easy to do. Code example: | from operator import itemgetter
rows_by_fname = sorted(rows, key = itemgetter("fname"))
print(rows_by_fname)
rows_by_uid = sorted(rows, key = itemgetter("uid"))
print(rows_by_uid) | 01 data structures and algorithms/01.13 sort list of dicts by key.ipynb | wuafeing/Python3-Tutorial | gpl-3.0 |
The output of the preceding code is shown above.
The itemgetter() function also supports multiple keys, as in the following code: | rows_by_lfname = sorted(rows, key = itemgetter("lname", "fname"))
print(rows_by_lfname) | 01 data structures and algorithms/01.13 sort list of dicts by key.ipynb | wuafeing/Python3-Tutorial | gpl-3.0 |
Discussion
In this example, rows is passed to the built-in sorted() function, which accepts the keyword argument key. This argument is expected to be a callable that takes a single item from rows and returns a value that will be used as the basis for sorting. The itemgetter() function creates just such a callable.
The operator.itemgetter() function takes as arguments the lookup indices used to extract the desired values from the records in rows. An index can be a dictionary key name, a numeric list position, or any value that can be fed to an object's __getitem__() method. If you pass multiple indices to itemgetter(), the callable it produces returns a tuple with all of those values, and sorted() orders the output according to the sorted order of the tuples. This is useful if you want to sort on several fields at once (such as last and first name, as shown in the example).
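To see the mechanics, here is a tiny self-contained version with sample data (the actual rows in this recipe are defined earlier in the notebook):

```python
from operator import itemgetter

rows = [
    {'fname': 'Brian', 'lname': 'Jones', 'uid': 1003},
    {'fname': 'David', 'lname': 'Beazley', 'uid': 1002},
    {'fname': 'John', 'lname': 'Cleese', 'uid': 1001},
]

# itemgetter('lname', 'fname') builds a callable mapping each row to the
# tuple (row['lname'], row['fname']); sorted() compares those tuples.
key = itemgetter('lname', 'fname')
print(key(rows[0]))                               # ('Jones', 'Brian')
print([r['uid'] for r in sorted(rows, key=key)])  # [1002, 1001, 1003]
```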
The functionality of itemgetter() can sometimes be replaced by lambda expressions, for example: | rows_by_fname = sorted(rows, key = lambda r: r["fname"])
rows_by_lfname = sorted(rows, key = lambda r: (r["lname"], r["fname"])) | 01 data structures and algorithms/01.13 sort list of dicts by key.ipynb | wuafeing/Python3-Tutorial | gpl-3.0 |
This solution also works fine. However, the itemgetter() version typically runs a bit faster, so prefer it if performance matters.
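You can measure the difference yourself with timeit (absolute numbers will vary by machine):

```python
import timeit
from operator import itemgetter

data = [{'uid': i} for i in range(1000)]
t_getter = timeit.timeit(lambda: sorted(data, key=itemgetter('uid')), number=200)
t_lambda = timeit.timeit(lambda: sorted(data, key=lambda r: r['uid']), number=200)
print(f"itemgetter: {t_getter:.4f}s, lambda: {t_lambda:.4f}s")
```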
Last, but not least, don't forget that the technique shown in this section also applies to functions such as min() and max(). For example: | min(rows, key = itemgetter("uid"))
max(rows, key = itemgetter("uid")) | 01 data structures and algorithms/01.13 sort list of dicts by key.ipynb | wuafeing/Python3-Tutorial | gpl-3.0 |
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/classify_text_with_bert"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/collections/bert/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Classify text with BERT
This tutorial contains complete code to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews.
In addition to training a model, you will learn how to preprocess text into an appropriate format.
In this notebook, you will:
Load the IMDB dataset
Load a BERT model from TensorFlow Hub
Build your own model by combining BERT with a classifier
Train your own model, fine-tuning BERT as part of that
Save your model and use it to classify sentences
If you're new to working with the IMDB dataset, please see Basic text classification for more details.
About BERT
BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers.
BERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks.
Setup | # A dependency of the preprocessing for BERT inputs
!pip install -q -U tensorflow-text | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
You will use the AdamW optimizer from tensorflow/models. | !pip install -q tf-models-official
import os
import shutil
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from official.nlp import optimization # to create AdamW optimizer
import matplotlib.pyplot as plt
tf.get_logger().setLevel('ERROR') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Sentiment analysis
This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review.
You'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database.
Download the IMDB dataset
Let's download and extract the dataset, then explore the directory structure. | url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
dataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
train_dir = os.path.join(dataset_dir, 'train')
# remove unused folders to make it easier to load the data
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset.
The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.
Note: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap. | AUTOTUNE = tf.data.AUTOTUNE
batch_size = 32
seed = 42
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
class_names = raw_train_ds.class_names
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/test',
batch_size=batch_size)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
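The reason a fixed seed matters with validation_split is that the same files then always land in the same split across the two calls. A pure-Python sketch of that idea (a hypothetical helper, not the Keras implementation):

```python
import random

def seeded_split(items, val_fraction=0.2, seed=42):
    """Deterministically shuffle and split items into (train, val)."""
    items = list(items)
    random.Random(seed).shuffle(items)   # same seed -> same shuffled order
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]

train, val = seeded_split(range(100))
train2, val2 = seeded_split(range(100))
assert train == train2 and val == val2   # reproducible across calls
assert not set(train) & set(val)         # no train/validation overlap
```

With a different seed (or no seed) in the second call, the two splits could overlap, which is exactly the pitfall the note above warns about.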
Let's take a look at a few reviews. | for text_batch, label_batch in train_ds.take(1):
for i in range(3):
print(f'Review: {text_batch.numpy()[i]}')
label = label_batch.numpy()[i]
print(f'Label : {label} ({class_names[label]})') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune. There are multiple BERT models available.
BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
ALBERT: four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers.
BERT Experts: eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task.
Electra has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN).
BERT with Talking-Heads Attention and Gated GELU [base, large] has two improvements to the core of the Transformer architecture.
The model documentation on TensorFlow Hub has more details and references to the
research literature. Follow the links above, or click on the tfhub.dev URL
printed after the next cell execution.
The suggestion is to start with a Small BERT (with fewer parameters) since they are faster to fine-tune. If you like a small model but with higher accuracy, ALBERT might be your next option. If you want even better accuracy, choose
one of the classic BERT sizes or their recent refinements like Electra, Talking Heads, or a BERT Expert.
Aside from the models available below, there are multiple versions of the models that are larger and can yield even better accuracy, but they are too big to be fine-tuned on a single GPU. You will be able to do that on the Solve GLUE tasks using BERT on a TPU colab.
You'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub. | #@title Choose a BERT model to fine-tune
bert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8' #@param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_cased_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base"]
map_name_to_handle = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_base/2',
'electra_small':
'https://tfhub.dev/google/electra_small/2',
'electra_base':
'https://tfhub.dev/google/electra_base/2',
'experts_pubmed':
'https://tfhub.dev/google/experts/bert/pubmed/2',
'experts_wiki_books':
'https://tfhub.dev/google/experts/bert/wiki_books/2',
'talking-heads_base':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',
}
map_model_to_preprocess = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'electra_small':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'electra_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_pubmed':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_wiki_books':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
}
tfhub_handle_encoder = map_name_to_handle[bert_model_name]
tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]
print(f'BERT model selected : {tfhub_handle_encoder}')
print(f'Preprocess model auto-selected: {tfhub_handle_preprocess}') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
The preprocessing model
Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text.
The preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. For BERT models from the drop-down above, the preprocessing model is selected automatically.
Note: You will load the preprocessing model into a hub.KerasLayer to compose your fine-tuned model. This is the preferred API to load a TF2-style SavedModel from TF Hub into a Keras model. | bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Let's try the preprocessing model on some text and see the output: | text_test = ['this is such an amazing movie!']
text_preprocessed = bert_preprocess_model(text_test)
print(f'Keys : {list(text_preprocessed.keys())}')
print(f'Shape : {text_preprocessed["input_word_ids"].shape}')
print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}')
print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}')
print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
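For intuition about the input_mask printed above: it is 1 for real tokens ([CLS], word pieces, [SEP]) and 0 for the padding out to the fixed length. A tiny pure-Python illustration with made-up token ids (this is not the real vocabulary, just the pattern):

```python
# Hypothetical 12-slot window: [CLS] + 6 word-piece ids + [SEP], padded with 0
input_word_ids = [101, 2023, 2003, 2107, 2019, 6429, 3185, 102, 0, 0, 0, 0]
input_mask = [1 if tok != 0 else 0 for tok in input_word_ids]
num_real_tokens = sum(input_mask)
print(input_mask)       # [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
print(num_real_tokens)  # 8
```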
As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_word_ids, input_mask and input_type_ids).
Some other important points:
- The input is truncated to 128 tokens. The number of tokens can be customized, and you can see more details on the Solve GLUE tasks using BERT on a TPU colab.
- The input_type_ids only have one value (0) because this is a single sentence input. For a multiple sentence input, it would have one number for each input.
Since this text preprocessor is a TensorFlow model, it can be included in your model directly.
Using the BERT model
Before putting BERT into your own model, let's take a look at its outputs. You will load it from TF Hub and see the returned values. | bert_model = hub.KerasLayer(tfhub_handle_encoder)
bert_results = bert_model(text_preprocessed)
print(f'Loaded BERT: {tfhub_handle_encoder}')
print(f'Pooled Outputs Shape:{bert_results["pooled_output"].shape}')
print(f'Pooled Outputs Values:{bert_results["pooled_output"][0, :12]}')
print(f'Sequence Outputs Shape:{bert_results["sequence_output"].shape}')
print(f'Sequence Outputs Values:{bert_results["sequence_output"][0, :12]}') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
The BERT models return a map with 3 important keys: pooled_output, sequence_output, encoder_outputs:
pooled_output represents each input sequence as a whole. The shape is [batch_size, H]. You can think of this as an embedding for the entire movie review.
sequence_output represents each input token in the context. The shape is [batch_size, seq_length, H]. You can think of this as a contextual embedding for every token in the movie review.
encoder_outputs are the intermediate activations of the L Transformer blocks. outputs["encoder_outputs"][i] is a Tensor of shape [batch_size, seq_length, H] with the outputs of the i-th Transformer block, for 0 <= i < L. The last value of the list is equal to sequence_output.
For the fine-tuning you are going to use the pooled_output array.
Define your model
You will create a very simple fine-tuned model consisting of the preprocessing model, the selected BERT model, a Dropout layer, and a single Dense output layer.
Note: for more information about the base model's input and output you can follow the model's URL for documentation. Here specifically, you don't need to worry about it because the preprocessing model will take care of that for you. | def build_classifier_model():
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing')
encoder_inputs = preprocessing_layer(text_input)
encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder')
outputs = encoder(encoder_inputs)
net = outputs['pooled_output']
net = tf.keras.layers.Dropout(0.1)(net)
net = tf.keras.layers.Dense(1, activation=None, name='classifier')(net)
return tf.keras.Model(text_input, net) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Let's check that the model runs with the output of the preprocessing model. | classifier_model = build_classifier_model()
bert_raw_result = classifier_model(tf.constant(text_test))
print(tf.sigmoid(bert_raw_result)) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
The output is meaningless, of course, because the model has not been trained yet.
Let's take a look at the model's structure. | tf.keras.utils.plot_model(classifier_model) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Model training
You now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier.
Loss function
Since this is a binary classification problem and the model outputs a single raw logit (a one-unit Dense layer with no activation), you'll use the losses.BinaryCrossentropy loss function with from_logits=True. | loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
metrics = tf.metrics.BinaryAccuracy() | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
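For intuition, binary cross-entropy on a raw logit can be computed by hand. This is a naive sketch of the math only; TF's from_logits=True path uses a numerically stable fused form instead:

```python
import math

def bce_from_logit(logit, label):
    """Naive binary cross-entropy: sigmoid, then negative log-likelihood."""
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# An uninformative logit of 0 gives probability 0.5, so the loss is ln(2)
print(bce_from_logit(0.0, 1))  # ~0.6931
```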
Optimizer
For fine-tuning, let's use the same optimizer that BERT was originally trained with: the "Adaptive Moments" (Adam). This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as AdamW.
For the learning rate (init_lr), you will use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5). | epochs = 5
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5
optimizer = optimization.create_optimizer(init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
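The shape of that schedule (linear warm-up to init_lr over the first 10% of steps, then linear decay to zero) can be sketched in plain Python. This only illustrates the curve, not the exact TF op that create_optimizer builds; the default step counts below assume the 20,000-example training split and batch size 32 used in this notebook:

```python
def lr_at_step(step, init_lr=3e-5, num_train_steps=3125, num_warmup_steps=312):
    """Piecewise-linear: 0 -> init_lr over warmup, then init_lr -> 0."""
    if step < num_warmup_steps:
        return init_lr * step / num_warmup_steps
    remaining = num_train_steps - step
    return max(0.0, init_lr * remaining / (num_train_steps - num_warmup_steps))

print(lr_at_step(0), lr_at_step(312), lr_at_step(3125))
```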
Loading the BERT model and training
Using the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer. | classifier_model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Note: training time will vary depending on the complexity of the BERT model you have selected. | print(f'Training model with {tfhub_handle_encoder}')
history = classifier_model.fit(x=train_ds,
validation_data=val_ds,
epochs=epochs) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Evaluate the model
Let's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy. | loss, accuracy = classifier_model.evaluate(test_ds)
print(f'Loss: {loss}')
print(f'Accuracy: {accuracy}') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Plot the accuracy and loss over time
Based on the History object returned by model.fit(), you can plot the training and validation loss for comparison, as well as the training and validation accuracy: | history_dict = history.history
print(history_dict.keys())
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
plt.subplot(2, 1, 1)
# r is for "solid red line"
plt.plot(epochs, loss, 'r', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
# plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right') | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy.
Export for inference
Now you just save your fine-tuned model for later use. | dataset_name = 'imdb'
saved_model_path = './{}_bert'.format(dataset_name.replace('/', '_'))
classifier_model.save(saved_model_path, include_optimizer=False) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Let's reload the model, so you can try it side by side with the model that is still in memory. | reloaded_model = tf.saved_model.load(saved_model_path) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
Here you can test your model on any sentence you want, just add to the examples variable below. | def print_my_examples(inputs, results):
result_for_printing = \
[f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}'
for i in range(len(inputs))]
print(*result_for_printing, sep='\n')
print()
examples = [
'this is such an amazing movie!', # this is the same sentence tried earlier
'The movie was great!',
'The movie was meh.',
'The movie was okish.',
'The movie was terrible...'
]
reloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples)))
original_results = tf.sigmoid(classifier_model(tf.constant(examples)))
print('Results from the saved model:')
print_my_examples(examples, reloaded_results)
print('Results from the model in memory:')
print_my_examples(examples, original_results) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows: | serving_results = reloaded_model \
.signatures['serving_default'](tf.constant(examples))
serving_results = tf.sigmoid(serving_results['classifier'])
print_my_examples(examples, serving_results) | third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb | nwjs/chromium.src | bsd-3-clause |
https://www.youtube.com/watch?v=ElmBrKyMXxs
https://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb
https://github.com/ematvey/tensorflow-seq2seq-tutorials | from __future__ import division
import tensorflow as tf
from os import path, remove
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \
renderStatsListWithLabels, renderStatsCollectionOfCrossValids
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from mylibs.tf_helper import getDefaultGPUconfig
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from collections import OrderedDict
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from common import get_or_run_nn
from data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider
from data_providers.price_history_dataset_generator import PriceHistoryDatasetGenerator
from skopt.space.space import Integer, Real
from skopt import gp_minimize
from skopt.plots import plot_convergence
import pickle
import inspect
import dill
import sys
from models.price_history_seq2seq_raw_dummy import PriceHistorySeq2SeqRawDummy
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
n_jobs = 1
%matplotlib inline
bb = tf.constant(0., dtype=tf.float32)
bb.get_shape()
aa = tf.zeros((40, 2))
aa.get_shape().concatenate(tf.TensorShape([1])) | 04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Step 0 - hyperparams
vocab_size (all the potential words you could have, i.e. the classification targets in the translation case) and the max sequence length are the SAME thing
Decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but for our case there does not really seem to be such a relationship; we can experiment and find out later, it is not a priority right now. | epochs = 15
num_features = 1
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 50 #47
#trunc_backprop_len = ??
rnn_cell = PriceHistorySeq2SeqRawDummy.RNN_CELLS.GRU
with_EOS = False
total_train_size = 57994
train_size = 6400
test_size = 1282 | 04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Once generate data | data_path = '../data/price_history'
#npz_full_train = data_path + '/price_history_03_dp_60to30_train.npz'
#npz_full_train = data_path + '/price_history_60to30_targets_normed_train.npz'
#npz_train = data_path + '/price_history_03_dp_60to30_57980_train.npz'
#npz_train = data_path + '/price_history_03_dp_60to30_6400_train.npz'
npz_train = data_path + '/price_history_60to30_6400_targets_normed_train.npz'
#npz_test = data_path + '/price_history_03_dp_60to30_test.npz'
npz_test = data_path + '/price_history_60to30_targets_normed_test.npz'
# PriceHistoryDatasetGenerator.create_subsampled(inpath=npz_full_train, target_size=6400, outpath=npz_train,
# random_state=random_state)
# %%time
# csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'
# train_sku_ids, train_XX, train_YY, train_sequence_lens, train_seq_mask, test_pack = \
# PriceHistoryDatasetGenerator(random_state=random_state).\
# createAndSaveDataset(
# csv_in=csv_in,
# input_seq_len=input_len,
# target_seq_len=target_len,
# allowSmallerSequencesThanWindow=False,
# #min_date = '2016-11-01',
# split_fraction = 0.40,
# #keep_training_fraction = 0.22, #57994 * 0.22 = 12758.68
# normalize_targets = True,
# #disable saving for now since we have already created them
# save_files_dic = {"train": npz_full_train, "test": npz_test,},
# )
# print train_sku_ids.shape, train_XX.shape, train_YY.shape, train_sequence_lens.shape, train_seq_mask.shape
# aa,bb,cc,dd,ee = test_pack.get_data()
# aa.shape,bb.shape,cc.shape,dd.shape,ee.shape | 04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
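The dataset generator above slices each price-history series into a 60-step input window followed by a 30-step target window. A pure-Python sketch of that sliding-window idea (a hypothetical helper, not the repo's actual implementation, which also handles masking and normalization):

```python
def window_series(series, input_len, target_len):
    """Slide over a series, yielding (input_window, target_window) pairs."""
    pairs = []
    total = input_len + target_len
    for start in range(len(series) - total + 1):
        x = series[start:start + input_len]
        y = series[start + input_len:start + total]
        pairs.append((x, y))
    return pairs

pairs = window_series(list(range(10)), input_len=3, target_len=2)
print(len(pairs), pairs[0])  # 6 ([0, 1, 2], [3, 4])
```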
Step 1 - collect data | dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)
dp.inputs.shape, dp.targets.shape
aa, bb = dp.next()
aa.shape, bb.shape | 04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Step 2 - Build model | model = PriceHistorySeq2SeqRawDummy(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
graph = model.getGraph(batch_size=batch_size,
num_units=num_units,
input_len=input_len,
target_len=target_len,
rnn_cell=rnn_cell)
#show_graph(graph) | 04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Step 3 training the network
RECALL: the baseline is around 4 for the Huber loss on the current problem; anything above 4 should be considered a major error | #rnn_cell = PriceHistorySeq2SeqCV.RNN_CELLS.GRU
#cross_val_n_splits = 5
epochs, num_units, batch_size
#set(factors(train_size)).intersection(factors(train_size/5))
best_learning_rate = 1e-3 #0.0026945952539362472
def experiment():
return model.run(npz_path=npz_train,
epochs=10,
batch_size = 50,
num_units = 400,
input_len=input_len,
target_len=target_len,
learning_rate = best_learning_rate,
preds_gather_enabled=True,
#eos_token = float(1e3),
rnn_cell=rnn_cell)
dyn_stats, preds_dict = experiment() | 04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
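The baseline figure recalled above is in Huber-loss units; the per-element Huber loss itself is simple to write down (delta=1 is assumed here, the notebook's exact delta is not shown):

```python
def huber(error, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = abs(error)
    if a <= delta:
        return 0.5 * a * a
    return delta * (a - 0.5 * delta)

print(huber(0.5))  # 0.125
print(huber(2.0))  # 1.5
```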
Recall that without batch normalization, within 10 epochs with num_units=400 and batch_size=64, we reached 4.940, and that was with the decoder inputs NOT filled from the outputs. | %%time
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='017_seq2seq_60to30_epochs{}_learning_rate_{:.4f}'.format(
epochs, best_learning_rate
))
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
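The R² score used above compares the residual sum of squares to the variance of the targets; written out by hand it is equivalent to sklearn's `r2_score` in this 1D case:

```python
import numpy as np

def r2(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot; 1.0 is a perfect fit,
    0.0 is no better than predicting the mean."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

print(r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
print(r2([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # 0.0 (same as predicting the mean)
```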
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
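`fastdtw` approximates dynamic time warping; the exact distance it approximates is a small dynamic program, O(n·m) and perfectly affordable for the 30-step targets here:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW with absolute-difference cost and no window constraint."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(dtw_distance([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # 2.0 (warping beats the diagonal cost of 3)
```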
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show() | 04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Pre-processing a single image | original = imread('data/some_signature.png')
# Manually normalizing the image following the steps provided in the paper.
# These steps are also implemented in preprocess.normalize.preprocess_signature
normalized = 255 - normalize_image(original, size=(952, 1360))
resized = resize_image(normalized, (170, 242))
cropped = crop_center(resized, (150,220))
# Visualizing the intermediate steps
f, ax = plt.subplots(4,1, figsize=(6,15))
ax[0].imshow(original, cmap='Greys_r')
ax[1].imshow(normalized)
ax[2].imshow(resized)
ax[3].imshow(cropped)
ax[0].set_title('Original')
ax[1].set_title('Background removed/centered')
ax[2].set_title('Resized')
ax[3].set_title('Cropped center of the image') | interactive_example.ipynb | luizgh/sigver_wiwd | bsd-2-clause |
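The `crop_center` step above simply slices a fixed-size window around the middle of the image. A plausible implementation (an assumption — the real one lives in the project's `preprocess` module):

```python
import numpy as np

def crop_center(img, size):
    """Cut a (rows, cols) window centred on the middle of `img`."""
    rows, cols = size
    start_r = (img.shape[0] - rows) // 2
    start_c = (img.shape[1] - cols) // 2
    return img[start_r:start_r + rows, start_c:start_c + cols]

resized = np.zeros((170, 242))          # the resized shape used above
cropped = crop_center(resized, (150, 220))
print(cropped.shape)  # (150, 220)
```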
Processing multiple images and obtaining feature vectors | user1_sigs = [imread('data/a%d.png' % i) for i in [1,2]]
user2_sigs = [imread('data/b%d.png' % i) for i in [1,2]]
canvas_size = (952, 1360)
processed_user1_sigs = np.array([preprocess_signature(sig, canvas_size) for sig in user1_sigs])
processed_user2_sigs = np.array([preprocess_signature(sig, canvas_size) for sig in user2_sigs])
# Shows pre-processed samples of the two users
f, ax = plt.subplots(2,2, figsize=(10,6))
ax[0,0].imshow(processed_user1_sigs[0])
ax[0,1].imshow(processed_user1_sigs[1])
ax[1,0].imshow(processed_user2_sigs[0])
ax[1,1].imshow(processed_user2_sigs[1]) | interactive_example.ipynb | luizgh/sigver_wiwd | bsd-2-clause |
Using the CNN to obtain the feature representations | # Path to the learned weights
model_weight_path = 'models/signet.pkl'
# Instantiate the model
model = CNNModel(signet, model_weight_path)
# Obtain the features. Note that you can process multiple images at the same time
user1_features = model.get_feature_vector_multiple(processed_user1_sigs, layer='fc2')
user2_features = model.get_feature_vector_multiple(processed_user2_sigs, layer='fc2') | interactive_example.ipynb | luizgh/sigver_wiwd | bsd-2-clause |
Inspecting the learned features
The feature vectors have size 2048: | user1_features.shape
print('Euclidean distance between signatures from the same user')
print(np.linalg.norm(user1_features[0] - user1_features[1]))
print(np.linalg.norm(user2_features[0] - user2_features[1]))
print('Euclidean distance between signatures from different users')
dists = [np.linalg.norm(u1 - u2) for u1 in user1_features for u2 in user2_features]
print(dists)
# Other models:
# model_weight_path = 'models/signetf_lambda0.95.pkl'
# model_weight_path = 'models/signetf_lambda0.999.pkl' | interactive_example.ipynb | luizgh/sigver_wiwd | bsd-2-clause |
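A naive verifier on top of these features simply thresholds the Euclidean distance: genuine pairs should fall below it, pairs from different writers above it. A sketch with synthetic 2048-dimensional vectors (the threshold value is illustrative, not learned from data):

```python
import numpy as np

def same_writer(f1, f2, threshold=20.0):
    """Accept the pair as genuine when the feature distance is under the threshold."""
    return np.linalg.norm(np.asarray(f1) - np.asarray(f2)) < threshold

rng = np.random.RandomState(0)
base = rng.randn(2048)
close = base + 0.001 * rng.randn(2048)   # stand-in for a same-writer signature
far = rng.randn(2048) * 5                # stand-in for a different writer
print(same_writer(base, close))  # True
print(same_writer(base, far))    # False at this scale
```

In practice the threshold (or a full classifier such as an SVM per user, as in the paper's writer-dependent setting) would be fit on held-out genuine and forged signatures rather than chosen by hand.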