This concludes our overview of high-level plotting with regard to the topic of the Grammar of Graphics in the Python (and especially matplotlib) world. Next we will look at high-level plotting examples in the context of a particular data set and various methods for analyzing trends in that data.

Data analysis

This next section will cover the use of matplotlib and some of the related Python libraries from the scientific computing ecosystem in order to explore more facets of high-level plotting, but with a focus on the practical, hands-on aspect.

Pandas, SciPy, and Seaborn

In this section on data analysis, we will be making heavy use of the Pandas, SciPy, and Seaborn libraries. Here is a quick review of each:

* Pandas - Python has long been great for data munging and preparation, but less so for data analysis and modeling. Pandas helps fill this gap, enabling you to carry out your entire data analysis workflow in Python without having to switch to a more domain-specific language like R.
* SciPy - The SciPy library is one of the core packages that make up the SciPy stack. It provides many user-friendly and efficient numerical routines, such as routines for numerical integration and optimization; clustering; image analysis and signal processing; and statistics, among others.
* Seaborn - Seaborn aims to make visualization a central part of exploring and understanding data. Its plotting functions operate on dataframes and arrays containing a whole dataset and internally perform the necessary aggregation and statistical model-fitting to produce informative plots. Seaborn's goals are similar to those of R's ggplot, but it takes a different approach with an imperative and object-oriented style that tries to make it straightforward to construct sophisticated plots. If matplotlib "tries to make easy things easy and hard things possible", seaborn aims to make a well-defined set of hard things easy too.
Examining and shaping a data set

Let's do the imports we will need and set the Seaborn style for our plots:
import calendar

import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats

sns.set(style="darkgrid")
github/MasteringMatplotlib/mmpl-high-level.ipynb
moonbury/pythonanywhere
gpl-3.0
For the following sections we will be using the precipitation and temperature data for Saint Francis, Kansas, USA, from 1894 to 2013. You can obtain CSV files for weather stations that interest you from the United States Historical Climatology Network. Let's load the CSV data that's been prepared for us, using the Pandas CSV converter:
data_file = "../data/KS147093_0563_data_only.csv"
data = pd.read_csv(data_file)
This will have read the data in and instantiated a Pandas DataFrame object, converting the first row to column data:
data.columns
Here's what the data set looks like (well, the first bit of it, anyway):
data.head()
We'd like to see the "Month" data as names rather than numbers, so let's update that (but let's create a copy of the original, in case we need it later). We will be using month numbers and names later, so we'll set those now as well.
data_raw = pd.read_csv(data_file)
month_nums = list(range(1, 13))
month_lookup = {x: calendar.month_name[x] for x in month_nums}
month_names = [x[1] for x in sorted(month_lookup.items())]
data["Month"] = data["Month"].map(month_lookup)
data.head()
That's better :-) We're going to make repeated use of some of this data, so let's pull those bits out:
years = data["Year"].values
temps_degrees = data["Mean Temperature (F)"].values
precips_inches = data["Precipitation (in)"].values
Let's confirm the date range we're working with:
years_min = data.get("Year").min()
years_min
years_max = data.get("Year").max()
years_max
Let's get the maximum and minimum values for our mean temperature and precipitation data:
temp_max = data.get("Mean Temperature (F)").max()
temp_max
temp_min = data.get("Mean Temperature (F)").min()
temp_min
precip_max = data.get("Precipitation (in)").max()
precip_max
precip_min = data.get("Precipitation (in)").min()
precip_min
Next, we'll create a Pandas pivot table, providing us with a convenient view of our data (making some of our analysis tasks much easier). If we use our converted data frame here (the one where we updated month numbers to names), our table will have the data in alphabetical order by month. As such, we'll want to use the raw data (the copy we made before converting), and only once it has been put into a pivot table (in numerical order) will we update it with month names. Here's how:
temps = data_raw.pivot("Month", "Year", "Mean Temperature (F)")
temps.index = [calendar.month_name[x] for x in temps.index]
temps
Let's do the same thing for precipitation:
precips = data_raw.pivot("Month", "Year", "Precipitation (in)")
precips.index = [calendar.month_name[x] for x in precips.index]
precips
We've extracted most of the data and views that we'll need for the following sections, which are:

* Analysis of Temperature, 1894-2013
* Analysis of Precipitation, 1894-2013

We've got everything we need now; let's get started!

Analysis of Temperature, 1894-2013

We're going to be analyzing temperatures in this section; let's create an appropriate color map to use in our various plots:
temps_colors = ["#FCF8D4", "#FAEAB9", "#FAD873", "#FFA500", "#FF8C00", "#B22222"]
sns.palplot(temps_colors)
temps_cmap = mpl.colors.LinearSegmentedColormap.from_list("temp colors", temps_colors)
Now let's take a look at the temperature data we have:
sns.set(style="ticks")
(figure, axes) = plt.subplots(figsize=(18, 6))
scatter = axes.scatter(years, temps_degrees, s=100, color="0.5", alpha=0.5)
axes.set_xlim([years_min, years_max])
axes.set_ylim([temp_min - 5, temp_max + 5])
axes.set_title("Mean Monthly Temperatures from 1894-2013\nSaint Francis, KS, USA",
               fontsize=20)
axes.set_xlabel("Years", fontsize=16)
_ = axes.set_ylabel("Temperature (F)", fontsize=16)
Notice something? The banding around the minimum and maximum values looks to be trending upwards. The scatter plot makes it a bit hard to see, though. We're going to need to do some work to make sure we're not just seeing things. So what do we want to do?

* get the maximum and minimum values for every year
* find the best-fit line through those points
* examine the slopes
* compare the slopes

Let's do math! There are a couple of conveniences we can take advantage of:

* SciPy provides several options for linear (and polynomial!) fitting and regression
* We can create a Pandas Series instance that represents our linear model and use it like the other Pandas objects we're working with in this section.
def get_fit(series, m, b):
    x = series.index
    y = m * x + b
    return pd.Series(y, x)

temps_max_x = temps.max().index
temps_max_y = temps.max().values
temps_min_x = temps.min().index
temps_min_y = temps.min().values
(temps_max_slope, temps_max_intercept, temps_max_r_value,
 temps_max_p_value, temps_max_std_err) = stats.linregress(temps_max_x, temps_max_y)
temps_max_fit = get_fit(temps.max(), temps_max_slope, temps_max_intercept)
(temps_min_slope, temps_min_intercept, temps_min_r_value,
 temps_min_p_value, temps_min_std_err) = stats.linregress(temps_min_x, temps_min_y)
temps_min_fit = get_fit(temps.min(), temps_min_slope, temps_min_intercept)
Let's look at the slopes of the two:
(temps_max_slope, temps_min_slope)
Quick refresher: the slope $m$ is defined as the change in $y$ values over the change in $x$ values: \begin{align} m = \frac{\Delta y}{\Delta x} = \frac{\text{vertical} \, \text{change} }{\text{horizontal} \, \text{change} } \end{align} In our case, the $y$ values are the minimum and maximum mean monthly temperatures in degrees Fahrenheit; the $x$ values are the years these measurements were taken. The slope for the minimum mean monthly temperatures over the last 120 years is about 3 times greater than that of the maximum mean monthly temperatures:
temps_min_slope/temps_max_slope
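As a quick sanity check on the formula, we can verify that `stats.linregress` recovers the slope of a perfectly linear series. This is a minimal sketch using hypothetical data (not the weather set): since $y = 0.5x + 2$ by construction, the slope must come back as 0.5.

```python
import numpy as np
from scipy import stats

# Hypothetical data: a perfect line y = 0.5x + 2 over our year range,
# so linregress should recover slope 0.5 and intercept 2.
x = np.arange(1894, 2014)
y = 0.5 * x + 2
(slope, intercept, r_value, p_value, std_err) = stats.linregress(x, y)
print(slope, intercept, r_value)
```

For real, noisy data the slope is instead the least-squares estimate, and `r_value` tells us how well the line explains the data.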
Let's go back to our scatter plot and superimpose our linear fits for the maximum and minimum annual means:
(figure, axes) = plt.subplots(figsize=(18, 6))
scatter = axes.scatter(years, temps_degrees, s=100, color="0.5", alpha=0.5)
temps_max_fit.plot(ax=axes, lw=5, color=temps_colors[5], alpha=0.7)
temps_min_fit.plot(ax=axes, lw=5, color=temps_colors[3], alpha=0.7)
axes.set_xlim([years_min, years_max])
axes.set_ylim([temp_min - 5, temp_max + 5])
axes.set_title(("Mean Monthly Temperatures from 1894-2013\n"
                "Saint Francis, KS, USA\n(with max and min fit)"), fontsize=20)
axes.set_xlabel("Years", fontsize=16)
_ = axes.set_ylabel("Temperature (F)", fontsize=16)
By looking at the gaps above and below the min and max fits, it seems like there is a greater rise in the minimums. We can get a better visual, though, by superimposing the two lines. Let's remove the vertical distance and compare:
diff_1894 = temps_max_fit.iloc[0] - temps_min_fit.iloc[0]
diff_2013 = temps_max_fit.iloc[-1] - temps_min_fit.iloc[-1]
(diff_1894, diff_2013)
So that's the difference between the high and low for 1894 and then the difference in 2013. As we can see, the trend over the last century for this one weather station has been a lessening in the difference between the maximum and minimum values. Let's shift the highs down by the difference in 2013 and compare the slopes overlaid:
vert_shift = temps_max_fit - diff_2013
(figure, axes) = plt.subplots(figsize=(18, 6))
vert_shift.plot(ax=axes, lw=5, color=temps_colors[5], alpha=0.7)
temps_min_fit.plot(ax=axes, lw=5, color=temps_colors[3], alpha=0.7)
axes.set_xlim([years_min, years_max])
axes.set_ylim([vert_shift.min() - 5, vert_shift.max() + 1])
axes.set_title(("Mean Monthly Temperature Difference from 1894-2013\n"
                "Saint Francis, KS, USA\n(vertical offset adjusted to converge at 2013)"),
               fontsize=20)
axes.set_xlabel("Years", fontsize=16)
_ = axes.set_ylabel("Temperature\nDifference (F)", fontsize=16)
Now you can see the difference! Let's tweak our seaborn style for the next set of plots we'll be doing:
sns.set(style="darkgrid")
Seaborn offers some plots that are very useful when looking at lots of data:

* heat maps
* cluster maps (and the normalized variant)

Let's use the first one next, to get a sense of what the mean temperatures look like for each month over the course of the given century -- without any analysis, just a visualization of the raw data.
(figure, axes) = plt.subplots(figsize=(17, 9))
axes.set_title(("Heat Map\nMean Monthly Temperatures, 1894-2013\n"
                "Saint Francis, KS, USA"), fontsize=20)
sns.heatmap(temps, cmap=temps_cmap, cbar_kws={"label": "Temperature (F)"})
figure.tight_layout()
If you want to render your plot as the book has published it, you can do the following instead:

```python
sns.set(font_scale=1.8)
(figure, axes) = plt.subplots(figsize=(17, 9))
axes.set_title(("Heat Map\nMean Monthly Temperatures, 1894-2013\n"
                "Saint Francis, KS, USA"), fontsize=24)
xticks = temps.columns
keptticks = xticks[::int(len(xticks)/36)]
xticks = ['' for y in xticks]
xticks[::int(len(xticks)/36)] = keptticks
sns.heatmap(temps, linewidth=0, xticklabels=xticks, cmap=temps_cmap,
            cbar_kws={"label": "Temperature (F)"})
figure.tight_layout()
```

Given that this is a town in the Northern hemisphere near the 40th parallel, we don't see any surprises:

* the highest temperatures are in the summer
* the lowest temperatures are in the winter

There is some interesting summer banding in the 1930s which indicates several years of hotter-than-normal summers. There also seems to be a wide band of cold Decembers from 1907 through about 1932.

Next we're going to look at Seaborn's cluster map functionality. Cluster maps of this sort are very useful in sorting out data that may have hidden (or not-so-hidden) hierarchical structure. We don't expect that with this data set, so this is more a demonstration of the plot than anything else. However, it might still hold a few insights for us; we shall see. Because this is a composite plot, we'll need to access the subplot axes provided by the ClusterMap class.
clustermap = sns.clustermap(
    temps, figsize=(19, 12),
    cbar_kws={"label": "Temperature\n(F)"},
    cmap=temps_cmap)
_ = clustermap.ax_col_dendrogram.set_title(
    "Cluster Map\nMean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
    fontsize=20)
For the book version:

```python
sns.set(font_scale=1.5)
xticks = temps.columns
keptticks = xticks[::int(len(xticks)/36)]
xticks = ['' for y in xticks]
xticks[::int(len(xticks)/36)] = keptticks
clustermap = sns.clustermap(
    temps, figsize=(19, 12), linewidth=0, xticklabels=xticks,
    cmap=temps_cmap, cbar_kws={"label": "Temperature\n(F)"})
_ = clustermap.ax_col_dendrogram.set_title(
    "Cluster Map\nMean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
    fontsize=24)
```

So here's what has happened: while keeping the temperatures for each year together, the $x$ (years) and $y$ (months) values have been sorted and grouped to sit next to those with which they share the most similarity. Here's what we can discern from the graph with regard to our current data set:

* The century's temperature patterns each year can be viewed in two groups: higher and lower temperatures.
* January and December share similar low-temperature patterns, with the next-closest being February.
* The next grouping of similar temperature patterns is November and March, a sibling to the Jan/Dec/Feb grouping.
* The last grouping of the low-temperature months is the April/October pairing.
* A similar analysis (with no surprises) can be done for the high-temperature months.

Looking across the $x$-axis, we can view patterns and groupings by year. With careful tracing (ideally with a larger rendering of the cluster map), one could identify similar temperature patterns in various years. Though this doesn't reveal anything intrinsically, it could assist in additional analysis (e.g., pointing towards historical records to examine for possible causes of the trends). There are two distinct bands that show up for two different groups of years. However, when rendering this image at twice its current width, the banding goes away; it's an artifact of this particular resolution (and the decreased spacing between the given years).
In the cluster map above, we passed a value for the color map to use, the one we defined at the beginning of this section. If we leave that out, seaborn will do something quite nice: it will normalize our data and then select a color map that highlights values above and below the mean. Let's try that :-)
clustermap = sns.clustermap(
    temps, z_score=1, figsize=(19, 12),
    cbar_kws={"label": "Normalized\nTemperature (F)"})
_ = clustermap.ax_col_dendrogram.set_title(
    "Normalized Cluster Map\nMean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
    fontsize=20)
For the book version:

```python
sns.set(font_scale=1.5)
clustermap = sns.clustermap(
    temps, z_score=1, figsize=(19, 12), linewidth=0, xticklabels=xticks,
    cbar_kws={"label": "Normalized\nTemperature (F)"})
_ = clustermap.ax_col_dendrogram.set_title(
    "Normalized Cluster Map\nMean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
    fontsize=24)
```

Note that we get the same grouping as in the previous cluster map; the internal values at each coordinate of the map (and the associated color) are all that have changed. This view offers great insight for statistical data: not only do we see the large and obvious grouping between above and below the mean, but the colors give obvious insights as to how far any given point is from the overall mean.

With the next plot, we're going to return to two previous plots:

* the temperature heat map
* the previous scatter plot for our temperature data

Seaborn has an option for heat maps to display a histogram above them. We will see this usage when we examine the precipitation data. For the temperatures, however, counts for a year aren't quite as meaningful as the actual values for each month of that year. As such, we will replace the standard histogram with our scatter plot:
figure = plt.figure(figsize=(18, 13))
grid_spec = plt.GridSpec(2, 2, width_ratios=[50, 1], height_ratios=[1, 3],
                         wspace=0.05, hspace=0.05)
scatter_axes = figure.add_subplot(grid_spec[0])
cluster_axes = figure.add_subplot(grid_spec[2])
colorbar_axes = figure.add_subplot(grid_spec[3])
scatter_axes.scatter(years, temps_degrees, s=40, c="0.3", alpha=0.5)
scatter_axes.set(xticks=[], ylabel="Yearly Temp. (F)")
scatter_axes.set_xlim([years_min, years_max])
scatter_axes.set_title(
    "Heat Map with Scatter Plot\nMean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
    fontsize=20)
sns.heatmap(temps, cmap=temps_cmap, ax=cluster_axes, cbar_ax=colorbar_axes,
            cbar_kws={"orientation": "vertical"})
_ = colorbar_axes.set(xlabel="Temperature\n(F)")
For the book version:

```python
sns.set(font_scale=1.8)
figure = plt.figure(figsize=(18, 13))
grid_spec = plt.GridSpec(2, 2, width_ratios=[50, 1], height_ratios=[1, 3],
                         wspace=0.05, hspace=0.05)
scatter_axes = figure.add_subplot(grid_spec[0])
cluster_axes = figure.add_subplot(grid_spec[2])
colorbar_axes = figure.add_subplot(grid_spec[3])
scatter_axes.scatter(years, temps_degrees, s=40, c="0.3", alpha=0.5)
scatter_axes.set(xticks=[], ylabel="Yearly Temp. (F)")
scatter_axes.set_xlim([years_min, years_max])
scatter_axes.set_title(
    "Heat Map with Scatter Plot\nMean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
    fontsize=20)
sns.heatmap(temps, cmap=temps_cmap, ax=cluster_axes, linewidth=0,
            xticklabels=xticks, cbar_ax=colorbar_axes,
            cbar_kws={"label": "Temperature\n(F)"})
```

No new insights here; rather, this is a demonstration of combining two views of the same data for easier examination and exploration of trends.

Next, let's take a closer look at average monthly temperatures by month using a histogram matrix. To do this, we'll need a new pivot. Our first one created a pivot with the "Month" data as the index; now we want to index by "Year". We'll try the same trick of keeping the data in the correct month order by converting the month numbers to names after we create the pivot table ... but in the case of the histogram matrix plot, that won't actually help us: to keep the sorting correct, we'll need to prepend the zero-filled month number:
temps2 = data_raw.pivot("Year", "Month", "Mean Temperature (F)")
temps2.columns = [str(x).zfill(2) + " - " + calendar.month_name[x]
                  for x in temps2.columns]
monthly_means = temps2.mean()
temps2.head()
Now we're ready for our histogram. We'll use the histogram provided by Pandas for this. Unfortunately, Pandas does not return the figure and axes that it creates with its hist wrapper. Instead, it returns a NumPy array of subplots. As such, we're left with fewer options than we might like for further tweaking of the plot. Our use below of plt.text is a quick hack (arrived at by trial and error) that lets us label the overall figure (instead of the enclosing axes, as we'd prefer).
axes = temps2.hist(figsize=(16, 12))
plt.text(-20, -10, "Temperatures (F)", fontsize=16)
plt.text(-74, 77, "Counts", rotation="vertical", fontsize=16)
_ = plt.suptitle("Temperature Counts by Month, 1894-2013\nSaint Francis, KS, USA",
                 fontsize=20)
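Even though Pandas only hands back the array of subplots, we can still recover the enclosing figure from any one of them via `get_figure()`. Here is a minimal sketch using a small hypothetical frame (standing in for temps2):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import numpy as np
import pandas as pd

# Hypothetical frame standing in for temps2: four columns of random data.
df = pd.DataFrame(np.random.default_rng(0).normal(size=(100, 4)),
                  columns=list("ABCD"))

axes = df.hist(figsize=(8, 6))          # a 2x2 NumPy array of Axes, not a Figure
figure = axes.flat[0].get_figure()      # recover the enclosing Figure from any subplot
figure.suptitle("Histogram matrix")     # now we can label the figure directly
```

With a handle on the figure, figure-level calls like `suptitle` or `savefig` no longer require the `plt.text` coordinate guesswork.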
This provides a nice view of the number of occurrences of temperature ranges in each month over the course of the century. Now what we'd like to do is:

* look at the mean temperature for all months over the century
* but also show the constituent data that generated that mean
* and trace the max, mean, and min temperatures

Let's tackle that last one first. The min, max, and means are discrete values in our case, one for each month. What we'd like to do is see what a smooth curve through those points might look like (as a visual aid more than anything). SciPy provides just the thing: spline interpolation. This will give us a smooth curve for our discrete values:
from scipy.interpolate import UnivariateSpline

smooth_mean = UnivariateSpline(month_nums, list(monthly_means), s=0.5)
means_xs = np.linspace(0, 13, 2000)
means_ys = smooth_mean(means_xs)
smooth_maxs = UnivariateSpline(month_nums, list(temps2.max()), s=0)
maxs_xs = np.linspace(0, 13, 2000)
maxs_ys = smooth_maxs(maxs_xs)
smooth_mins = UnivariateSpline(month_nums, list(temps2.min()), s=0)
mins_xs = np.linspace(0, 13, 2000)
mins_ys = smooth_mins(mins_xs)
We'll use the raw data from the beginning of this section, since we'll be doing interpolation on our $x$ values (month numbers):
temps3 = data_raw[["Month", "Mean Temperature (F)"]]
Now we can plot our means for all months, a scatter plot (as lines, in this case) for each month superimposed over each mean, and finally our max/mean/min interpolations:
(figure, axes) = plt.subplots(figsize=(18, 10))
axes.bar(month_nums, monthly_means, width=0.96, align="center", alpha=0.6)
axes.scatter(temps3["Month"], temps3["Mean Temperature (F)"], s=2000,
             marker="_", alpha=0.6)
axes.plot(means_xs, means_ys, "b", linewidth=6, alpha=0.6)
axes.plot(maxs_xs, maxs_ys, "r", linewidth=6, alpha=0.2)
axes.plot(mins_xs, mins_ys, "y", linewidth=6, alpha=0.5)
axes.axis((0.5, 12.5, temps_degrees.min() - 5, temps_degrees.max() + 5))
axes.set_title("Mean Monthly Temperatures from 1894-2013\nSaint Francis, KS, USA",
               fontsize=20)
axes.set_xticks(month_nums)
axes.set_xticklabels(month_names)
_ = axes.set_ylabel("Temperature (F)", fontsize=16)
When we created our by-month pivot (the one assigned to the temps2 variable), we provided ourselves with the means to easily look at statistical data for each month. We'll print out the highlights below so we can look at the numbers in preparation for sanity-checking our visuals on the next plot:
temps2.max()
temps2.mean()
temps2.min()
We've seen those above (in various forms). We haven't seen the standard deviation for this data yet, though:
temps2.std()
Have a good look at those numbers; we're going to use them to make sure that our box plot results make sense in the next plot.

What is a box plot? The box plot was invented by the famous statistician John Tukey (the originator of many important concepts, he is often forgotten as the person who coined the term "bit"). Box plots concisely and visually convey the following "bits" (couldn't resist) of information:

* upper edge of the box: approximate distribution, 75th percentile
* line across the box: median
* lower edge of the box: approximate distribution, 25th percentile
* height of the box: fourth spread (interquartile range)
* upper line out of the box: greatest non-outlying value
* lower line out of the box: smallest non-outlying value
* dots above and below: outliers

Sometimes you will see box plots of different widths; the width indicates the relative size of the data sets. The box plot allows one to view data without any assumptions having been made about it; the basic statistics are there to view, in plain sight. Our next plot will overlay a box plot on our bar chart of means (and line scatter plot of values).
(figure, axes) = plt.subplots(figsize=(18, 10))
axes.bar(month_nums, monthly_means, width=0.96, align="center", alpha=0.6)
axes.scatter(temps3["Month"], temps3["Mean Temperature (F)"], s=2000,
             marker="_", alpha=0.6)
sns.boxplot(temps2, ax=axes)
axes.axis((0.5, 12.5, temps_degrees.min() - 5, temps_degrees.max() + 5))
axes.set_title("Mean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
               fontsize=20)
axes.set_xticks(month_nums)
axes.set_xticklabels(month_names)
_ = axes.set_ylabel("Temperature (F)", fontsize=16)
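The statistics a box plot draws can also be checked by hand. Here is a minimal sketch (with hypothetical numbers, not the weather data) of the quartiles, the fourth spread, and Tukey's 1.5 x IQR fences that determine which points are plotted as outliers:

```python
import numpy as np

# Hypothetical sample standing in for one month's mean temperatures.
sample = np.array([28.0, 30.5, 31.2, 32.0, 33.4, 34.1, 35.0, 36.2, 37.5, 52.0])

q1, median, q3 = np.percentile(sample, [25, 50, 75])
iqr = q3 - q1                    # "fourth spread" (height of the box)
lower_fence = q1 - 1.5 * iqr     # whiskers extend to the most extreme
upper_fence = q3 + 1.5 * iqr     # non-outlying values within these fences
outliers = sample[(sample < lower_fence) | (sample > upper_fence)]

print(q1, median, q3, iqr)
print(outliers)                  # 52.0 falls above the upper fence
```

Comparing numbers like these against the rendered boxes is a quick way to confirm that a plot is showing what you think it is.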
Now we can easily identify the spread, the outliers, the area that contains 50% of the distribution, etc. The violin plot, as previously mentioned, is a variation on the box plot; its shape indicates the probability distribution of the data in that particular set. We will configure it to show our data points as lines (the "stick" option), thus combining our use of the line-scatter plot above with the box plot. Let's see this same data as a violin plot:
sns.set(style="whitegrid")
(figure, axes) = plt.subplots(figsize=(18, 10))
sns.violinplot(temps2, bw=0.2, lw=1, inner="stick")
axes.set_title(("Violin Plots\nMean Monthly Temperatures, 1894-2013\n"
                "Saint Francis, KS, USA"), fontsize=20)
axes.set_xticks(month_nums)
axes.set_xticklabels(month_names)
_ = axes.set_ylabel("Temperature (F)", fontsize=16)
With the next plot, Andrews' curves, we reach the end of the section on temperature analysis. The application of Andrews' curves to this particular data set is a bit forced. It's a more useful analysis tool when applied to data sets of higher dimensionality, since the computed curves can reveal structure (grouping/clustering) where it might not otherwise be as evident. We're essentially looking at just two dimensions here:

* temperature
* month

As we have already seen above, there is not a lot of unexpected (or unexplained) structure in this data. A data set that included wind speed and air pressure might render much more interesting results in an Andrews' curves plot ...
months_cmap = sns.cubehelix_palette(8, start=-0.5, rot=0.75, as_cmap=True)
(figure, axes) = plt.subplots(figsize=(18, 10))
temps4 = data_raw[["Mean Temperature (F)", "Month"]]
axes.set_xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])
axes.set_xticklabels([r"$-{\pi}$", r"$-\frac{\pi}{2}$", r"$0$",
                      r"$\frac{\pi}{2}$", r"${\pi}$"])
axes.set_title("Andrews Curves for\nMean Monthly Temperatures, 1894-2013\nSaint Francis, KS, USA",
               fontsize=20)
axes.set_xlabel(r"Data points mapped to lines in the range $[-{\pi},{\pi}]$",
                fontsize=16)
axes.set_ylabel(r"$f_{x}(t)$", fontsize=16)
pd.tools.plotting.andrews_curves(
    temps4, class_column="Month", ax=axes, colormap=months_cmap)
axes.axis([-np.pi, np.pi] + [x * 1.025 for x in axes.axis()[2:]])
_ = axes.legend(labels=month_names, loc=(0, 0.67))
Andrews' curves are groups of lines where each line represents a point in the input data set. The line itself is the plot of a finite Fourier series, as defined below (taken from the paper linked above). Each data point $x = \left\{ x_1, x_2, \ldots, x_d \right\}$ defines a finite Fourier series:

\begin{align}
f_x(t) = \frac{x_1}{\sqrt 2} + x_2 \sin(t) + x_3 \cos(t) + x_4 \sin(2t) + x_5 \cos(2t) + \ldots
\end{align}

This function is then plotted for $-\pi < t < \pi$. Thus each data point may be viewed as a line between $-\pi$ and $\pi$. This formula can be thought of as the projection of the data point onto the vector:

\begin{align}
\left ( \frac{1}{\sqrt 2}, \sin(t), \cos(t), \sin(2t), \cos(2t), \ldots \right )
\end{align}

If we examine the rendered curves, we see the same patterns we identified in the cluster map plots:

* the temperatures of January and December are similar (thus the light and dark banding)
* likewise for the temperatures during the summer months

Notice that the curves preserve the distance between the high and low temperatures. This is another property of the curves. Others include:

* the mean is preserved
* linear relationships are preserved
* the variance is preserved

Things to keep in mind when using Andrews' curves in your projects:

* the order of the variables matters; changing that order will result in different curves
* the lower frequencies show up better; as such, put the variables you feel to be more important first

For example, if we did have a data set with atmospheric pressure and wind speed, we might have defined our Pandas DataFrame with the columns in this order:

```python
temps4 = data_raw[["Mean Temperature (F)", "Wind Speed (kn)",
                   "Pressure (Pa)", "Month"]]
```

This concludes the section on temperature analysis. Next we will look at precipitation. For the most part, the notes and comments are the same; as such, we will not repeat the text, but merely run through the examples without interruption or commentary.
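The Fourier-series mapping above can be sketched directly. This is a minimal, hypothetical implementation for a single data point (Pandas' own andrews_curves does the equivalent for every row of the frame):

```python
import numpy as np

def andrews_curve(point, ts):
    """Evaluate the finite Fourier series f_x(t) for one data point.

    point: 1-D array (x1, x2, ..., xd); ts: array of t values in [-pi, pi].
    """
    # First term: x1 / sqrt(2), constant in t.
    result = np.full_like(ts, point[0] / np.sqrt(2))
    for i, coeff in enumerate(point[1:], start=1):
        k = (i + 1) // 2  # harmonic: sin(t), cos(t), sin(2t), cos(2t), ...
        if i % 2 == 1:
            result = result + coeff * np.sin(k * ts)
        else:
            result = result + coeff * np.cos(k * ts)
    return result

# Hypothetical 3-D point: f(t) = 1/sqrt(2) + 2 sin(t) + 3 cos(t)
ts = np.linspace(-np.pi, np.pi, 5)
curve = andrews_curve(np.array([1.0, 2.0, 3.0]), ts)
```

Because each coordinate multiplies a fixed basis function, reordering the coordinates changes which harmonic each one drives; that is exactly why variable order matters in these plots.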
Analysis of Precipitation, 1894-2013
sns.set(style="darkgrid")
precips_colors = ["#f2d98f", "#f8ed39", "#a7cf38", "#7fc242",
                  "#4680c2", "#3a53a3", "#6e4a98"]
sns.palplot(precips_colors)
precips_cmap = mpl.colors.LinearSegmentedColormap.from_list("precip colors",
                                                            precips_colors)

(figure, axes) = plt.subplots(figsize=(17, 9))
axes.set_title(("Heat Map\nMean Monthly Precipitation, 1894-2013\n"
                "Saint Francis, KS, USA"), fontsize=20)
sns.heatmap(precips, cmap=precips_cmap, cbar_kws={"label": "Inches"})
figure.tight_layout()

figure = plt.figure(figsize=(18, 13))
grid_spec = plt.GridSpec(2, 2, width_ratios=[50, 1], height_ratios=[1, 3],
                         wspace=0.05, hspace=0.05)
hist_axes = figure.add_subplot(grid_spec[0])
cluster_axes = figure.add_subplot(grid_spec[2])
colorbar_axes = figure.add_subplot(grid_spec[3])
precips_sum = precips.sum(axis=0)
years_unique = data["Year"].unique()
hist_axes.bar(years_unique, precips_sum, 1, ec="w", lw=2, color="0.5", alpha=0.5)
hist_axes.set(xticks=[], ylabel="Total Yearly\nPrecip. (in)")
hist_axes.set_xlim([years_min, years_max])
hist_axes.set_title(
    "Heat Map with Histogram\nMean Monthly Precipitation, 1894-2013\nSaint Francis, KS, USA",
    fontsize=20)
sns.heatmap(precips, cmap=precips_cmap, ax=cluster_axes, cbar_ax=colorbar_axes,
            cbar_kws={"orientation": "vertical"})
_ = colorbar_axes.set(xlabel="Precipitation\n(in)")
For the book version:

```python
sns.set(font_scale=1.8)
figure = plt.figure(figsize=(18, 13))
grid_spec = plt.GridSpec(2, 2, width_ratios=[50, 1], height_ratios=[1, 3],
                         wspace=0.05, hspace=0.05)
hist_axes = figure.add_subplot(grid_spec[0])
cluster_axes = figure.add_subplot(grid_spec[2])
colorbar_axes = figure.add_subplot(grid_spec[3])
precips_sum = precips.sum(axis=0)
years_unique = data["Year"].unique()
hist_axes.bar(years_unique, precips_sum, 1, ec="w", lw=2, color="0.5", alpha=0.5)
hist_axes.set(xticks=[], ylabel="Total Yearly\nPrecip. (in)")
hist_axes.set_xlim([years_min, years_max])
hist_axes.set_title(
    "Heat Map with Histogram\nMean Monthly Precipitation, 1894-2013\nSaint Francis, KS, USA",
    fontsize=24)
xticks = precips.columns
keptticks = xticks[::int(len(xticks)/36)]
xticks = ['' for y in xticks]
xticks[::int(len(xticks)/36)] = keptticks
_ = sns.heatmap(precips, cmap=precips_cmap, ax=cluster_axes, linewidth=0,
                xticklabels=xticks, cbar_ax=colorbar_axes,
                cbar_kws={"label": "Precipitation\n(in)"})
```

Our histogram gives a nice view of the total yearly precipitation, and we notice immediately that 1923 is the year in this data set with the highest total. A quick Google search for "kansas rain 1923" lands us on a USGS page which discusses major floods along the Arkansas River:

<blockquote>
<strong>June 8-9, 1923</strong><br/><br/>
In June 1923, the entire drainage area between Hutchinson and Arkansas City received excessive rains. On June 8 and 9, Wichita reported 7.06 inches, Newton 5.75 inches, and Arkansas City 2.06 inches. Excessive precipitation fell over all of the Little Arkansas, Ninnescah, and Chikaskia River Basins as well as the Arkansas River Valley, and major flooding occurred on all of the affected streams. Wichita and Arkansas City were severely damaged. In Wichita, 6 square miles were inundated. At Arkansas City, two lives were lost, and property damage was estimated in the millions (Kansas Water Resources Board, 1960).
Flood stages on the Ninnescah were the highest known.
</blockquote>
```python
clustermap = sns.clustermap(
    precips, figsize=(19, 12),
    cbar_kws={"label": "Precipitation\n(in)"},
    cmap=precips_cmap)
_ = clustermap.ax_col_dendrogram.set_title(
    "Cluster Map\nMean Monthly Precipitation, 1894-2013\nSaint Francis, KS, USA",
    fontsize=20)
clustermap = sns.clustermap(
    precips, z_score=1, figsize=(19, 12),
    cbar_kws={"label": "Normalized\nPrecipitation\n(in)"})
_ = clustermap.ax_col_dendrogram.set_title(
    "Normalized Cluster Map\nMean Monthly Precipitation, 1894-2013\nSaint Francis, KS, USA",
    fontsize=20)

precips2 = data_raw.pivot("Year", "Month", "Precipitation (in)")
precips2.columns = [str(x).zfill(2) + " - " + calendar.month_name[x]
                    for x in precips2.columns]
monthly_means = precips2.mean()
precips2.head()

axes = pd.tools.plotting.hist_frame(precips2, figsize=(16, 12))
plt.text(-3.5, -20, "Precipitation (in)", fontsize=16)
plt.text(-9.75, 155, "Counts", rotation="vertical", fontsize=16)
_ = plt.suptitle("Precipitation Counts by Month, 1894-2013\nSaint Francis, KS, USA",
                 fontsize=20)

from scipy.interpolate import UnivariateSpline

smooth_mean = UnivariateSpline(month_nums, list(monthly_means), s=0.5)
means_xs = np.linspace(0, 13, 2000)
means_ys = smooth_mean(means_xs)
smooth_maxs = UnivariateSpline(month_nums, list(precips2.max()), s=1)
maxs_xs = np.linspace(-5, 14, 2000)
maxs_ys = smooth_maxs(maxs_xs)
smooth_mins = UnivariateSpline(month_nums, list(precips2.min()), s=0.25)
mins_xs = np.linspace(0, 13, 2000)
mins_ys = smooth_mins(mins_xs)

precips3 = data_raw[["Month", "Precipitation (in)"]]
(figure, axes) = plt.subplots(figsize=(18, 10))
axes.bar(month_nums, monthly_means, width=0.99, align="center", alpha=0.6)
axes.scatter(precips3["Month"], precips3["Precipitation (in)"],
             s=2000, marker="_", alpha=0.6)
axes.plot(means_xs, means_ys, "b", linewidth=6, alpha=0.6)
axes.plot(maxs_xs, maxs_ys, "r", linewidth=6, alpha=0.2)
axes.plot(mins_xs, mins_ys, "y", linewidth=6, alpha=0.5)
axes.axis((0.5, 12.5, precips_inches.min(), precips_inches.max() + 0.25))
axes.set_title("Mean Monthly Precipitation from 1894-2013\nSaint Francis, KS, USA",
               fontsize=20)
axes.set_xticks(month_nums)
axes.set_xticklabels(month_names)
_ = axes.set_ylabel("Precipitation (in)", fontsize=16)

precips2.max()
precips2.mean()
precips2.min()
precips2.std()

(figure, axes) = plt.subplots(figsize=(18, 10))
axes.bar(month_nums, monthly_means, width=0.99, align="center", alpha=0.6)
axes.scatter(precips3["Month"], precips3["Precipitation (in)"],
             s=2000, marker="_", alpha=0.6)
sns.boxplot(precips2, ax=axes)
axes.axis((0.5, 12.5, precips_inches.min(), precips_inches.max() + 0.25))
axes.set_title("Mean Monthly Precipitation from 1894-2013\nSaint Francis, KS, USA",
               fontsize=20)
axes.set_xticks(month_nums)
axes.set_xticklabels(month_names)
_ = axes.set_ylabel("Precipitation (in)", fontsize=16)

sns.set(style="whitegrid")
(figure, axes) = plt.subplots(figsize=(18, 10))
sns.violinplot(precips2, bw=0.2, lw=1, inner="stick")
axes.set_title(("Violin Plots\nMean Monthly Precipitation from 1894-2013\n"
                "Saint Francis, KS, USA"), fontsize=20)
axes.set_xticks(month_nums)
axes.set_xticklabels(month_names)
_ = axes.set_ylabel("Precipitation (in)", fontsize=16)

sns.set(style="darkgrid")
(figure, axes) = plt.subplots(figsize=(18, 10))
precips4 = data_raw[["Precipitation (in)", "Month"]]
axes.set_xlim([-np.pi, np.pi])
axes.set_xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])
axes.set_xticklabels([r"$-{\pi}$", r"$-\frac{\pi}{2}$", r"$0$",
                      r"$\frac{\pi}{2}$", r"${\pi}$"])
axes.set_title("Andrews Curves for\nMean Monthly Precipitation, 1894-2013\nSaint Francis, KS, USA",
               fontsize=20)
axes.set_xlabel(r"Data points mapped to lines in the range $[-{\pi},{\pi}]$",
                fontsize=16)
axes.set_ylabel(r"$f_{x}(t)$", fontsize=16)
axes = pd.tools.plotting.andrews_curves(
    precips4, class_column="Month", ax=axes,
    colormap=sns.cubehelix_palette(8, start=0.5, rot=-0.75, as_cmap=True))
axes.axis([-np.pi, np.pi] + [x * 1.025 for x in axes.axis()[2:]])
_ = axes.legend(labels=month_names, loc=(0, 0.67))
```
github/MasteringMatplotlib/mmpl-high-level.ipynb
Create an :class:`info <mne.Info>` object.
```python
# It is also possible to use info from another raw object.
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
```
0.19/_downloads/04c2d1e64afcdd4e5032afb2212a74e5/plot_objects_from_arrays.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Create a dummy :class:`mne.io.RawArray` object
```python
raw = mne.io.RawArray(data, info)

# Scaling of the figure.
# For actual EEG/MEG data different scaling factors should be used.
scalings = {'mag': 2, 'grad': 2}

raw.plot(n_channels=4, scalings=scalings, title='Data from arrays',
         show=True, block=True)

# It is also possible to auto-compute scalings
scalings = 'auto'  # Could also pass a dictionary with some value == 'auto'
raw.plot(n_channels=4, scalings=scalings, title='Auto-scaled Data from arrays',
         show=True, block=True)
```
EpochsArray
```python
event_id = 1  # This is used to identify the events.
# First column is for the sample number.
events = np.array([[200, 0, event_id],
                   [1200, 0, event_id],
                   [2000, 0, event_id]])  # List of three arbitrary events

# Here a data set of 700 ms epochs from 2 channels is
# created from sin and cos data.
# Any data in shape (n_epochs, n_channels, n_times) can be used.
epochs_data = np.array([[sin[:700], cos[:700]],
                        [sin[1000:1700], cos[1000:1700]],
                        [sin[1800:2500], cos[1800:2500]]])

ch_names = ['sin', 'cos']
ch_types = ['mag', 'mag']
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)

epochs = mne.EpochsArray(epochs_data, info=info, events=events,
                         event_id={'arbitrary': 1})

picks = mne.pick_types(info, meg=True, eeg=False, misc=False)
epochs.plot(picks=picks, scalings='auto', show=True, block=True)
```
EvokedArray
```python
nave = len(epochs_data)  # Number of averaged epochs
evoked_data = np.mean(epochs_data, axis=0)

evokeds = mne.EvokedArray(evoked_data, info=info, tmin=-0.2,
                          comment='Arbitrary', nave=nave)
evokeds.plot(picks=picks, show=True, units={'mag': '-'},
             titles={'mag': 'sin and cos averaged'}, time_unit='s')
```
Create epochs by windowing the raw data.
```python
# The events are spaced evenly every 1 second.
duration = 1.

# create a fixed size events array
# start=0 and stop=None by default
events = mne.make_fixed_length_events(raw, event_id, duration=duration)
print(events)

# for fixed size events no start time before and after event
tmin = 0.
tmax = 0.99  # inclusive tmax, 1 second epochs

# create :class:`Epochs <mne.Epochs>` object
epochs = mne.Epochs(raw, events=events, event_id=event_id, tmin=tmin,
                    tmax=tmax, baseline=None, verbose=True)
epochs.plot(scalings='auto', block=True)
```
Create overlapping epochs using :func:`mne.make_fixed_length_events` (50% overlap). This also roughly doubles the number of events compared to the previous event list.
```python
duration = 0.5
events = mne.make_fixed_length_events(raw, event_id, duration=duration)
print(events)
epochs = mne.Epochs(raw, events=events, tmin=tmin, tmax=tmax, baseline=None,
                    verbose=True)
epochs.plot(scalings='auto', block=True)
```
Extracting data from NEO file
```python
# The example here uses the ExampleIO object for creating fake data.
# For actual data and different file formats, consult the NEO documentation.
reader = neo.io.ExampleIO('fakedata.nof')
bl = reader.read(lazy=False)[0]

# Get data from first (and only) segment
seg = bl.segments[0]
title = seg.file_origin

ch_names = list()
data = list()
for ai, asig in enumerate(seg.analogsignals):
    # Since the data does not contain channel names, channel indices are used.
    ch_names.append('Neo %02d' % (ai + 1,))
    # We need the ravel() here because Neo < 0.5 gave 1D, Neo 0.5 gives
    # 2D (but still a single channel).
    data.append(asig.rescale('V').magnitude.ravel())

data = np.array(data, float)
sfreq = int(seg.analogsignals[0].sampling_rate.magnitude)

# By default, the channel types are assumed to be 'misc'.
info = mne.create_info(ch_names=ch_names, sfreq=sfreq)

raw = mne.io.RawArray(data, info)
raw.plot(n_channels=4, scalings={'misc': 1}, title='Data from NEO',
         show=True, block=True, clipping='clamp')
```
Creating a ScaleBar object

The only required parameter for creating a ScaleBar object is dx, which is the size of one pixel in real-world units. The value of this parameter depends on the units of your CRS.

Projected coordinate system (meters)

The easiest way to add a scale bar is to use a projected coordinate system with meters as units. Just set dx = 1:
```python
nybb = gpd.read_file(gpd.datasets.get_path('nybb'))
# Convert the dataset to a coordinate system which uses meters
nybb = nybb.to_crs(32619)

ax = nybb.plot()
ax.add_artist(ScaleBar(1))
```
doc/source/gallery/matplotlib_scalebar.ipynb
jorisvandenbossche/geopandas
bsd-3-clause
Geographic coordinate system (degrees)

With a geographic coordinate system with degrees as units, dx should be equal to the distance in meters between two points with the same latitude (Y coordinate) that are one full degree of longitude (X) apart. You can calculate this distance with an online calculator (e.g. the Great Circle calculator) or in geopandas.

First, we will create a GeoSeries with two points that have roughly the coordinates of NYC. They are located on the same latitude but one degree of longitude from each other. Their initial coordinates are specified in a geographic coordinate system (geographic WGS 84). They are then converted to a projected system for the calculation:
```python
from shapely.geometry.point import Point

points = gpd.GeoSeries([Point(-73.5, 40.5), Point(-74.5, 40.5)],
                       crs=4326)  # Geographic WGS 84 - degrees
points = points.to_crs(32619)  # Projected WGS 84 - meters
```
After the conversion, we can calculate the distance between the points. The result slightly differs from the Great Circle Calculator but the difference is insignificant (84,921 and 84,767 meters):
distance_meters = points[0].distance(points[1])
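As a sanity check, the same distance can be approximated with a spherical great-circle (haversine) formula using only the standard library. The small gap versus the projected-CRS and geodesic results comes from the spherical-Earth assumption; the Earth radius below is an assumed mean value, not part of the original example:

```python
import math

def haversine_m(lon1, lat1, lon2, lat2, radius=6_371_000.0):
    """Great-circle distance in meters on a spherical Earth (assumed mean radius)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

d = haversine_m(-73.5, 40.5, -74.5, 40.5)
print(round(d))  # roughly 84.5 km, in the same ballpark as the ~84,767 m geodesic value
```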
Finally, we are able to use a geographic coordinate system in our plot. We set the value of the dx parameter to the distance we just calculated:
```python
nybb = gpd.read_file(gpd.datasets.get_path('nybb'))
nybb = nybb.to_crs(4326)  # Using geographic WGS 84

ax = nybb.plot()
ax.add_artist(ScaleBar(distance_meters))
```
Using other units

The default unit for dx is m (meter). You can change this unit via the units and dimension parameters. Below is a list of some possible units for various values of dimension:

| dimension | units |
| --- | :---: |
| si-length | km, m, cm, um |
| imperial-length | in, ft, yd, mi |
| si-length-reciprocal | 1/m, 1/cm |
| angle | deg |

In the following example, we will leave the dataset in its initial CRS, which uses feet as units. The plot shows a scale of 2 leagues (approximately 11 kilometers):
```python
nybb = gpd.read_file(gpd.datasets.get_path('nybb'))

ax = nybb.plot()
ax.add_artist(ScaleBar(1, dimension="imperial-length", units="ft"))
```
Customization of the scale bar
```python
nybb = gpd.read_file(gpd.datasets.get_path('nybb')).to_crs(32619)
ax = nybb.plot()

# Position and layout
scale1 = ScaleBar(
    dx=1, label='Scale 1',
    location='upper left',  # in relation to the whole plot
    label_loc='left', scale_loc='bottom'  # in relation to the line
)

# Color
scale2 = ScaleBar(
    dx=1, label='Scale 2', location='center',
    color='#b32400', box_color='yellow',
    box_alpha=0.8  # Slightly transparent box
)

# Font and text formatting
scale3 = ScaleBar(
    dx=1, label='Scale 3',
    font_properties={'family': 'serif', 'size': 'large'},
    # For more information, see the cell below
    scale_formatter=lambda value, unit: f'> {value} {unit} <'
)

ax.add_artist(scale1)
ax.add_artist(scale2)
ax.add_artist(scale3)
```
Model specification

Here we set some specifications for the model: type, how it should be fitted, optimized and validated.
```python
model_type = 'rf'              # the classification algorithm
tune_model = False             # optimize hyperparameters
cross_val_method = 'temporal'  # cross-validation routine

cost_fp = 1000     # preferably in euros!
benefit_tp = 3000
# cost of a false positive, benefit of a true positive
class_weights = {0: cost_fp, 1: benefit_tp}
```
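To make the cost matrix concrete, the net value of a batch of alerts can be computed directly from these two numbers; the prediction counts below are invented for illustration:

```python
cost_fp = 1000     # cost of acting on a false alarm
benefit_tp = 3000  # value of catching a real event

def net_value(n_tp, n_fp):
    """Net value in euros of a batch of positive predictions."""
    return n_tp * benefit_tp - n_fp * cost_fp

print(net_value(n_tp=5, n_fp=10))  # 5*3000 - 10*1000 = 5000
```

With this asymmetry, a classifier can tolerate up to three false alarms per caught event before the alerts stop paying for themselves.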
notebooks/bdr-imbalanced-classification.ipynb
BigDataRepublic/bdr-analytics-py
apache-2.0
Cross-validation procedure

To validate whether the model makes sensible predictions, we need to perform cross-validation. The exact procedure for this is specified below. Random cross-validation (setting aside a random sample for testing) is fast, but temporal cross-validation (setting aside a time period for testing) gives the most realistic results due to its resemblance to real-world model usage.
```python
from sklearn.model_selection import StratifiedShuffleSplit, GridSearchCV, train_test_split

# source: https://github.com/BigDataRepublic/bdr-analytics-py
#! pip install -e git+ssh://git@github.com/BigDataRepublic/bdr-analytics.git#egg=bdranalytics-0.1
from bdranalytics.pipeline.encoders import WeightOfEvidenceEncoder
from bdranalytics.model_selection.growingwindow import IntervalGrowingWindow

from sklearn.metrics import average_precision_score, make_scorer, roc_auc_score

if cross_val_method == 'random':
    # split train data into stratified random folds
    cv_dev = StratifiedShuffleSplit(test_size=0.1, train_size=0.1,
                                    n_splits=5, random_state=1)
    cv_test = StratifiedShuffleSplit(test_size=0.33, n_splits=1, random_state=2)
elif cross_val_method == 'temporal':
    train_size = pd.Timedelta(days=365 * 4)

    # create a cross-validation routine for parameter tuning
    cv_dev = IntervalGrowingWindow(
        timestamps=timestamps,
        test_start_date=pd.datetime(year=2015, month=1, day=1),
        test_end_date=pd.datetime(year=2015, month=12, day=31),
        test_size=pd.Timedelta(days=30),
        train_size=train_size)

    # create a cross-validation routine for model evaluation
    cv_test = IntervalGrowingWindow(
        timestamps=timestamps,
        test_start_date=pd.datetime(year=2016, month=1, day=1),
        test_end_date=pd.datetime(year=2016, month=8, day=31),
        test_size=pd.Timedelta(days=2 * 30),
        train_size=train_size)

# number of parallel jobs for cross-validation
n_jobs = 1

# two functions for advanced performance evaluation metrics
def roc_auc(y_true, y_pred):
    return roc_auc_score(pd.get_dummies(y_true), y_pred)

roc_auc_scorer = make_scorer(roc_auc, needs_proba=True)

def pr_auc(y_true, y_pred):
    return average_precision_score(pd.get_dummies(y_true), y_pred,
                                   average="micro")

pr_auc_scorer = make_scorer(pr_auc, needs_proba=True)

from sklearn.preprocessing import StandardScaler, Imputer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.dummy import DummyClassifier
from xgboost import XGBClassifier

# convert data frame to bare X and y variables for the model pipeline
y_col = 'target'
X = df.copy().drop(y_col, axis=1)
y = np.array(df[y_col])
n_features = X.shape[1]

# define preprocessing steps
preproc_steps = [('woe', WeightOfEvidenceEncoder(cols=high_capacity)),
                 ('imputer', Imputer(missing_values='NaN',
                                     strategy='median', axis=0)),
                 ('standardizer', StandardScaler(with_mean=True, with_std=True))]

# specification of different model types and their defaults
model_steps_dict = {
    'lr': [('lr', LogisticRegression(C=0.001, penalty='l2', tol=0.01,
                                     class_weight=class_weights))],
    'rf': [('rf', RandomForestClassifier(n_estimators=400, max_features='auto',
                                         class_weight=class_weights))],
    'gbc': [('gbc', GradientBoostingClassifier(n_estimators=400, max_depth=3))],
    'xgb': [('xgb', XGBClassifier(scale_pos_weight=class_weights[1],
                                  n_estimators=100, max_depth=4))],
    'dummy': [('dummy', DummyClassifier(strategy='prior'))]
}

# specification of the different model hyperparameters and tuning space
model_params_grid = {
    'lr': {'lr__C': [1e-4, 1e-3, 1e-2, 1e-1]},
    'rf': {'rf__max_features': [3, n_features, np.sqrt(n_features)],
           'rf__n_estimators': [10, 100, 1000]},
    'gbc': {'gbc__n_estimators': [100, 200]},
    'xgb': {'xgb__max_depth': [3, 6, 9],
            'xgb__reg_alpha': [0, 5, 15],
            'xgb__reg_lambda': [0, 5, 15],
            'xgb__gamma': [0, 10, 50, 100]},
    'dummy': {}}

# store the model step
model_steps = model_steps_dict[model_type]

# combine everything in one pipeline
estimator = Pipeline(steps=(preproc_steps + model_steps))
print(estimator)
```
Model parameter tuning

If desired, we can optimize the model hyperparameters to get the best possible model.
```python
# procedure depends on cross-validation type
if cross_val_method == 'random':
    train_index = next(cv_test.split(X, y))[0]
    X_dev = X.iloc[train_index, :]
    y_dev = y[train_index]
elif cross_val_method == 'temporal':
    X_dev = X
    y_dev = y

# setting to include class weights in the gradient boosting model
if model_type == 'gbc':
    sample_weights = np.array([class_weights[label] for label in y_dev])
    fit_params = {'gbc__sample_weight': sample_weights}
else:
    fit_params = {}

# tune model with a parameter grid search if desired
if tune_model:
    grid_search = GridSearchCV(estimator, cv=cv_dev, n_jobs=n_jobs,
                               refit=False,
                               param_grid=model_params_grid[model_type],
                               scoring=pr_auc_scorer, fit_params=fit_params)
    grid_search.fit(X_dev, y_dev)

    # show grid search results
    display(pd.DataFrame(grid_search.cv_results_))

    # set best parameters for estimator
    estimator.set_params(**grid_search.best_params_)
```
Model validation

The final test on the holdout set.
```python
y_pred_proba = []  # initialize empty predictions array
y_true = []        # initialize empty ground-truth array

# loop over the test folds
for i_fold, (train_index, test_index) in enumerate(cv_test.split(X, y)):
    print("validation fold {:d}".format(i_fold))

    X_train = X.iloc[train_index, :]
    y_train = y[train_index]
    X_test = X.iloc[test_index, :]
    y_test = y[test_index]

    if model_type == 'gbc':
        sample_weights = [class_weights[label] for label in y_train]
        fit_params = {'gbc__sample_weight': sample_weights}
    else:
        fit_params = {}

    # fit the model
    estimator.fit(X_train, y_train, **fit_params)

    # probability outputs for class 1
    y_pred_proba.append([probs[1] for probs in estimator.predict_proba(X_test)])

    # store the true y labels for each fold
    y_true.append(np.array(y_test))

# postprocess the results
y_true = np.concatenate(y_true)
y_pred_proba = np.concatenate(y_pred_proba)
y_pred_bin = (y_pred_proba > 0.5) * 1.

# print some stats
n_samples_test = len(y_true)
n_pos_test = sum(y_true)
n_neg_test = n_samples_test - n_pos_test
print("events: {}".format(n_pos_test))
print("p_no_event: {}".format(n_neg_test / float(n_samples_test)))
print("test accuracy: {}".format((np.equal(y_pred_bin, y_true) * 1.).mean()))
```
Receiver-operator characteristics

The line is constructed by applying various thresholds to the model output.

- Y-axis: proportion of events correctly identified (hit rate)
- X-axis: proportion of false positives, which usually results in wasted resources

The dotted line is guessing (no model). A blue line above the dotted line means there is information in the features.
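The two axis quantities can be computed directly from the confusion counts at a single threshold; the counts below are invented for illustration:

```python
# Invented counts at one threshold: 80 events, 920 non-events
tp, fn = 60, 20   # events correctly identified / missed
fp, tn = 92, 828  # false alarms / correct rejections

tpr = tp / (tp + fn)  # hit rate (y-axis)
fpr = fp / (fp + tn)  # false-positive rate (x-axis)
print(tpr, fpr)  # 0.75 0.1
```

Sweeping the threshold from 1 down to 0 moves this (fpr, tpr) point from (0, 0) to (1, 1), tracing out the ROC curve.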
```python
from sklearn.metrics import roc_curve, auc

fpr, tpr, thresholds = roc_curve(y_true, y_pred_proba, pos_label=1)
roc_auc = auc(fpr, tpr)

# plot ROC curve
plt.figure()
plt.plot(fpr, tpr, label="ROC curve (area = {:.2f})".format(roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('Receiver-operating characteristic')
plt.legend(loc="lower right")
plt.show()
```
Costs and benefits

ROC optimization with a cost matrix. Critical information: the cost of an FP and the cost of an FN (i.e. the benefit of a TP). These values are also used to train the model via class_weights.
```python
def benefit(tpr, fpr):
    n_tp = tpr * n_pos_test  # number of true positives (benefits)
    n_fp = fpr * n_neg_test  # number of false positives (extra costs)
    fp_costs = n_fp * cost_fp
    tp_benefits = n_tp * benefit_tp
    return tp_benefits - fp_costs

benefits = np.zeros_like(thresholds)
for i, _ in enumerate(thresholds):
    benefits[i] = benefit(tpr[i], fpr[i])

i_max = np.argmax(benefits)
print("max benefits: {:.0f}k euros, tpr: {:.3f}, fpr: {:.3f}, threshold: {:.3f}"
      .format(benefits[i_max] / 1e3, tpr[i_max], fpr[i_max], thresholds[i_max]))

plt.plot(thresholds, benefits)
plt.xlim([0, 1])
plt.ylim([0, np.max(benefits)])
plt.show()

# recalibrate threshold based on benefits (optional, should still be around 0.5)
y_pred_bin = (y_pred_proba > thresholds[i_max]) * 1.
```
Precision-recall curve

Another way to look at it. Note that models which perform well in PR-space necessarily also dominate ROC-space. The opposite is not the case! The line is constructed by applying various thresholds to the model output.

- Y-axis: proportion of events among all positives (precision)
- X-axis: proportion of events correctly identified (recall, hit rate)
```python
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(y_true, y_pred_proba,
                                                       pos_label=1)
average_precision = average_precision_score(y_true, y_pred_proba,
                                            average="micro")
baseline = n_pos_test / float(n_samples_test)

# plot PR curve
plt.figure()
plt.plot(recall, precision,
         label="PR curve (area = {:.2f})".format(average_precision))
plt.plot([0, 1], [baseline, baseline], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-recall curve')
plt.legend(loc="lower right")
plt.show()

if model_type == 'dummy':
    print('DummyClassifier only has endpoints in PR-curve')
```
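The dashed baseline equals the positive-class prevalence: a classifier whose scores carry no information about the labels has, in expectation, precision equal to the share of positives at any threshold. A quick standard-library simulation illustrates this (the sample size, prevalence, and seed are arbitrary choices, not from the original data):

```python
import random

random.seed(0)
n = 20000
prevalence = 0.1

# labels with ~10% positives; scores drawn independently of the labels
y = [1 if random.random() < prevalence else 0 for _ in range(n)]
scores = [random.random() for _ in range(n)]

threshold = 0.5
predicted_pos = [label for label, s in zip(y, scores) if s > threshold]
precision = sum(predicted_pos) / len(predicted_pos)
print(precision)  # close to the 0.1 prevalence baseline
```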
Classification report
```python
from sklearn.metrics import classification_report

target_names = ['no event', 'event']
print(classification_report(y_true, y_pred_bin,
                            target_names=target_names, digits=3))
```
Confusion matrix
```python
from sklearn.metrics import confusion_matrix

confusion = pd.DataFrame(confusion_matrix(y_true, y_pred_bin),
                         index=target_names, columns=target_names)
sns.heatmap(confusion, annot=True, fmt="d")
plt.xlabel('predicted label')
plt.ylabel('true label')
```
Accuracies at different classifier thresholds
```python
from sklearn.metrics import accuracy_score

thresholds = np.arange(0, 100, 1) / 100.
acc = [accuracy_score(y_true, [prob > thresh for prob in y_pred_proba])
       for thresh in thresholds]
plt.hist(acc, bins=20);
```
Thresholds versus accuracy
plt.plot(thresholds, acc);
Feature importance

Note that these models are optimized to make accurate predictions, and not to make solid statistical inferences.
```python
feature_labels = [col for col in df.columns.values if y_col not in col]

if model_type == 'lr':
    weights = estimator._final_estimator.coef_[0]
elif model_type in ['rf', 'gbc']:
    weights = estimator._final_estimator.feature_importances_
elif model_type == 'dummy':
    print('DummyClassifier does not have weights')
    weights = np.zeros(len(feature_labels))

feature_weights = pd.Series(weights, index=feature_labels)
feature_weights.plot.barh(title='Feature importance', fontsize=8,
                          figsize=(12, 30), grid=True);

from sklearn.ensemble.partial_dependence import plot_partial_dependence

if model_type == 'gbc':
    preproc_pipe = Pipeline(steps=preproc_steps)
    X_transformed = preproc_pipe.fit_transform(X_dev, y_dev)
    plot_partial_dependence(estimator._final_estimator, X_transformed,
                            features=range(n_features),
                            feature_names=feature_labels,
                            figsize=(12, 180), n_cols=4,
                            percentiles=(0.2, 0.8));
else:
    print("No partial dependence plots available for this model type.")
```
As an example, consider the following multilayer perceptron with one hidden layer and a softmax output layer.
```python
ninputs = 1000
nfeatures = 100
noutputs = 10
nhiddens = 50

rng = np.random.RandomState(0)
x = T.dmatrix('x')

wh = th.shared(rng.normal(0, 1, (nfeatures, nhiddens)), borrow=True)
bh = th.shared(np.zeros(nhiddens), borrow=True)
h = T.nnet.sigmoid(T.dot(x, wh) + bh)

wy = th.shared(rng.normal(0, 1, (nhiddens, noutputs)))
by = th.shared(np.zeros(noutputs), borrow=True)
y = T.nnet.softmax(T.dot(h, wy) + by)

predict = th.function([x], y)
```
libs/Theano/doc/library/d3viz/index.ipynb
rizar/attention-lvcsr
mit
The function predict outputs the probability of 10 classes. You can visualize it with pydotprint as follows:
```python
from theano.printing import pydotprint
import os

if not os.path.exists('examples'):
    os.makedirs('examples')
pydotprint(predict, 'examples/mlp.png')

from IPython.display import Image
Image('examples/mlp.png', width='80%')
```
To visualize it interactively, import the d3viz function from the d3viz module, which can be called as before:
```python
import theano.d3viz as d3v

d3v.d3viz(predict, 'examples/mlp.html')
```
Open visualization!

When you open the output file mlp.html in your web browser, you will see an interactive visualization of the compute graph. You can move the whole graph or single nodes via drag and drop, and zoom via the mouse wheel. When you move the mouse cursor over a node, a window will pop up that displays detailed information about the node, such as its data type or definition in the source code. When you left-click on a node and select Edit, you can change the predefined node label. If you are dealing with a complex graph with many nodes, the default node layout may not be perfect. In this case, you can press the Release node button in the top-left corner to automatically arrange nodes. To reset nodes to their default position, press the Reset nodes button.

Profiling

Theano allows function profiling via the profile=True flag. After at least one function call, the compute time of each node can be printed in text form with debugprint. However, analyzing complex graphs in this way can be cumbersome. d3viz can visualize the same timing information graphically, and hence help to spot bottlenecks in the compute graph more easily! To begin with, we will redefine the predict function, this time using the profile=True flag. Afterwards, we capture the runtime on random data:
```python
predict_profiled = th.function([x], y, profile=True)

x_val = rng.normal(0, 1, (ninputs, nfeatures))
y_val = predict_profiled(x_val)

d3v.d3viz(predict_profiled, 'examples/mlp2.html')
```
Open visualization!

When you open the HTML file in your browser, you will find an additional Toggle profile colors button in the menu bar. By clicking on it, nodes will be colored by their compute time, where red corresponds to a high compute time. You can read out the exact timing information of a node by moving the cursor over it.

Different output formats

Internally, d3viz represents a compute graph in the Graphviz DOT language, using the pydot package, and defines a front-end based on the d3.js library to visualize it. However, any other Graphviz front-end can be used, which allows graphs to be exported to different formats.
```python
formatter = d3v.formatting.PyDotFormatter()
pydot_graph = formatter(predict_profiled)
pydot_graph.write_png('examples/mlp2.png');
pydot_graph.write_pdf('examples/mlp2.pdf');

Image('./examples/mlp2.png')
```
Here, we used the PyDotFormatter class to convert the compute graph into a pydot graph, and created a PNG and PDF file. You can find all output formats supported by Graphviz here.

OpFromGraph nodes

An OpFromGraph node defines a new operation, which can be called with different inputs at different places in the compute graph. Each OpFromGraph node defines a nested graph, which will be visualized accordingly by d3viz.
```python
x, y, z = T.scalars('xyz')
e = T.nnet.sigmoid((x + y + z)**2)
op = th.OpFromGraph([x, y, z], [e])
e2 = op(x, y, z) + op(z, y, x)
f = th.function([x, y, z], e2)
d3v.d3viz(f, 'examples/ofg.html')
```
Open visualization!

In this example, an operation with three inputs is defined, which is used to build a function that calls this operation twice, each time with different input arguments. In the d3viz visualization, you will find two OpFromGraph nodes, which correspond to the two OpFromGraph calls. When you double-click on one of them, the nested graph appears with the correct mapping of its input arguments. You can move it around by drag and drop in the shaded area, and close it again by double-click.

An OpFromGraph operation can be composed of further OpFromGraph operations, which will be visualized as nested graphs as you can see in the following example.
```python
x, y, z = T.scalars('xyz')
e = x * y
op = th.OpFromGraph([x, y], [e])
e2 = op(x, y) + z
op2 = th.OpFromGraph([x, y, z], [e2])
e3 = op2(x, y, z) + z
f = th.function([x, y, z], [e3])
d3v.d3viz(f, 'examples/ofg2.html')
```
Helper Functions

Determine data types
```python
def get_type_lists(frame, rejects=['Id', 'SalePrice']):
    """Creates lists of numeric and categorical variables.

    :param frame: The frame from which to determine types.
    :param rejects: Variable names not to be included in returned lists.
    :return: Tuple of lists for numeric and categorical variables in the frame.
    """

    nums, cats = [], []
    for key, val in frame.types.items():
        if key not in rejects:
            if val == 'enum':
                cats.append(key)
            else:
                nums.append(key)

    print('Numeric =', nums)
    print()
    print('Categorical =', cats)

    return nums, cats
```
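Since the function only reads the frame's types mapping, its selection logic can be exercised without an H2O cluster. Below is a condensed copy of that logic with a hypothetical stand-in frame; StubFrame and the column types are invented for illustration:

```python
def get_type_lists(frame, rejects=['Id', 'SalePrice']):
    """Condensed copy of the selection logic above, for illustration."""
    nums, cats = [], []
    for key, val in frame.types.items():
        if key not in rejects:
            (cats if val == 'enum' else nums).append(key)
    return nums, cats

class StubFrame:
    """Hypothetical stand-in exposing only the `types` mapping H2OFrame provides."""
    def __init__(self, types):
        self.types = types

frame = StubFrame({'Id': 'int', 'LotArea': 'int',
                   'Neighborhood': 'enum', 'SalePrice': 'real'})
nums, cats = get_type_lists(frame)
print(sorted(nums), sorted(cats))  # ['LotArea'] ['Neighborhood']
```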
09_matrix_factorization/src/py_part_9_kaggle_GLRM_example.ipynb
jphall663/GWU_data_mining
apache-2.0
Impute with GLRM
```python
def glrm_num_impute(role, frame):
    """Helper function for imputing numeric variables using GLRM.

    :param role: Role of frame to be imputed.
    :param frame: H2OFrame to be imputed.
    :return: H2OFrame of imputed numeric features.
    """

    # count missing values in training data numeric columns
    print(role + ' missing:\n', [cnt for cnt in frame.nacnt() if cnt != 0.0])

    # initialize GLRM
    matrix_complete_glrm = H2OGeneralizedLowRankEstimator(
        k=10,                     # create 10 features
        transform='STANDARDIZE',  # <- seems very important
        gamma_x=0.001,            # regularization on values in X
        gamma_y=0.05,             # regularization on values in Y
        impute_original=True)

    # train GLRM
    matrix_complete_glrm.train(training_frame=frame, x=original_nums)

    # plot iteration history to ensure convergence
    matrix_complete_glrm.score_history().plot(x='iterations', y='objective',
                                              title='GLRM Score History')

    # impute numeric inputs by multiplying the calculated xi and yj
    # for the missing values in train
    num_impute = matrix_complete_glrm.predict(frame)

    # count missing values in imputed set
    print('imputed ' + role + ' missing:\n',
          [cnt for cnt in num_impute.nacnt() if cnt != 0.0])

    return num_impute
```
Embed with GLRM
def glrm_cat_embed(frame):
    """Helper function for embedding categorical variables using GLRM.

    :param frame: H2OFrame to be embedded.
    :return: H2OFrame of embedded categorical features.
    """
    # initialize GLRM
    cat_embed_glrm = H2OGeneralizedLowRankEstimator(
        k=50,
        transform='STANDARDIZE',
        loss='Quadratic',
        regularization_x='Quadratic',
        regularization_y='L1',
        gamma_x=0.25,
        gamma_y=0.5)

    # train GLRM
    cat_embed_glrm.train(training_frame=frame, x=cats)

    # plot iteration history to ensure convergence
    cat_embed_glrm.score_history().plot(x='iterations', y='objective',
                                        title='GLRM Score History')

    # extract embedded features
    cat_embed = h2o.get_frame(
        cat_embed_glrm._model_json['output']['representation_name'])

    return cat_embed
Import data
train = h2o.import_file('../../03_regression/data/train.csv')
test = h2o.import_file('../../03_regression/data/test.csv')

# bug fix - from Keston
dummy_col = np.random.rand(test.shape[0])
test = test.cbind(h2o.H2OFrame(dummy_col))
cols = test.columns
cols[-1] = 'SalePrice'
test.columns = cols

print(train.shape)
print(test.shape)

original_nums, cats = get_type_lists(train)
Split into train and validation (before doing data prep!)
train, valid = train.split_frame([0.7], seed=12345)
print(train.shape)
print(valid.shape)
Impute numeric missing values using GLRM matrix completion
Training data
train_num_impute = glrm_num_impute('training', train)
train_num_impute.head()
Validation data
valid_num_impute = glrm_num_impute('validation', valid)
Test data
test_num_impute = glrm_num_impute('test', test)
Embed categorical vars using GLRM
Training data
train_cat_embed = glrm_cat_embed(train)
Validation data
valid_cat_embed = glrm_cat_embed(valid)
Test data
test_cat_embed = glrm_cat_embed(test)
Merge imputed and embedded frames
imputed_embedded_train = train[['Id', 'SalePrice']].cbind(train_num_impute).cbind(train_cat_embed)
imputed_embedded_valid = valid[['Id', 'SalePrice']].cbind(valid_num_impute).cbind(valid_cat_embed)
imputed_embedded_test = test[['Id', 'SalePrice']].cbind(test_num_impute).cbind(test_cat_embed)
Redefine numerics and explore
imputed_embedded_nums, cats = get_type_lists(imputed_embedded_train)

print('Imputed and encoded numeric training data:')
imputed_embedded_train.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric validation data:')
imputed_embedded_valid.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric test data:')
imputed_embedded_test.describe()
Train model on imputed, embedded features
h2o.show_progress()  # turn on progress bars

# Check log transform - looks good
%matplotlib inline
imputed_embedded_train['SalePrice'].log().as_data_frame().hist()

# Execute log transform
imputed_embedded_train['SalePrice'] = imputed_embedded_train['SalePrice'].log()
imputed_embedded_valid['SalePrice'] = imputed_embedded_valid['SalePrice'].log()
print(imputed_embedded_train[0:3, 'SalePrice'])
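Since the model will now be trained on log(SalePrice), predictions come back on the log scale and must be mapped through exp before comparing against raw prices. A minimal numpy sketch of the round trip (the prices here are made-up values, not from the data set):

```python
import numpy as np

prices = np.array([200000.0, 350000.0])  # hypothetical raw sale prices
log_prices = np.log(prices)              # what the model sees as the target
recovered = np.exp(log_prices)           # map model-scale values back to dollars

print(np.allclose(recovered, prices))  # True
```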
Train GLM on imputed, embedded inputs
alpha_opts = [0.01, 0.25, 0.5, 0.99]  # always keep some L2
hyper_parameters = {"alpha": alpha_opts}

# initialize grid search
grid = H2OGridSearch(
    H2OGeneralizedLinearEstimator(
        family="gaussian",
        lambda_search=True,
        seed=12345),
    hyper_params=hyper_parameters)

# train grid
grid.train(y='SalePrice',
           x=imputed_embedded_nums,
           training_frame=imputed_embedded_train,
           validation_frame=imputed_embedded_valid)

# show grid search results
print(grid.show())
best = grid.get_grid()[0]
print(best)

# print top frame values
yhat_frame = imputed_embedded_valid.cbind(best.predict(imputed_embedded_valid))
print(yhat_frame[0:10, ['SalePrice', 'predict']])

# plot sorted predictions
yhat_frame_df = yhat_frame[['SalePrice', 'predict']].as_data_frame()
yhat_frame_df.sort_values(by='predict', inplace=True)
yhat_frame_df.reset_index(inplace=True, drop=True)
_ = yhat_frame_df.plot(title='Ranked Predictions Plot')

# Shutdown H2O - this will erase all your unsaved frames and models in H2O
h2o.cluster().shutdown(prompt=True)
The second line of this code uses the keyword if to tell Python that we want to make a choice. If the test that follows the if statement is true, the body of the if (i.e., the set of lines indented underneath it) is executed, and “greater” is printed. If the test is false, the body of the else is executed instead, and “not greater” is printed. Only one or the other is ever executed before program execution continues and prints “done”. We can also chain several tests together using elif, which is short for “else if”. The following Python code uses elif to print the sign of a number.
num = -3

if num > 0:
    print(num, 'is positive')
elif num == 0:
    print(num, 'is zero')
else:
    print(num, 'is negative')
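For reference, the if/else example the explanation at the top of this section refers to (printing “greater”, “not greater”, then “done”) is not shown in this excerpt; it can be reconstructed as follows (the value 37 and the threshold 100 are illustrative choices):

```python
num = 37
if num > 100:
    print('greater')
else:
    print('not greater')
print('done')
# prints:
# not greater
# done
```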
teaching/stat_775_2021_fall/activities/activity-2021-09-15.ipynb
cgrudz/cgrudz.github.io
mit
Note that to test for equality we use a double equals sign == rather than a single equals sign =, which is used to assign values. Along with the > and == operators we have already used for comparing values in our conditionals, there are a few more options to know about:

>: greater than
<: less than
==: equal to
!=: does not equal
>=: greater than or equal to
<=: less than or equal to

We can also combine tests using and and or. and is only true if both parts are true:
if (1 > 0) and (-1 >= 0):
    print('both parts are true')
else:
    print('at least one part is false')
while or is true if at least one part is true:
if (1 < 0) or (1 >= 0):
    print('at least one test is true')
Activity 3: Checking our Data Now that we’ve seen how conditionals work, we can use them to check for the suspicious features we saw in our inflammation data. We are about to use functions provided by the numpy module again.
import numpy as np

data = np.loadtxt("./swc-python/data/inflammation-01.csv", delimiter=",")
From the first couple of plots, we saw that maximum daily inflammation exhibits a strange behavior and rises by one unit a day. Wouldn’t it be a good idea to detect such behavior and report it as suspicious? Let’s do that! However, instead of checking every single day of the study, let’s merely check whether the maximum inflammation at the beginning (day 0) and in the middle (day 20) of the study equals the corresponding day number.
max_inflammation_0 = np.max(data, axis=0)[0]
max_inflammation_20 = np.max(data, axis=0)[20]

if max_inflammation_0 == 0 and max_inflammation_20 == 20:
    print('Suspicious looking maxima!')
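The spot check above only looks at days 0 and 20. If we wanted to flag the full staircase pattern, checking that the per-day maxima match the day numbers over a whole range is only a small step further. A sketch with a synthetic array (the `suspicious_maxima` helper is a name made up here, not part of the lesson):

```python
import numpy as np

def suspicious_maxima(data, days=20):
    """True if the per-day maxima equal 0, 1, 2, ... over the first `days` days."""
    daily_max = np.max(data, axis=0)
    return np.array_equal(daily_max[:days], np.arange(days))

# synthetic frame whose maxima climb exactly one unit per day -> flagged
fake = np.tile(np.arange(40), (5, 1))
print(suspicious_maxima(fake))  # True
```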
We also saw a different problem in the third dataset; the minima per day were all zero (looks like a healthy person snuck into our study). We can also check for this with an elif condition:
if max_inflammation_0 == 0 and max_inflammation_20 == 20:
    print('Suspicious looking maxima!')
elif np.sum(np.min(data, axis=0)) == 0:
    print('Minima add up to zero!')
And if neither of these conditions are true, we can use else to give the all-clear:
if max_inflammation_0 == 0 and max_inflammation_20 == 20:
    print('Suspicious looking maxima!')
elif np.sum(np.min(data, axis=0)) == 0:
    print('Minima add up to zero!')
else:
    print('Seems OK!')
Tests (1/2)
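The naive pure-Python implementation exercised by these tests is not shown in this excerpt; mirroring the algorithm of the Cython version further down, it can be sketched as:

```python
def lempel_ziv_complexity(sequence):
    """Lempel-Ziv complexity: number of distinct phrases in the LZ76 parsing."""
    sub_strings = set()
    ind, inc = 0, 1
    while ind + inc <= len(sequence):
        sub_str = sequence[ind:ind + inc]
        if sub_str in sub_strings:
            # phrase already seen: extend the current window by one symbol
            inc += 1
        else:
            # new phrase: record it and start a fresh window after it
            sub_strings.add(sub_str)
            ind += inc
            inc = 1
    return len(sub_strings)

print(lempel_ziv_complexity('1001111011000010'))  # 8: 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010
```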
s = '1001111011000010'
lempel_ziv_complexity(s)  # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010

%timeit lempel_ziv_complexity(s)

lempel_ziv_complexity('1010101010101010')  # 1, 0, 10, 101, 01, 010, 1010
lempel_ziv_complexity('1001111011000010000010')  # 1, 0, 01, 11, 10, 110, 00, 010, 000
lempel_ziv_complexity('100111101100001000001010')  # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101

%timeit lempel_ziv_complexity('100111101100001000001010')

import random

def random_string(size, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    return "".join(random.choices(alphabet, k=size))

def random_binary_sequence(size):
    return random_string(size, alphabet="01")

random_string(100)
random_binary_sequence(100)

for (r, name) in zip(
        [random_string, random_binary_sequence],
        ["random strings in A..Z", "random binary sequences"]):
    print("\nFor {}...".format(name))
    for n in [10, 100, 1000, 10000, 100000]:
        print(" of sizes {}, Lempel-Ziv complexity runs in:".format(n))
        %timeit lempel_ziv_complexity(r(n))
Short_study_of_the_Lempel-Ziv_complexity.ipynb
Naereen/Lempel-Ziv_Complexity
mit
We can start to see that the time complexity of this function seems to grow linearly with the input size. Cython implementation As this blog post explains, we can easily try Cython in a notebook cell. See the Cython documentation for more information.
%load_ext cython

%%cython
import cython

ctypedef unsigned int DTYPE_t

@cython.boundscheck(False)  # turn off bounds-checking for entire function, quicker but less safe
def lempel_ziv_complexity_cython(str sequence not None):
    """Lempel-Ziv complexity for a string, in simple Cython code (C extension)."""
    cdef set sub_strings = set()
    cdef str sub_str = ""
    cdef DTYPE_t n = len(sequence)
    cdef DTYPE_t ind = 0
    cdef DTYPE_t inc = 1
    while True:
        if ind + inc > len(sequence):
            break
        sub_str = sequence[ind : ind + inc]
        if sub_str in sub_strings:
            inc += 1
        else:
            sub_strings.add(sub_str)
            ind += inc
            inc = 1
    return len(sub_strings)
Let's try it!
s = '1001111011000010'
lempel_ziv_complexity_cython(s)  # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010

%timeit lempel_ziv_complexity(s)
%timeit lempel_ziv_complexity_cython(s)

lempel_ziv_complexity_cython('1010101010101010')  # 1, 0, 10, 101, 01, 010, 1010
lempel_ziv_complexity_cython('1001111011000010000010')  # 1, 0, 01, 11, 10, 110, 00, 010, 000
lempel_ziv_complexity_cython('100111101100001000001010')  # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101
Now, what about speed?
for (r, name) in zip(
        [random_string, random_binary_sequence],
        ["random strings in A..Z", "random binary sequences"]):
    print("\nFor {}...".format(name))
    for n in [10, 100, 1000, 10000, 100000]:
        print(" of sizes {}, Lempel-Ziv complexity in Cython runs in:".format(n))
        %timeit lempel_ziv_complexity_cython(r(n))
$\implies$ Yay! It is indeed faster, but only about 2× faster... Numba implementation As this blog post explains, we can also try Numba in a notebook cell.
from numba import jit

@jit
def lempel_ziv_complexity_numba(sequence: str) -> int:
    """Lempel-Ziv complexity for a sequence, in Python code using numba.jit() for automatic speedup (hopefully)."""
    sub_strings = set()
    n: int = len(sequence)
    ind: int = 0
    inc: int = 1
    while True:
        if ind + inc > len(sequence):
            break
        sub_str: str = sequence[ind : ind + inc]
        if sub_str in sub_strings:
            inc += 1
        else:
            sub_strings.add(sub_str)
            ind += inc
            inc = 1
    return len(sub_strings)
Let's try it!
s = '1001111011000010'
lempel_ziv_complexity_numba(s)  # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010

%timeit lempel_ziv_complexity_numba(s)

lempel_ziv_complexity_numba('1010101010101010')  # 1, 0, 10, 101, 01, 010, 1010
lempel_ziv_complexity_numba('1001111011000010000010')  # 1, 0, 01, 11, 10, 110, 00, 010, 000
lempel_ziv_complexity_numba('100111101100001000001010')  # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101

%timeit lempel_ziv_complexity_numba('100111101100001000001010')
$\implies$ Well... it doesn't seem much faster than the naive Python code. We specified the signature when calling @numba.jit and used the more appropriate data structure (a string is probably smaller; a numpy array is probably faster), but even these tricks didn't help much. I tested, and without specifying the signature, strings are the fastest choice, compared to lists or numpy arrays. Note that a @jit-powered function is compiled at runtime on its first call, so the argument types of that first call determine the signature of the compiled function. Tests (2/2) To test more robustly, let us generate some (uniformly) random binary sequences.
from numpy.random import binomial

def bernoulli(p, size=1):
    """One or more samples from a Bernoulli of probability p."""
    return binomial(1, p, size)

bernoulli(0.5, 20)
That's probably not optimal, but we can generate a string with:
''.join(str(i) for i in bernoulli(0.5, 20))

def random_binary_sequence(n, p=0.5):
    """Uniform random binary sequence of size n, with rate of 0/1 being p."""
    return ''.join(str(i) for i in bernoulli(p, n))

random_binary_sequence(50)
random_binary_sequence(50, p=0.1)
random_binary_sequence(50, p=0.25)
random_binary_sequence(50, p=0.5)
random_binary_sequence(50, p=0.75)
random_binary_sequence(50, p=0.9)
The following function checks that the three implementations (naive, Cython-powered, Numba-powered) always give the same result.
def tests_3_functions(n, p=0.5, debug=True):
    s = random_binary_sequence(n, p=p)
    c1 = lempel_ziv_complexity(s)
    if debug:
        print("Sequence s = {} ==> complexity C = {}".format(s, c1))
    c2 = lempel_ziv_complexity_cython(s)
    c3 = lempel_ziv_complexity_numba(s)
    assert c1 == c2 == c3, "Error: the sequence {} gave different values of the Lempel-Ziv complexity from 3 functions ({}, {}, {})...".format(s, c1, c2, c3)
    return c1

tests_3_functions(5)
tests_3_functions(20)
tests_3_functions(50)
tests_3_functions(500)
tests_3_functions(5000)
Benchmarks On two example strings (binary sequences), we can compare our three implementations.
%timeit lempel_ziv_complexity('100111101100001000001010')
%timeit lempel_ziv_complexity_cython('100111101100001000001010')
%timeit lempel_ziv_complexity_numba('100111101100001000001010')

%timeit lempel_ziv_complexity('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
%timeit lempel_ziv_complexity_cython('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
%timeit lempel_ziv_complexity_numba('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
Let's check the time used by all three functions, for longer and longer sequences:
%timeit tests_3_functions(10, debug=False)
%timeit tests_3_functions(20, debug=False)
%timeit tests_3_functions(40, debug=False)
%timeit tests_3_functions(80, debug=False)
%timeit tests_3_functions(160, debug=False)
%timeit tests_3_functions(320, debug=False)

def test_cython(n):
    s = random_binary_sequence(n)
    c = lempel_ziv_complexity_cython(s)
    return c

%timeit test_cython(10)
%timeit test_cython(20)
%timeit test_cython(40)
%timeit test_cython(80)
%timeit test_cython(160)
%timeit test_cython(320)
%timeit test_cython(640)
%timeit test_cython(1280)
%timeit test_cython(2560)
%timeit test_cython(5120)
%timeit test_cython(10240)
%timeit test_cython(20480)
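Outside IPython, the same scaling sweep can be scripted with the stdlib timeit module. A self-contained sketch, using an inline naive implementation (`lz_complexity` is a stand-in name for the functions defined above):

```python
import random
import timeit

def lz_complexity(s):
    """Naive Lempel-Ziv phrase count (stand-in for lempel_ziv_complexity above)."""
    subs, ind, inc = set(), 0, 1
    while ind + inc <= len(s):
        w = s[ind:ind + inc]
        if w in subs:
            inc += 1
        else:
            subs.add(w)
            ind += inc
            inc = 1
    return len(subs)

# time the function on random binary sequences of doubling length
for n in [100, 200, 400, 800]:
    s = ''.join(random.choice('01') for _ in range(n))
    per_call = timeit.timeit(lambda: lz_complexity(s), number=50) / 50
    print('n={:4d}: {:8.1f} us per call'.format(n, per_call * 1e6))
```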