# R Bootcamp Part 4
## ggplot2
One of the sad facts about (most) economic research papers is that they don't always have the most aesthetically pleasing figures. For many data visualization applications, or for our own work, we might want more control over the visuals and to step them up a notch, making sure they convey useful information and have informative labels/captions. This is where the **ggplot2** package comes in.
We started off using **R's** built-in plot function, which let us produce scatterplots and construct histograms of all sorts of variables. However, it doesn't look the best and has some ugly naming conventions. **ggplot2** will give us complete control over our figure and allow us to get as in depth with it as we want.
**ggplot2** is part of the **tidyverse** package, so we'll need to load that in before we get started.
For today, let's load a few packages and read in a dataset on sleep quality and time allocation for 706 individuals. This dataset is saved to the section folder as `sleep75.dta`.
```
library(tidyverse)
library(haven)
sleepdata <- read_dta("sleep75.dta")
set.seed(12345) # sets the random seed so we get the same results later on from our random draws
```
# ggplot2 Basic Syntax
Let's start by getting familiar with the basic syntax of __ggplot2__. Its syntax is a little different from that of the functions we've used before, but once we figure it out it makes things nice and easy as we make more and more professional-looking figures. It also plays nicely with pipes!
To start a plot, we start with the function
## `ggplot()`
This function initializes an empty plot and passes data to other plots that we'll add on top. We can also use this function to define our dataset or specify what our x and y variables are.
Try starting a new plot by running `ggplot()` below:
Okay, so not the most impressive graphic yet.
We get a little bit more if we specify our data and our x/y variables. To specify the data, we add the argument `data = dataname` to the `ggplot()` function.
To specify which variable is on the x axis and which is on the y, we use the `aes(x = xvar, y = yvar)` argument (note that the variable names are not quoted). `aes()` is short for "aesthetics" and allows us to automatically pass these variables along as our x and y variables for the plots we add.
Let's say we're interested in using our `sleepdata` to see the relationship between age and hourly wage in our sample:
`ggplot(data = sleepdata, aes(x = age, y = hrwage))`
That is a start! Now we have labels on both of our axes corresponding to the assigned variable, and a grid corresponding to possible values of those variables. This makes sense, as we told **R** with `aes()` what our x variable and y variable are, and it then automatically sets up tick marks based on our data.
We will add geometries (sets of points, histograms, lines, etc.) by adding what we call "layers" using a `+` after our `ggplot()` function. Let's take a look at a few of the options.
## Scatterplots
Now let's add some points! If we want to get a sense of how age and hourly wage vary in our data, we can do that by just plotting the points. We add (x,y) points using the function
## `geom_point()`
Since we already declared our two variables, all we need to do is add `+ geom_point()` to our existing code:
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point()`
And we get a plot of all our points (note that we get a warning that some missing values are dropped).
### Labels
Sometimes we might want to change the labels from the variable names to a more descriptive label, and possibly add a title. We can do that! We do this by adding the `labs()` function to our plot.
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point() +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample",
caption = "Note: prepared using Wooldridge's sleep75 data.",
x = "Age (years)",
y = "Hourly Wage ($)")`
Let's take a look at what we added to `labs()`.
* First, `title` gives us the main title at the top.
* `subtitle` gives us another line in a smaller font below the main title.
* `caption` adds a note at the bottom of the plot.
* `x` and `y` correspond to our x and y labels, respectively.
* We can specify as many/few of these elements as we want - just make sure to separate them by commas.
### Changing Points
What if we want to change the color/shape/transparency of our points? We can do that by adding optional arguments to `geom_point()`.
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point(colour = "blue", alpha = 0.4, size = 0.8) +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample",
x = "Age (years)",
y = "Hourly Wage ($)")`
By adding `colour="blue"` we changed the color to blue. There are [a toooooon](http://sape.inf.usi.ch/sites/default/files/ggplot2-colour-names.png) of named colors that we could use instead (this gets really useful when we start splitting our data by group levels).
`alpha = 0.4` is changing the transparency of our points to 40%. `size = 0.8` is reducing the size of the points to 80% of their original size.
### Splitting by Groups
What if we wanted to change the color of our points according to whether the individual is male or not? We can do that by adding an `aes()` to `geom_point()`!
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point(aes(colour = factor(male))) +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample, by Gender",
x = "Age (years)",
y = "Hourly Wage ($)")`
By adding an aesthetic to our `geom_point()` we can set the color to be determined by the value of $male$. By default, the zero value (i.e. female) gets a red color while a value of 1 (i.e. male) gets a light green. We specify the variable as a `factor()` so that ggplot knows it is a discrete variable. What if we instead wanted to change color on a continuous scale?
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point(aes(colour = age)) +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample, by Age",
x = "Age (years)",
y = "Hourly Wage ($)")`
Here the color is now a function of our continuous variable $age$, taking increasingly lighter values for higher ages.
(note that __ggplot2__ lets you specify the color scale or color levels if you want, as well as nitpick the labels in the legend. In reality we can change anything that appears in the plot - we just have to choose the right option).
One thing to note is that we can make other options conditional on variables in our data frame too. What if we wanted the shape of our points to depend on union participation, the color to vary with gender, and the size of the points to depend on the total minutes worked per week? We can do all that - even if it might look real gross:
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point(aes(colour = factor(male), shape = factor(union), size = totwrk)) +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample, too many things going on",
x = "Age (years)",
y = "Hourly Wage ($)")`
While the above example is cluttered, it shows how we can take a simple scatterplot and use it to convey additional information in just one plot.
## Lines
We can add lines to our figure in a couple different ways. First, if we wanted to connect all the points in our data with a line, we would use the
## `geom_line()`
layer. For example, let's say we want to plot the mean hourly wage for each year of age in our data, this time dropping the NA values so ggplot doesn't give us a warning:
`sleepdata %>%
group_by(age) %>%
drop_na(age, hrwage) %>%
summarise(hrwage = mean(hrwage)) %>%
ggplot(aes(x=age, y = hrwage)) +
geom_line()`
We can also add points (average wage for each age) just by adding another layer!
`sleepdata %>%
group_by(age) %>%
drop_na(age, hrwage) %>%
summarise(hrwage = mean(hrwage)) %>%
ggplot(aes(x=age, y = hrwage)) +
geom_line()+
geom_point(colour = "gray40", alpha = 0.3)`
What if instead we wanted to add a vertical, horizontal, or sloped line to our plot? We use the layers `geom_vline()`, `geom_hline()`, and `geom_abline()` for that.
`geom_vline()` is simple and really only needs the `xintercept` argument. Similarly, `geom_hline()` takes the `yintercept` argument. `geom_abline()` requires us to specify both a `slope` and an `intercept`.
Let's say we wanted to add lines to the previous set of points showing the average age (`geom_vline`), median hourly wage (`geom_hline`), and a dashed 45° line (`geom_abline`) through the intersection of these two lines.
`mean_age <- mean(sleepdata$age, na.rm = TRUE)
med_wage <- median(sleepdata$hrwage, na.rm = TRUE)
sleepdata %>%
group_by(age) %>%
drop_na(age, hrwage) %>%
summarise(hrwage = mean(hrwage)) %>%
ggplot(aes(x=age, y = hrwage)) +
geom_point(colour = "gray40", alpha = 0.3) +
geom_vline(xintercept = mean_age, colour = "orchid4") +
geom_hline(yintercept = med_wage, colour = "steelblue") +
geom_abline(intercept = med_wage - mean_age, slope = 1, colour = "grey60", linetype = "dashed")`
## Histograms and Distributions
Sometimes we want to get information about one variable on its own. We can use __ggplot2__ to make histograms as well as predicted distributions!
We use the function
## `geom_histogram()`
to produce histograms. To get a basic histogram of $age$, we can run
`ggplot(data = sleepdata, aes(x = age)) +
geom_histogram()`
Notice that __ggplot2__ chooses a bin width by default, but we can change this by adding `binwidth`. We can also add labels as before and change color based on group membership.
Note that if we want to change color, we now have two different options. `colour` changes the outline color, while `fill` changes the interior color.
`ggplot(data = sleepdata, aes(x = age)) +
geom_histogram(binwidth = 10, colour = "seagreen4") +
labs(title = "Age Histogram",
x = "Age (years)",
y = "Count")`
`ggplot(data = sleepdata, aes(x = age)) +
geom_histogram(binwidth = 10, fill = "midnightblue") +
labs(title = "Age Histogram",
x = "Age (years)",
y = "Count")`
`ggplot(data = sleepdata, aes(x = age)) +
geom_histogram(binwidth = 10, colour = "grey60", fill = "darkolivegreen1") +
labs(title = "Age Histogram",
x = "Age (years)",
y = "Count")`
### Stacking/Multiple Histograms
Like with points/lines, we can create separate histograms on the same plot based on levels of another variable.
`ggplot(data = sleepdata, aes(x = age)) +
geom_histogram(aes(fill = factor(male)), position = "identity",
alpha = 0.3, binwidth = 10) +
labs(title = "Age Histogram",
subtitle = "By Gender",
x = "Age (years)",
y = "Count")`
Notice that now we've had to include the `position = "identity"` argument in `geom_histogram()` to tell R to draw each group's histogram at its actual counts. By default, R stacks the two groups on top of each other, which makes it easy to misread the frequencies for each group.
Other adjustments are
* `alpha = 0.3` sets the transparency so that both histograms are visible - this can be tweaked to your liking
* `bins = 10` can replace the `binwidth` argument to tell R the number of bins we want (automatically setting the width to produce them), rather than the width of each bin (with the number of bins adjusted accordingly).
This works well, but we might want to tweak the legend. The [**Legends**](#legends) section down below goes over this in more detail and for other plot types, but we can customize the legend with `scale_fill_manual()`:
`ggplot(data = sleepdata, aes(x = age)) +
geom_histogram(aes(fill = factor(male)), position = "identity",
alpha = 0.3, binwidth = 10) +
labs(title = "Age Histogram",
subtitle = "By Gender",
x = "Age (years)",
y = "Count") +
scale_fill_manual(name = "Gender",
labels = c("Female", "Male"),
values = c("navy", "red"))`
What if we wanted to get a sense of the estimated distribution of age rather than look at the histogram? We can do that with the
## `geom_density()`
function!
`ggplot(data = sleepdata, aes(x = age)) +
geom_density(fill = "gray60", colour= "navy") +
labs(title = "Age Density",
x = "Age (years)",
y = "Density")`
`ggplot(data = sleepdata, aes(x = age)) +
geom_density(aes(colour = factor(male))) +
labs(title = "Age Density",
x = "Age (years)",
y = "Density")`
## Plotting Regression Lines
One cool thing that we can do with __ggplot2__ is produce a simple linear regression line directly in our plot! We use the
## `geom_smooth(method = "lm")`
layer for that. Note that you don't have to run a regression before calling `ggplot()` - including a `geom_smooth()` layer will run the simple linear regression of $y$ on $x$ for you.
`wagereg <- lm(hrwage ~ age, data = sleepdata)
summary(wagereg)`
`ggplot(data = sleepdata, aes(x=age, y = hrwage)) +
geom_point()+
geom_smooth(method = "lm")`
Notice that by default it gives us the 95% confidence interval too! We can change the confidence interval using the `level` argument and the color of the CI band with `fill` and the line with `color`:
`ggplot(data = sleepdata, aes(x=age, y = hrwage)) +
geom_point()+
geom_smooth(method = "lm", color = "steelblue", fill = "navy", level = 0.99)`
# Themes
Before we dive into more individualized adjustments, let's take a look at some of the default themes that come with **ggplot2**. You can apply any of these themes just by adding it with a `+` in your plot. A few examples include:
* `theme_gray()`
* `theme_bw()`
* `theme_linedraw()`
* `theme_light()`
* `theme_dark()`
* `theme_minimal()`
* `theme_classic()`
* `theme_void()`
Try adding some of these themes to the following plot to see which you like.
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point() +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample",
caption = "Note: prepared using Wooldridge's sleep75 data.",
x = "Age (years)",
y = "Hourly Wage ($)")`
## ggthemes
Loading the package [ggthemes](https://mran.microsoft.com/snapshot/2017-02-04/web/packages/ggthemes/vignettes/ggthemes.html) gets us a bunch more theme options:
* theme_base: a theme resembling the default base graphics in R. See also theme_par.
* theme_calc: a theme based on LibreOffice Calc.
* theme_economist: a theme based on the plots in The Economist magazine.
* theme_excel: a theme replicating the classic ugly gray charts in Excel
* theme_few: theme from Stephen Few’s “Practical Rules for Using Color in Charts”.
* theme_fivethirtyeight: a theme based on the plots at fivethirtyeight.com.
* theme_gdocs: a theme based on Google Docs.
* theme_hc: a theme based on Highcharts JS.
* theme_par: a theme that uses the current values of the base graphics parameters in par.
* theme_pander: a theme to use with the pander package.
* theme_solarized: a theme using the solarized color palette.
* theme_stata: themes based on Stata graph schemes.
* theme_tufte: a minimal ink theme based on Tufte’s The Visual Display of Quantitative Information.
* theme_wsj: a theme based on the plots in The Wall Street Journal.
## Custom Themes
In addition to using a pre-built theme, you can create custom themes and alter [just about every setting imaginable](https://ggplot2.tidyverse.org/reference/theme.html)! While you can change individual settings in every plot, you can also define a custom theme (i.e. in your preamble) and then call it by name later on.
For example, here's one very slightly adapted from one of Ed Rubin's custom themes (who also has a [tremendous set of R notes available on his website](http://edrub.in/teaching.html)):
`custom_theme <- theme(
legend.position = "bottom", # place legend at the bottom
panel.background = element_rect(fill = NA), # change background color to white from grey
axis.ticks = element_line(color = "grey95", size = 0.3), # make axis tick marks the same color as grid lines
panel.grid.major = element_line(color = "grey95", size = 0.3), # change color of major grid lines (lines at displayed values)
panel.grid.minor = element_line(color = "grey95", size = 0.3), #change color of minor grid lines (lines between displayed values)
plot.caption = element_text(hjust = 0, face = "italic"), # left align bottom caption, make italic
legend.key = element_blank()) # no legend key`
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point() +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample",
caption = "Note: prepared using Wooldridge's sleep75 data.",
x = "Age (years)",
y = "Hourly Wage ($)")+
custom_theme`
# More Adjustments
## Changing Limits
To change the limits of a plot without otherwise modifying the axes, add `xlim(min, max)` and `ylim(min, max)`, where the arguments are the minimum and maximum values desired. Note that observations outside these limits are dropped (with a warning), not just cropped out of view.
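For example, zooming the earlier age/wage scatterplot in on a narrower window (the cutoff values below are arbitrary choices for illustration):

```r
ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
  geom_point() +
  xlim(25, 60) + # keep only ages 25-60
  ylim(0, 20)    # keep only wages up to $20/hr
```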
## Customizing Axes (Tick Marks, Limits, etc.)
To customize an axis, we'll use the `scale_x` and `scale_y` groups of functions. To customize a discrete axis, use `scale_x_discrete()` or `scale_y_discrete()`, and for a continuous variable use `scale_x_continuous()` or `scale_y_continuous()`. All four functions use the following main (but optional) arguments:
`(name, breaks, labels, limits)`
* **name** works the same as `labs` to add a label to the axis
* **breaks** controls where all the breaks are. Set to `NULL` to hide all ticks, or specify the breaks you want in a vector with `c()`.
* **labels** lets you replace the default tick mark labels with custom ones - again specify `NULL` or a custom vector
* **limits** lets you set the data range. This expects a vector with two elements, `c(min, max)` (numeric for a continuous axis, character for a discrete one).
#### Number of Breaks
Alternatively you can use the `n.breaks` argument in any of the above functions in place of `breaks` if all you want to do is increase the number of breaks and don't care where those breaks occur.
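Putting a few of these arguments together on the age/wage scatterplot (the break points and limits below are just illustrative choices):

```r
ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
  geom_point() +
  scale_x_continuous(name = "Age (years)",
                     breaks = c(30, 40, 50, 60), # tick marks at these ages only
                     limits = c(20, 70)) +
  scale_y_continuous(name = "Hourly Wage ($)",
                     n.breaks = 8)               # let ggplot pick ~8 breaks
```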
<a id = "legends"></a>
## Legends
### Removing Some or All Legends
If you want to remove the entire legend, use
`theme(legend.position="none")`
If you want to instead remove the legend for one aesthetic at a time, we can use the `guides()` function. For example, we can stop the fill colors from appearing in the legend with
`guides(fill = FALSE)`
Let's see an example: we can turn off the legend from our male/female scatterplot with:
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point(aes(colour = factor(male))) +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample, by Gender",
x = "Age (years)",
y = "Hourly Wage ($)") +
guides(colour = FALSE)`
### Customizing Legends
We can customize the legends for the respective element using the `scale_ELM_manual()` family of functions, where **ELM** is one of
* **colour**
* **fill**
* **size**
* **shape**
* **linetype**
* **alpha**
* **discrete**
There are a [ton of different options](https://ggplot2.tidyverse.org/reference/scale_manual.html) that we can customize for each scale.
For example, if we include both colour and shape elements in our male/female scatterplot, we can change the shapes with `scale_shape_manual()` and modify only that legend with:
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point(aes(colour = factor(male), shape = factor(male))) +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample, by Gender",
x = "Age (years)",
y = "Hourly Wage ($)") +
scale_shape_manual(name = "Gender", labels = c("Female", "Male"), values = c(23, 24))`
And we can combine this with turning off the legend for color too:
`ggplot(data = sleepdata, aes(x = age, y = hrwage)) +
geom_point(aes(colour = factor(male), shape = factor(male))) +
labs(title = "Relationship between Age and Hourly Wage",
subtitle = "Nonmissing Sample, by Gender",
x = "Age (years)",
y = "Hourly Wage ($)")+
scale_shape_manual(name = "Gender", labels = c("Female", "Male"), values = c(23, 24)) +
guides(colour = FALSE)`
# Summary
**ggplot2** is great for producing professional-looking figures and is capable of doing [a whole lot more](https://ggplot2.tidyverse.org/) than what's outlined here. You can use it to plot other types of geometric objects, make maps and analyze spatial data, create boxplots or heatmaps, and so much more!
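As one example of those other geometric objects, a boxplot layer works exactly the same way as the geoms above (grouping by the $male$ factor here, as in our earlier scatterplots):

```r
ggplot(data = sleepdata, aes(x = factor(male), y = hrwage)) +
  geom_boxplot() + # one box-and-whisker summary per group
  labs(x = "Male indicator", y = "Hourly Wage ($)")
```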
### Plug
Check out the (free) book by Garrett Grolemund and Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/), for a more in-depth dive into ggplot2 and the rest of the tidyverse.
# Homework 09
**Brief Honor Code**. Do the homework on your own. You may discuss ideas with your classmates, but DO NOT copy the solutions from someone else or the Internet. If stuck, discuss with TA.
**Note**: The expected figures are provided so you can check your solutions.
**1**. (20 points)
Find the gradient and Hessian for the following equation
$$
f(x, y) = 1 + 2x + 3y + 4x^2 + 2xy + y^2
$$
- Plot the contours of this function using `matplotlib` in the box $-10 \le x \le 10$ and $-10 \le y \le 10$ using a $100 \times 100$ grid.
- Then plot the gradient vectors using the `quiver` function on top of the contour plot using a $10 \times 10$ grid. Are the gradients orthogonal to the contours?
Hint: Use `numpy.meshgrid`, `matplotlib.contour` and `matplotlib.quiver`.

**2**. (30 points)
This exercise is about using Newton's method to find the cube roots of unity - find $z$ such that $z^3 = 1$. From the fundamental theorem of algebra, we know there must be exactly 3 complex roots since this is a degree 3 polynomial.
We start with Euler's equation
$$
e^{ix} = \cos x + i \sin x
$$
Raising $e^{ix}$ to the $n$th power where $n$ is an integer, we get from Euler's formula with $nx$ substituting for $x$
$$
(e^{ix})^n = e^{i(nx)} = \cos nx + i \sin nx
$$
Whenever $nx$ is an integer multiple of $2\pi$, we have
$$
\cos nx + i \sin nx = 1
$$
So
$$
e^{2\pi i \frac{k}{n}}
$$
is an $n$th root of 1 for $k = 0, 1, 2, \ldots, n-1$.
So the cube roots of unity are $1, e^{2\pi i/3}, e^{4\pi i/3}$.

While we can do this analytically, the idea is to use Newton's method to find these roots, and in the process, discover some rather perplexing behavior of Newton's method.
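To make the idea concrete, the core complex-valued Newton update $z \leftarrow z - f(z)/f'(z)$ can be sketched in a few lines (a generic illustration, not the full `newton()` function the problem asks for; the starting point and tolerance here are arbitrary):

```python
# Newton's method for f(z) = z^3 - 1 over the complex numbers
f = lambda z: z**3 - 1
fprime = lambda z: 3 * z**2

z = 1 + 1j  # an arbitrary complex starting point
for _ in range(100):
    step = f(z) / fprime(z)
    z = z - step
    if abs(step) < 1e-12:  # converged: the Newton step is tiny
        break

print(z)  # one of the three cube roots of 1
```

Which of the three roots the iteration lands on depends sensitively on the starting point, which is exactly the "perplexing behavior" the plotting functions below visualize.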
Newton's method for functions of complex variables - stability and basins of attraction. (30 points)
1. Write a function with the following function signature `newton(z, f, fprime, max_iter=100, tol=1e-6)` where
- `z` is a starting value (a complex number e.g. ` 3 + 4j`)
- `f` is a function of `z`
- `fprime` is the derivative of `f`
The function will run until either `max_iter` is reached or the absolute value of the Newton step is less than `tol`. In either case, the function should return the number of iterations taken and the final value of `z` as a tuple (`i`, `z`).
2. Define the function `f` and `fprime` that will result in Newton's method finding the cube roots of 1. Find 3 starting points that will give different roots, and print both the start and end points.
Write the following two plotting functions to see some (pretty) aspects of Newton's algorithm in the complex plane.
3. The first function `plot_newton_iters(f, fprime, n=200, extent=[-1,1,-1,1], cmap='hsv')` calculates and stores the number of iterations taken for convergence (or `max_iter`) for each point in a 2D array. The 2D array limits are given by `extent` - for example, when `extent = [-1,1,-1,1]` the corners of the plot are `-1-i`, `1-i`, `1+i`, `-1+i`. There are `n` grid points in both the real and imaginary axes. The argument `cmap` specifies the color map to use - the suggested defaults are fine. Finally plot the image using `plt.imshow` - make sure the axis ticks are correctly scaled. Make a plot for the cube roots of 1.

4. The second function `plot_newton_basins(f, fprime, n=200, extent=[-1,1,-1,1], cmap='jet')` has the same arguments, but this time the grid stores the identity of the root that the starting point converged to. Make a plot for the cube roots of 1 - since there are 3 roots, there should be only 3 colors in the plot.

**3**. (20 points)
Consider the following function on $\mathbb{R}^2$:
$$
f(x_1,x_2) = -x_1x_2e^{-\frac{(x_1^2+x_2^2)}{2}}
$$
- Find the minimum under the constraint
$$g(x) = x_1^2+x_2^2 \leq 10$$
and
$$h(x) = 2x_1 + 3x_2 = 5$$ using `scipy.optimize.minimize`.
- Plot the function contours using `matplotlib`, showing the constraints $g$ and $h$ and indicate the constrained minimum with an `X`.

**4** (30 points)
Find solutions to $x^3 + 4x^2 -3 = x$.
- Write a function to find brackets, assuming roots are always at least 1 unit apart and that the roots lie between -10 and 10
- For each bracket, find the enclosed root using
- a bisection method
- Newton-Raphson (no guarantee to stay within brackets)
- Use the end points of the bracket as starting points for the bisection methods and the midpoint for Newton-Raphson.
- Use the companion matrix and characteristic polynomial to find the solutions
- Plot the function and its roots (marked with a circle) in a window just large enough to contain all roots.
Use a tolerance of 1e-6.

```
import matplotlib
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
import matplotlib.pyplot as plt
import matplotlib.backends.backend_pdf as pdf
import numpy as np
import pandas as pd
# import scipy.stats
import colors as EL
%matplotlib inline
savename = './figures/Fig2.pdf'
df = pd.read_csv('./data/trajectories/cleaned_animal_analyses_acclimation.csv')
df = df[df['dead'] != 'yes']
assert len(df) == len(df["animal_ID"].unique()), "Animal IDs are not unique!"
display(df.columns)
# Split dataframe by species
aegypti = df[df['species'].str.upper() == 'AEDES AEGYPTI'].copy()
albopictus = df[df['species'].str.upper() == 'AEDES ALBOPICTUS'].copy()
arabiensis = df[df['species'].str.upper() == 'ANOPHELES ARABIENSIS'].copy()
coluzzii = df[df['species'].str.upper() == 'ANOPHELES GAMBIAE'].copy()
quinque = df[df['species'].str.upper() == 'CULEX QUINQUEFASCIATUS'].copy()
tarsalis = df[df['species'].str.upper() == 'CULEX TARSALIS'].copy()
display(tarsalis.sample(5))
print('aedes aegypti')
display(aegypti.describe())
print('aedes albopictus')
display(albopictus.describe())
print('culex q.')
display(quinque.describe())
print('culex tarsalis')
display(tarsalis.describe())
print('anopheles arabiensis')
display(arabiensis.describe())
print('anopheles coluzzii')
display(coluzzii.describe())
def plot_values(fig, ax, value, ylim):
    aegypti_v = aegypti[value].tolist()
    albopictus_v = albopictus[value].tolist()
    arabiensis_v = arabiensis[value].tolist()
    coluzzii_v = coluzzii[value].tolist()
    tarsalis_v = tarsalis[value].tolist()
    quinque_v = quinque[value].tolist()
    check = aegypti_v + albopictus_v + arabiensis_v + coluzzii_v + tarsalis_v + quinque_v
    print('Min:', min(check), 'Max:', max(check))
    aegypti_v = [x for x in aegypti_v if str(x) != 'nan']
    albopictus_v = [x for x in albopictus_v if str(x) != 'nan']
    arabiensis_v = [x for x in arabiensis_v if str(x) != 'nan']
    coluzzii_v = [x for x in coluzzii_v if str(x) != 'nan']
    tarsalis_v = [x for x in tarsalis_v if str(x) != 'nan']
    quinque_v = [x for x in quinque_v if str(x) != 'nan']
    if len(tarsalis_v) == 0:
        tarsalis_v = [0]
    data = [aegypti_v, albopictus_v, arabiensis_v, coluzzii_v, quinque_v, tarsalis_v]
    parts = ax.violinplot(data, showmeans=False, showmedians=False, showextrema=False)
    jitter = [np.random.normal(scale=0.1, size=len(aegypti_v)),
              np.random.normal(scale=0.1, size=len(albopictus_v)),
              np.random.normal(scale=0.1, size=len(arabiensis_v)),
              np.random.normal(scale=0.1, size=len(coluzzii_v)),
              np.random.normal(scale=0.1, size=len(quinque_v)),
              np.random.normal(scale=0.1, size=len(tarsalis_v))]
    colors = [EL.aegypti, EL.albopictus, EL.arabiensis, EL.coluzzii,
              EL.culex_q, EL.culex_t]
    markers = [EL.aegypti_marker, EL.albopictus_marker,
               EL.arabiensis_marker, EL.coluzzii_marker,
               EL.culex_q_marker, EL.culex_t_marker]
    markersizes = [EL.aegypti_markersize, EL.albopictus_markersize,
                   EL.arabiensis_markersize, EL.coluzzii_markersize,
                   EL.culex_q_markersize, EL.culex_t_markersize]
    for i, pc in enumerate(parts['bodies']):
        pc.set_facecolor(colors[i])
        pc.set_alpha(0.25)
    for i, (j, d) in enumerate(zip(jitter, data)):
        j = [x + i + 1 for x in j]
        ax.scatter(j, d, alpha=0.75, color=colors[i], zorder=5, s=4,
                   marker='o', clip_on=False, lw=0)
    ax.set_ylim(ylim[0], ylim[-1])
    ax.set_yticks(ylim)
    ax.set_yticklabels(ylim)
    # Hide the x axis ticks; species are distinguished by color instead
    ax.set_xlim(0.25, 6 + 0.75)
    ax.set_xticks([])
    # Add a black bar for the mean of each dataset
    ch = 0.15
    for i, datum in enumerate(data):
        ax.plot([i + 1 - ch, i + 1 + ch], [np.mean(datum)] * 2, color="k",
                alpha=0.75, lw=2, zorder=20, clip_on=False, solid_capstyle='round')
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    return fig, ax
fig = plt.figure(figsize = (6, 7.75))
ax1 = fig.add_subplot(5, 2, 1)
ax2 = fig.add_subplot(5, 2, 2)
ax3 = fig.add_subplot(5, 2, 3)
ax4 = fig.add_subplot(5, 2, 4)
ax5 = fig.add_subplot(5, 2, 5)
ax6 = fig.add_subplot(5, 2, 6)
ax7 = fig.add_subplot(5, 2, 7)
ax8 = fig.add_subplot(5, 2, 8)
ax9 = fig.add_subplot(5, 2, 9)
ax10 = fig.add_subplot(5, 2, 10)
fig, ax1 = plot_values(fig, ax1, 'time_move_p', ylim=np.arange(0, 101, 50))
fig, ax2 = plot_values(fig, ax2, 'total_dist_m', ylim=np.arange(0, 7, 2))
fig, ax3 = plot_values(fig, ax3, 'avg_speed_BL', ylim=np.arange(0, 4, 1))
fig, ax4 = plot_values(fig, ax4, 'max_speed_BL', ylim=np.arange(0, 25, 8))
fig, ax5 = plot_values(fig, ax5, 'mean_speed_first_BL', ylim=np.arange(0, 4, 1))
fig, ax6 = plot_values(fig, ax6, 'diff_speed_first_last_BL', ylim=np.arange(-3, 4, 3))
fig, ax7 = plot_values(fig, ax7, 'sharp_turns_p', ylim=np.arange(0, 21, 10))
fig, ax8 = plot_values(fig, ax8, 'max_still_sec', ylim=np.arange(0, 901, 300))
fig, ax9 = plot_values(fig, ax9, 'spirals', ylim=np.arange(0, 51, 25))
fig, ax10 = plot_values(fig, ax10, 'continuous', ylim=np.arange(0, 301, 100))
# Remove padding and margins from the figure and all its subplots
plt.margins(0,0)
plt.subplots_adjust(top=1, bottom=0, right=1, left=0, hspace=0.6, wspace=0.2)
plt.show()
# Save the Matplotlib figure as a PDF file:
pp = pdf.PdfPages(savename, keep_empty=False)
pp.savefig(fig)
pp.close()
```
```
# default_exp series.predict
```
# series.predict
> Methods for predicting MRI series types using a previously trained `RandomForestClassifier` trained with `scikit-learn`.
```
#export
from dicomtools.basics import *
#export
path = Path('dicomtools/models')
_model_path = path/'mr-brain-series-select-rf.skl'
_y_names = pd.Index([
't1',
't2',
'swi',
'dwi',
'other',
'flair',
'adc',
'loc',
'spgr',
'mra'
])
_features = ['MRAcquisitionType', 'AngioFlag', 'SliceThickness', 'RepetitionTime',
'EchoTime', 'EchoTrainLength', 'PixelSpacing', 'ContrastBolusAgent',
'InversionTime', 'DiffusionBValue', 'seq_E', 'seq_EP', 'seq_G',
'seq_GR', 'seq_I', 'seq_IR', 'seq_M', 'seq_P', 'seq_R', 'seq_S',
'seq_SE', 'var_E', 'var_K', 'var_MP', 'var_MTC', 'var_N', 'var_O',
'var_OSP', 'var_P', 'var_S', 'var_SK', 'var_SP', 'var_SS', 'var_TOF',
'opt_1', 'opt_2', 'opt_A', 'opt_ACC_GEMS', 'opt_B', 'opt_C', 'opt_D',
'opt_E', 'opt_EDR_GEMS', 'opt_EPI_GEMS', 'opt_F', 'opt_FAST_GEMS',
'opt_FC', 'opt_FC_FREQ_AX_GEMS', 'opt_FC_SLICE_AX_GEMS',
'opt_FILTERED_GEMS', 'opt_FR_GEMS', 'opt_FS', 'opt_FSA_GEMS',
'opt_FSI_GEMS', 'opt_FSL_GEMS', 'opt_FSP_GEMS', 'opt_FSS_GEMS', 'opt_G',
'opt_I', 'opt_IFLOW_GEMS', 'opt_IR', 'opt_IR_GEMS', 'opt_L', 'opt_M',
'opt_MP_GEMS', 'opt_MT', 'opt_MT_GEMS', 'opt_NPW', 'opt_P', 'opt_PFF',
'opt_PFP', 'opt_PROP_GEMS', 'opt_R', 'opt_RAMP_IS_GEMS', 'opt_S',
'opt_SAT1', 'opt_SAT2', 'opt_SAT_GEMS', 'opt_SEQ_GEMS', 'opt_SP',
'opt_T', 'opt_T2FLAIR_GEMS', 'opt_TRF_GEMS', 'opt_VASCTOF_GEMS',
'opt_VB_GEMS', 'opt_W', 'opt_X', 'opt__']
#export
def _get_preds(clf, df, features, y_names=_y_names):
    y_pred = clf.predict(df[features])
    y_prob = clf.predict_proba(df[features])
    preds = pd.Series(y_names.take(y_pred))
    probas = pd.Series([y_prob[i][pred] for i, pred in enumerate(y_pred)])
    return pd.DataFrame({'seq_pred': preds, 'pred_proba': probas})
#export
def predict_from_df(df, features=_features, thresh=0.8, model_path=_model_path, clf=None):
"Predict series from `df[features]` at confidence threshold `p >= thresh`"
if 'plane' not in df.columns:
df1 = preprocess(df)
labels = extract_labels(df1)
df1 = df1.join(labels[['plane', 'contrast', 'seq_label']])
else:
df1 = df.copy()
if not clf:
clf = load(model_path)
df1 = df1.join(_get_preds(clf, df1, features))
filt = df1['pred_proba'] < thresh
df1.loc[filt, 'seq_pred'] = 'unknown'  # .loc avoids pandas chained-assignment warnings
return df1
#export
def predict_from_folder(path, **kwargs):
"Read DICOMs into a `pandas.DataFrame` from `path` then predict series"
_, df = get_dicoms(path)
return predict_from_df(df, **kwargs)
```
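As a sketch of the confidence thresholding inside `predict_from_df` (the column names mirror the ones above, but the rows and probabilities here are made up for illustration):

```python
import pandas as pd

# Toy stand-in for the output of _get_preds: predicted label plus its probability.
df = pd.DataFrame({
    'seq_pred':   ['t1', 't2', 'flair'],
    'pred_proba': [0.95, 0.55, 0.81],
})

thresh = 0.8
filt = df['pred_proba'] < thresh
df.loc[filt, 'seq_pred'] = 'unknown'  # low-confidence predictions become 'unknown'
# df['seq_pred'] is now ['t1', 'unknown', 'flair']
```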
# Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training a modern ConvNet on a huge dataset like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor or as an initial network to fine-tune. In this notebook, you'll be using [VGGNet](https://arxiv.org/pdf/1409.1556.pdf) trained on the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has strong performance, coming in second in the 2014 ImageNet competition. The idea here is that we keep all the convolutional layers but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images and easily train a simple classifier on top of that. What we'll do is take the output of the first fully connected layer (4096 units), after ReLU thresholding. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from [the CS231n course notes](http://cs231n.github.io/transfer-learning/#tf).
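The fixed-feature-extractor idea can be sketched in a few lines of NumPy. Everything here is illustrative: the frozen random projection stands in for VGG's convolutional stack and fc6 layer, and the shapes are toy-sized except the 4096-dim codes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen "pretrained" extractor: a fixed random projection plus ReLU.
# (Hypothetical stand-in for VGG up through fc6; never updated during training.)
W_frozen = rng.normal(size=(4096, 32))      # 32-dim toy "images" -> 4096-dim codes

def extract_codes(x):
    return np.maximum(x @ W_frozen.T, 0)    # ReLU'd activations are the "codes"

images = rng.normal(size=(8, 32))           # a batch of toy inputs
codes = extract_codes(images)               # fixed features; only a small
                                            # classifier on top would be trained
```

Only the classifier that consumes `codes` has trainable parameters, which is why this approach is so much cheaper than training the full network.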
## Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. **If the 'tensorflow_vgg' directory is missing, clone the repo into the folder containing this notebook.** Then download the parameter file using the next cell.
```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
```
## Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the [TensorFlow inception tutorial](https://www.tensorflow.org/tutorials/image_retraining).
```
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
```
## ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the `vgg16` module from `tensorflow_vgg`. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from [the source code](https://github.com/machrisaa/tensorflow-vgg/blob/master/vgg16.py)):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (`self.relu6`). To build the network, we use
```
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
```
This creates the `vgg` object, then builds the graph with `vgg.build(input_)`. Then to get the values from the layer,
```
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
```
```
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
```
Below I'm running images through the VGG network in batches.
> **Exercise:** Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
```
# Set the batch size higher if it fits in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
```
## Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
```
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
```
### Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
> **Exercise:** From scikit-learn, use [LabelBinarizer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html) to create one-hot encoded vectors from the labels.
```
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
labels_vecs = lb.fit_transform(labels)
labels_lookup = lb.classes_
labels_lookup
```
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with test sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use [`StratifiedShuffleSplit`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) from scikit-learn.
You can create the splitter like so:
```
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
```
Then split the data with
```
splitter = ss.split(x, y)
```
`ss.split` returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use `next(splitter)` to get the indices. Be sure to read the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) and the [user guide](http://scikit-learn.org/stable/modules/cross_validation.html#random-permutations-cross-validation-a-k-a-shuffle-split).
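For example, on a small made-up dataset (the array contents are illustrative), you can pull the index arrays out of the generator with `next`:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 5 + [1] * 5)             # perfectly balanced toy labels

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(ss.split(X, y))  # generator yields (train, test) index arrays

# Stratification preserves the 50/50 class balance: with a test size of 2,
# the test set gets exactly one example from each class.
```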
> **Exercise:** Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
```
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
for train_idx, test_idx in ss.split(codes, labels_vecs):
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
half = len(test_idx) // 2
val_x, val_y = codes[test_idx[:half]], labels_vecs[test_idx[:half]]
test_x, test_y = codes[test_idx[half:]], labels_vecs[test_idx[half:]]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
```
If you did it right, you should see these sizes for the training sets:
```
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
```
### Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
> **Exercise:** With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
```
import math
num_inputs = codes.shape[1]
num_hidden = 4096
num_outputs = labels_vecs.shape[1]
inputs_ = tf.placeholder(tf.float32, shape=[None, num_inputs])
labels_ = tf.placeholder(tf.int64, shape=[None, num_outputs])
# TODO: Classifier layers and operations
#two layers in our classifier network: a hidden layer with num_hidden units, and an output layer
#with num_outputs (= 5) units
layer1_W = tf.Variable(tf.truncated_normal([num_inputs, num_hidden], stddev=math.sqrt(2.0/num_inputs)))
layer1_bias = tf.Variable(tf.zeros([num_hidden]))
layer1 = tf.nn.relu(tf.add(tf.matmul(inputs_, layer1_W), layer1_bias))
layer2_W = tf.Variable(tf.truncated_normal([num_hidden, num_outputs], stddev=math.sqrt(2.0/num_hidden)))
layer2_bias = tf.Variable(tf.zeros([num_outputs]))
layer2 = tf.add(tf.matmul(layer1, layer2_W), layer2_bias)  # feed the hidden layer's output, not the raw inputs
logits = tf.identity(layer2, name='logits')
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels_))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```
### Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
```
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
```
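A quick check of how the batching above behaves (the function is repeated here so the cell runs standalone):

```python
import numpy as np

def get_batches(x, y, n_batches=10):
    """Same logic as above: last batch absorbs the leftover items."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            X, Y = x[ii:], y[ii:]
        yield X, Y

x = np.arange(25)
y = np.arange(25)
sizes = [len(bx) for bx, _ in get_batches(x, y, n_batches=4)]
# 25 items over 4 batches: three batches of 6, and the last batch takes the
# remaining 7, so no data is thrown away.
```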
### Training
Here, we'll train the network.
> **Exercise:** So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the `get_batches` function I wrote before to get your batches like `for x, y in get_batches(train_x, train_y)`. Or write your own!
```
epochs = 100
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
sess.run(tf.global_variables_initializer())
val_cost, val_acc = sess.run((cost, accuracy), feed_dict={inputs_:val_x, labels_:val_y})
print("Starting, val. cost = {:f}, val. accuracy = {:f}".format(val_cost, val_acc))
for epoch in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_:x, labels_:y}
sess.run(optimizer, feed_dict=feed)
val_cost, val_acc = sess.run((cost, accuracy), feed_dict={inputs_:val_x, labels_:val_y})
print("After epoch {}/{}, val. cost = {:f}, val. accuracy = {:f}".format(epoch + 1, epochs, val_cost, val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
```
### Testing
Below you see the test accuracy. You can also see the predictions returned for images.
```
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
```
Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
```
test_img_path = 'flower_photos/daisy/3440366251_5b9bdf27c9_m.jpg'
test_img = plt.imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
```
```
# -*- coding: utf-8 -*-
"""
Builds an EV-GMM for EVC (eigenvoice conversion), then performs adaptation training.
Details: https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580aabf534c4dbb8bc.pdf
"""
from __future__ import division, print_function
import os
from shutil import rmtree
import argparse
import glob
import pickle
import time
import numpy as np
from numpy.linalg import norm
from sklearn.decomposition import PCA
from sklearn.mixture import GMM  # removed in scikit-learn 0.20; requires an older sklearn
from sklearn.preprocessing import StandardScaler
import scipy.signal
import scipy.sparse
import scipy.fftpack  # used by MFCC.mfcc / MFCC.imfcc
import scipy.interpolate  # used by MFCC.imfcc
%matplotlib inline
import matplotlib.pyplot as plt
import IPython
from IPython.display import Audio
import soundfile as sf
import wave
import pyworld as pw
import librosa.display
from dtw import dtw
import warnings
warnings.filterwarnings('ignore')
"""
Parameters
__Mixtured : number of GMM mixture components
__versions : experiment set
__convert_source : path to the source speaker's audio
__convert_target : path to the target speaker's audio
"""
# parameters
__Mixtured = 40
__versions = 'pre-stored0.1.1'
__convert_source = 'input/EJM10/V01/T01/TIMIT/000/*.wav'
__convert_target = 'adaptation/EJM04/V01/T01/ATR503/A/*.wav'
# settings
__same_path = './utterance/' + __versions + '/'
__output_path = __same_path + 'output/EJM04/' # EJF01, EJF07, EJM04, EJM05
Mixtured = __Mixtured
pre_stored_pickle = __same_path + __versions + '.pickle'
pre_stored_source_list = __same_path + 'pre-source/**/V01/T01/**/*.wav'
pre_stored_list = __same_path + "pre/**/V01/T01/**/*.wav"
#pre_stored_target_list = "" (not yet)
pre_stored_gmm_init_pickle = __same_path + __versions + '_init-gmm.pickle'
pre_stored_sv_npy = __same_path + __versions + '_sv.npy'
save_for_evgmm_covarXX = __output_path + __versions + '_covarXX.npy'
save_for_evgmm_covarYX = __output_path + __versions + '_covarYX.npy'
save_for_evgmm_fitted_source = __output_path + __versions + '_fitted_source.npy'
save_for_evgmm_fitted_target = __output_path + __versions + '_fitted_target.npy'
save_for_evgmm_weights = __output_path + __versions + '_weights.npy'
save_for_evgmm_source_means = __output_path + __versions + '_source_means.npy'
for_convert_source = __same_path + __convert_source
for_convert_target = __same_path + __convert_target
converted_voice_npy = __output_path + 'sp_converted_' + __versions
converted_voice_wav = __output_path + 'sp_converted_' + __versions
mfcc_save_fig_png = __output_path + 'mfcc3dim_' + __versions
f0_save_fig_png = __output_path + 'f0_converted' + __versions
converted_voice_with_f0_wav = __output_path + 'sp_f0_converted' + __versions
EPSILON = 1e-8
class MFCC:
"""
MFCC() : computes mel-frequency cepstral coefficients (MFCCs) and converts MFCCs back to spectra.
Dynamic features (delta) are only partially implemented.
ref : http://aidiary.hatenablog.com/entry/20120225/1330179868
"""
def __init__(self, frequency, nfft=1026, dimension=24, channels=24):
"""
Set the parameters:
nfft : number of FFT sample points
frequency : sampling frequency
dimension : number of MFCC dimensions
channels : number of mel filter bank channels (depends on dimension)
fscale : frequency scale axis
filterbank, fcenters : filter bank matrix and filter bank center frequencies
"""
self.nfft = nfft
self.frequency = frequency
self.dimension = dimension
self.channels = channels
self.fscale = np.fft.fftfreq(self.nfft, d = 1.0 / self.frequency)[: int(self.nfft / 2)]
self.filterbank, self.fcenters = self.melFilterBank()
def hz2mel(self, f):
"""
Convert frequency (Hz) to mel frequency.
"""
return 1127.01048 * np.log(f / 700.0 + 1.0)
def mel2hz(self, m):
"""
Convert mel frequency to frequency (Hz).
"""
return 700.0 * (np.exp(m / 1127.01048) - 1.0)
def melFilterBank(self):
"""
Generate the mel filter bank.
"""
fmax = self.frequency / 2
melmax = self.hz2mel(fmax)
nmax = int(self.nfft / 2)
df = self.frequency / self.nfft
dmel = melmax / (self.channels + 1)
melcenters = np.arange(1, self.channels + 1) * dmel
fcenters = self.mel2hz(melcenters)
indexcenter = np.round(fcenters / df)
indexstart = np.hstack(([0], indexcenter[0:self.channels - 1]))
indexstop = np.hstack((indexcenter[1:self.channels], [nmax]))
filterbank = np.zeros((self.channels, nmax))
for c in np.arange(0, self.channels):
increment = 1.0 / (indexcenter[c] - indexstart[c])
# np.int_ casts np.arange's float output ([0. 1. 2. ...]) to int
for i in np.int_(np.arange(indexstart[c], indexcenter[c])):
filterbank[c, i] = (i - indexstart[c]) * increment
decrement = 1.0 / (indexstop[c] - indexcenter[c])
# np.int_ casts np.arange's float output ([0. 1. 2. ...]) to int
for i in np.int_(np.arange(indexcenter[c], indexstop[c])):
filterbank[c, i] = 1.0 - ((i - indexcenter[c]) * decrement)
return filterbank, fcenters
def mfcc(self, spectrum):
"""
Compute MFCCs from a spectrum.
"""
mspec = []
mspec = np.log10(np.dot(spectrum, self.filterbank.T))
mspec = np.array(mspec)
return scipy.fftpack.realtransforms.dct(mspec, type=2, norm="ortho", axis=-1)
def delta(self, mfcc):
"""
Compute dynamic features from MFCCs.
Currently, frame t is taken as the mean slope between frames t-1 and t+1.
"""
mfcc = np.concatenate([
[mfcc[0]],
mfcc,
[mfcc[-1]]
]) # pad by repeating the first frame at the start and the last frame at the end
delta = None
for i in range(1, mfcc.shape[0] - 1):
slope = (mfcc[i+1] - mfcc[i-1]) / 2
if delta is None:
delta = slope
else:
delta = np.vstack([delta, slope])
return delta
def imfcc(self, mfcc, spectrogram):
"""
Recover a spectrum from MFCCs.
"""
im_sp = np.array([])
for i in range(mfcc.shape[0]):
mfcc_s = np.hstack([mfcc[i], [0] * (self.channels - self.dimension)])
mspectrum = scipy.fftpack.idct(mfcc_s, norm='ortho')
# splrep builds a spline interpolant
tck = scipy.interpolate.splrep(self.fcenters, np.power(10, mspectrum))
# splev evaluates the spline at the given coordinates
im_spectrogram = scipy.interpolate.splev(self.fscale, tck)
im_sp = np.concatenate((im_sp, im_spectrogram), axis=0)
return im_sp.reshape(spectrogram.shape)
def trim_zeros_frames(x, eps=1e-7):
"""
Remove silent frames.
"""
T, D = x.shape
s = np.sum(np.abs(x), axis=1)
s[s < eps] = 0.  # honor the eps argument instead of a hard-coded threshold
return x[s > eps]
def analyse_by_world_with_harverst(x, fs):
"""
Use the WORLD vocoder to extract the fundamental frequency F0, spectral envelope, and aperiodicity.
F0 is estimated more accurately with the Harvest method (refined with StoneMask).
"""
# 4 Harvest with F0 refinement (using Stonemask)
frame_period = 5
_f0_h, t_h = pw.harvest(x, fs, frame_period=frame_period)
f0_h = pw.stonemask(x, _f0_h, t_h, fs)
sp_h = pw.cheaptrick(x, f0_h, t_h, fs)
ap_h = pw.d4c(x, f0_h, t_h, fs)
return f0_h, sp_h, ap_h
def wavread(file):
"""
Extract the audio track and sampling frequency from a wav file.
"""
wf = wave.open(file, "r")
fs = wf.getframerate()
x = wf.readframes(wf.getnframes())
x = np.frombuffer(x, dtype= "int16") / 32768.0
wf.close()
return x, float(fs)
def preEmphasis(signal, p=0.97):
"""
High-frequency pre-emphasis filter for MFCC extraction.
Passing a waveform through it emphasizes the high-frequency components.
"""
return scipy.signal.lfilter([1.0, -p], 1, signal)
def alignment(source, target, path):
"""
Perform time alignment.
Adjust the target audio so its length matches the source audio.
"""
# here we align to 814 frames (i.e. align to the target)
# p_p = 0 if source.shape[0] > target.shape[0] else 1
#shapes = source.shape if source.shape[0] > target.shape[0] else target.shape
shapes = source.shape
align = np.array([])
for (i, p) in enumerate(path[0]):
if i != 0:
if j != p:
temp = np.array(target[path[1][i]])
align = np.concatenate((align, temp), axis=0)
else:
temp = np.array(target[path[1][i]])
align = np.concatenate((align, temp), axis=0)
j = p
return align.reshape(shapes)
"""
Build the parallel training data for pre-stored training.
This is slow, so an existing learn-data.pickle is reused when available;
otherwise the data is rebuilt from scratch.
"""
timer_start = time.time()
if os.path.exists(pre_stored_pickle):
print("exist, ", pre_stored_pickle)
with open(pre_stored_pickle, mode='rb') as f:
total_data = pickle.load(f)
print("open, ", pre_stored_pickle)
print("Load pre-stored time = ", time.time() - timer_start , "[sec]")
else:
source_mfcc = []
#source_data_sets = []
for name in sorted(glob.iglob(pre_stored_source_list, recursive=True)):
print(name)
x, fs = sf.read(name)
f0, sp, ap = analyse_by_world_with_harverst(x, fs)
mfcc = MFCC(fs)
source_mfcc_temp = mfcc.mfcc(sp)
#source_data = np.hstack([source_mfcc_temp, mfcc.delta(source_mfcc_temp)]) # static & dynamic featuers
source_mfcc.append(source_mfcc_temp)
#source_data_sets.append(source_data)
total_data = []
i = 0
_s_len = len(source_mfcc)
for name in sorted(glob.iglob(pre_stored_list, recursive=True)):
print(name, len(total_data))
x, fs = sf.read(name)
f0, sp, ap = analyse_by_world_with_harverst(x, fs)
mfcc = MFCC(fs)
target_mfcc = mfcc.mfcc(sp)
dist, cost, acc, path = dtw(source_mfcc[i%_s_len], target_mfcc, dist=lambda x, y: norm(x - y, ord=1))
#print('Normalized distance between the two sounds:' + str(dist))
#print("target_mfcc = {0}".format(target_mfcc.shape))
aligned = alignment(source_mfcc[i%_s_len], target_mfcc, path)
#target_data_sets = np.hstack([aligned, mfcc.delta(aligned)]) # static & dynamic features
#learn_data = np.hstack((source_data_sets[i], target_data_sets))
learn_data = np.hstack([source_mfcc[i%_s_len], aligned])
total_data.append(learn_data)
i += 1
with open(pre_stored_pickle, 'wb') as output:
pickle.dump(total_data, output)
print("Make, ", pre_stored_pickle)
print("Make pre-stored time = ", time.time() - timer_start , "[sec]")
"""
Estimate lambda from all pre-stored output speakers.
Lambda is updated during adaptation training.
"""
S = len(total_data)
D = int(total_data[0].shape[1] / 2)
print("total_data[0].shape = ", total_data[0].shape)
print("S = ", S)
print("D = ", D)
timer_start = time.time()
if os.path.exists(pre_stored_gmm_init_pickle):
print("exist, ", pre_stored_gmm_init_pickle)
with open(pre_stored_gmm_init_pickle, mode='rb') as f:
initial_gmm = pickle.load(f)
print("open, ", pre_stored_gmm_init_pickle)
print("Load initial_gmm time = ", time.time() - timer_start , "[sec]")
else:
initial_gmm = GMM(n_components = Mixtured, covariance_type = 'full')
initial_gmm.fit(np.vstack(total_data))
with open(pre_stored_gmm_init_pickle, 'wb') as output:
pickle.dump(initial_gmm, output)
print("Make, ", initial_gmm)
print("Make initial_gmm time = ", time.time() - timer_start , "[sec]")
weights = initial_gmm.weights_
source_means = initial_gmm.means_[:, :D]
target_means = initial_gmm.means_[:, D:]
covarXX = initial_gmm.covars_[:, :D, :D]
covarXY = initial_gmm.covars_[:, :D, D:]
covarYX = initial_gmm.covars_[:, D:, :D]
covarYY = initial_gmm.covars_[:, D:, D:]
fitted_source = source_means
fitted_target = target_means
"""
SV is the GMM supervector: a mean vector estimated for each output speaker in the pre-stored training data.
(The GMM training itself may need a closer look.)
"""
timer_start = time.time()
if os.path.exists(pre_stored_sv_npy):
print("exist, ", pre_stored_sv_npy)
sv = np.load(pre_stored_sv_npy)
print("open, ", pre_stored_sv_npy)
print("Load pre_stored_sv time = ", time.time() - timer_start , "[sec]")
else:
sv = []
for i in range(S):
gmm = GMM(n_components = Mixtured, params = 'm', init_params = '', covariance_type = 'full')
gmm.weights_ = initial_gmm.weights_
gmm.means_ = initial_gmm.means_
gmm.covars_ = initial_gmm.covars_
gmm.fit(total_data[i])
sv.append(gmm.means_)
sv = np.array(sv)
np.save(pre_stored_sv_npy, sv)
print("Make pre_stored_sv time = ", time.time() - timer_start , "[sec]")
"""
Run principal component analysis (PCA) on each pre-stored output speaker's GMM mean vectors.
Build eigenvectors and biasvectors from the eigenvalues and eigenvectors obtained by PCA.
"""
timer_start = time.time()
#source_pca
source_n_component, source_n_features = sv[:, :, :D].reshape(S, Mixtured*D).shape
# standardize (zero mean, unit variance)
source_stdsc = StandardScaler()
# compute the covariance matrix
source_X_std = source_stdsc.fit_transform(sv[:, :, :D].reshape(S, Mixtured*D))
# run PCA
source_cov = source_X_std.T @ source_X_std / (source_n_component - 1)
source_W, source_V_pca = np.linalg.eig(source_cov)
print(source_W.shape)
print(source_V_pca.shape)
# project the data onto the principal component space
source_X_pca = source_X_std @ source_V_pca
print(source_X_pca.shape)
#target_pca
target_n_component, target_n_features = sv[:, :, D:].reshape(S, Mixtured*D).shape
# standardize (zero mean, unit variance)
target_stdsc = StandardScaler()
# compute the covariance matrix
target_X_std = target_stdsc.fit_transform(sv[:, :, D:].reshape(S, Mixtured*D))
# run PCA
target_cov = target_X_std.T @ target_X_std / (target_n_component - 1)
target_W, target_V_pca = np.linalg.eig(target_cov)
print(target_W.shape)
print(target_V_pca.shape)
# project the data onto the principal component space
target_X_pca = target_X_std @ target_V_pca
print(target_X_pca.shape)
eigenvectors = source_X_pca.reshape((Mixtured, D, S)), target_X_pca.reshape((Mixtured, D, S))
source_bias = np.mean(sv[:, :, :D], axis=0)
target_bias = np.mean(sv[:, :, D:], axis=0)
biasvectors = source_bias.reshape((Mixtured, D)), target_bias.reshape((Mixtured, D))
print("Do PCA time = ", time.time() - timer_start , "[sec]")
"""
Load the source and target audio used for voice conversion.
"""
timer_start = time.time()
source_mfcc_for_convert = []
source_sp_for_convert = []
source_f0_for_convert = []
source_ap_for_convert = []
fs_source = None
for name in sorted(glob.iglob(for_convert_source, recursive=True)):
print("source = ", name)
x_source, fs_source = sf.read(name)
f0_source, sp_source, ap_source = analyse_by_world_with_harverst(x_source, fs_source)
mfcc_source = MFCC(fs_source)
#mfcc_s_tmp = mfcc_s.mfcc(sp)
#source_mfcc_for_convert = np.hstack([mfcc_s_tmp, mfcc_s.delta(mfcc_s_tmp)])
source_mfcc_for_convert.append(mfcc_source.mfcc(sp_source))
source_sp_for_convert.append(sp_source)
source_f0_for_convert.append(f0_source)
source_ap_for_convert.append(ap_source)
target_mfcc_for_fit = []
target_f0_for_fit = []
target_ap_for_fit = []
for name in sorted(glob.iglob(for_convert_target, recursive=True)):
print("target = ", name)
x_target, fs_target = sf.read(name)
f0_target, sp_target, ap_target = analyse_by_world_with_harverst(x_target, fs_target)
mfcc_target = MFCC(fs_target)
#mfcc_target_tmp = mfcc_target.mfcc(sp_target)
#target_mfcc_for_fit = np.hstack([mfcc_t_tmp, mfcc_t.delta(mfcc_t_tmp)])
target_mfcc_for_fit.append(mfcc_target.mfcc(sp_target))
target_f0_for_fit.append(f0_target)
target_ap_for_fit.append(ap_target)
# convert everything to numpy arrays
source_data_mfcc = np.array(source_mfcc_for_convert)
source_data_sp = np.array(source_sp_for_convert)
source_data_f0 = np.array(source_f0_for_convert)
source_data_ap = np.array(source_ap_for_convert)
target_mfcc = np.array(target_mfcc_for_fit)
target_f0 = np.array(target_f0_for_fit)
target_ap = np.array(target_ap_for_fit)
print("Load Input and Target Voice time = ", time.time() - timer_start , "[sec]")
"""
Perform adaptation training,
i.e. construct the target speaker's space from the pre-stored output speakers.
Collecting fitted_target per number of adaptation utterances is not implemented yet.
"""
timer_start = time.time()
epoch=1000
py = GMM(n_components = Mixtured, covariance_type = 'full')
py.weights_ = weights
py.means_ = target_means
py.covars_ = covarYY
fitted_target = None
for i in range(len(target_mfcc)):
print("adaptation = ", i+1, "/", len(target_mfcc))
target = target_mfcc[i]
for x in range(epoch):
print("epoch = ", x)
predict = py.predict_proba(np.atleast_2d(target))
y = np.sum([predict[:, i: i + 1] * (target - biasvectors[1][i])
for i in range(Mixtured)], axis = 1)
gamma = np.sum(predict, axis = 0)
left = np.sum([gamma[i] * np.dot(eigenvectors[1][i].T,
np.linalg.solve(py.covars_, eigenvectors[1])[i])
for i in range(Mixtured)], axis=0)
right = np.sum([np.dot(eigenvectors[1][i].T,
np.linalg.solve(py.covars_, y)[i])
for i in range(Mixtured)], axis = 0)
weight = np.linalg.solve(left, right)
fitted_target = np.dot(eigenvectors[1], weight) + biasvectors[1]
py.means_ = fitted_target
print("Adaptation training time = ", time.time() - timer_start , "[sec]")
"""
Save everything needed for conversion.
"""
np.save(save_for_evgmm_covarXX, covarXX)
np.save(save_for_evgmm_covarYX, covarYX)
np.save(save_for_evgmm_fitted_source, fitted_source)
np.save(save_for_evgmm_fitted_target, fitted_target)
np.save(save_for_evgmm_weights, weights)
np.save(save_for_evgmm_source_means, source_means)
```
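As a quick sanity check of the mel-scale conversions used in the `MFCC` class above, here is a standalone sketch with the same constants; `mel2hz` should exactly invert `hz2mel`:

```python
import numpy as np

def hz2mel(f):
    # same constants as MFCC.hz2mel above
    return 1127.01048 * np.log(f / 700.0 + 1.0)

def mel2hz(m):
    # same constants as MFCC.mel2hz above
    return 700.0 * (np.exp(m / 1127.01048) - 1.0)

f = np.array([440.0, 1000.0, 8000.0])
roundtrip = mel2hz(hz2mel(f))
# mel2hz is the exact inverse of hz2mel, so the round trip recovers f
```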
```
import geopandas as gpd
import os, sys, time
import pandas as pd
import numpy as np
from osgeo import ogr
from rtree import index
from shapely import speedups
import networkx as nx
import shapely.ops
from shapely.geometry import LineString, MultiLineString, MultiPoint, Point
from geopy.distance import vincenty  # removed in geopy 2.x; use geodesic there
from boltons.iterutils import pairwise
import matplotlib.pyplot as plt
from shapely.wkt import loads,dumps
data_path = r'C:\Users\charl\Documents\GOST\NetClean'
def load_osm_data(data_path,country):
osm_path = os.path.join(data_path,'{}.osm.pbf'.format(country))
driver=ogr.GetDriverByName('OSM')
return driver.Open(osm_path)
def fetch_roads(data_path, country):
data = load_osm_data(data_path,country)
sql_lyr = data.ExecuteSQL("SELECT osm_id,highway FROM lines WHERE highway IS NOT NULL")
roads=[]
for feature in sql_lyr:
if feature.GetField('highway') is not None:
osm_id = feature.GetField('osm_id')
shapely_geo = loads(feature.geometry().ExportToWkt())
if shapely_geo is None:
continue
highway=feature.GetField('highway')
roads.append([osm_id,highway,shapely_geo])
if len(roads) > 0:
road_gdf = gpd.GeoDataFrame(roads,columns=['osm_id','infra_type','geometry'],crs={'init': 'epsg:4326'})
if 'residential' in road_gdf.infra_type.unique():
print('residential included')
else:
print('residential excluded')
return road_gdf
else:
print('No roads in {}'.format(country))
def line_length(line, ellipsoid='WGS-84'):
"""Length of a line in meters, given in geographic coordinates
Adapted from https://gis.stackexchange.com/questions/4022/looking-for-a-pythonic-way-to-calculate-the-length-of-a-wkt-linestring#answer-115285
Arguments:
line {Shapely LineString} -- a shapely LineString object with WGS-84 coordinates
ellipsoid {String} -- string name of an ellipsoid that `geopy` understands (see
http://geopy.readthedocs.io/en/latest/#module-geopy.distance)
Returns:
Length of line in kilometers (vincenty segment distances are summed in .kilometers)
"""
if line.geometryType() == 'MultiLineString':
return sum(line_length(segment) for segment in line)
return sum(
vincenty(tuple(reversed(a)), tuple(reversed(b)), ellipsoid=ellipsoid).kilometers
for a, b in pairwise(line.coords)
)
def get_all_intersections(shape_input):
# =============================================================================
# # Initialize Rtree
# =============================================================================
idx_inters = index.Index()
# =============================================================================
# # Load data
# =============================================================================
all_data = dict(zip(list(shape_input.osm_id),list(shape_input.geometry)))
idx_osm = shape_input.sindex
# =============================================================================
# # Find all the intersecting lines to prepare for cutting
# =============================================================================
count = 0
inters_done = {}
new_lines = []
for key1, line in all_data.items():
infra_line = shape_input.at[shape_input.index[shape_input['osm_id']==key1].tolist()[0],'infra_type']
intersections = shape_input.iloc[list(idx_osm.intersection(line.bounds))]
intersections = dict(zip(list(intersections.osm_id),list(intersections.geometry)))
# Remove line1
if key1 in intersections: intersections.pop(key1)
# Find intersecting lines
for key2,line2 in intersections.items():
# Check that this intersection has not been recorded already
if (key1, key2) in inters_done or (key2, key1) in inters_done:
continue
# Record that this intersection was saved
inters_done[(key1, key2)] = True
# Get intersection
if line.intersects(line2):
# Get intersection
inter = line.intersection(line2)
# Save intersecting point
if "Point" == inter.type:
idx_inters.insert(0, inter.bounds, inter)
count += 1
elif "MultiPoint" == inter.type:
for pt in inter:
idx_inters.insert(0, pt.bounds, pt)
count += 1
## =============================================================================
## # cut lines where necessary and save all new linestrings to a list
## =============================================================================
hits = [n.object for n in idx_inters.intersection(line.bounds, objects=True)]
if len(hits) != 0:
# try:
out = shapely.ops.split(line, MultiPoint(hits))
new_lines.append([{'geometry': LineString(x), 'osm_id':key1,'infra_type':infra_line} for x in out.geoms])
# except:
# new_lines.append([{'geometry': line, 'osm_id':key1,
# infra_type:infra_line}])
else:
new_lines.append([{'geometry': line, 'osm_id':key1,
'infra_type':infra_line}])
# Create one big list and treat all the cut lines as unique lines
flat_list = []
all_data = {}
#item for sublist in new_lines for item in sublist
i = 1
for sublist in new_lines:
if sublist is not None:
for item in sublist:
item['id'] = i
flat_list.append(item)
i += 1
all_data[i] = item
# =============================================================================
# # Transform into geodataframe and add coordinate system
# =============================================================================
full_gpd = gpd.GeoDataFrame(flat_list,geometry ='geometry')
full_gpd['country'] = country
full_gpd.crs = {'init' :'epsg:4326'}
return full_gpd
def get_nodes(x):
return list(x.geometry.coords)[0],list(x.geometry.coords)[-1]
%%time
destfolder = r'C:\Users\charl\Documents\GOST\NetClean\processed'
country = 'YEM'
roads_raw = fetch_roads(data_path,country)
accepted_road_types = ['primary',
'primary_link',
'motorway',
'motorway_link',
'secondary',
'secondary_link',
'tertiary',
'tertiary_link',
'trunk',
'trunk_link',
'residential',
'unclassified',
'road',
'track',
'service',
'services'
]
roads_raw = roads_raw.loc[roads_raw.infra_type.isin(accepted_road_types)]
roads = get_all_intersections(roads_raw)
roads['key'] = ['edge_'+str(x+1) for x in range(len(roads))]
nodes = gpd.GeoDataFrame(roads.apply(lambda x: get_nodes(x),axis=1).apply(pd.Series))
nodes.columns = ['u','v']
roads['length'] = roads.geometry.apply(lambda x : line_length(x))
#G = ox.gdfs_to_graph(all_nodes,roads)
roads.rename(columns={'geometry':'Wkt'}, inplace=True)
roads = pd.concat([roads,nodes],axis=1)
roads.to_csv(os.path.join(destfolder, '%s_combo.csv' % country))
roads.infra_type.value_counts()
```
# Autoencoder
A CNN-based autoencoder.
Steps:
1. Build an autoencoder
2. Cluster the learned codes
## Load dataset
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
import tensorflow as tf
from tensorflow.keras import layers
from autoencoder_utils import show_samples, show_loss, show_mse, show_reconstructed_signals, show_reconstruction_errors
from keras_utils import ModelSaveCallback
from orientation_indipendend_transformation import orientation_independent_transformation
random.seed(42)
np.random.seed(42)
def load_dataset():
data = pd.read_csv("./datasets/our2/dataset_50_2.5.csv", header=None, names=range(750))
labels = pd.read_csv("./datasets/our2/dataset_labels_50_2.5.csv", header=None, names=["user", "model", "label"])
return data, labels
def print_stats(ds: pd.DataFrame):
print("Shape", ds.shape)
print("Columns", ds.columns)
X_df_reference, y_df_reference = load_dataset()
print_stats(X_df_reference)
print_stats(y_df_reference)
```
## Preprocessing
```
def get_label(x):
return x[2]
def restructure(x):
return x.reshape(-1, 6, 125)
def normalize(x):
min_val = np.min(x)
max_val = np.max(x)
x = (x - min_val) / (max_val - min_val)
return x
```
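The zero-one (min-max) scaling performed by `normalize` maps the minimum of a signal to 0 and the maximum to 1. A framework-free sanity check (a toy sketch, not part of the original notebook):

```python
def min_max_scale(values):
    # Zero-one (min-max) normalization: (x - min) / (max - min)
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_scale([2.0, 4.0, 6.0]))  # → [0.0, 0.5, 1.0]
```

Note that swapping the roles of the minimum and maximum would flip the sign of the denominator and map data outside [0, 1].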
Preprocess the dataset with the following steps:
1. From pandas dataframe to numpy arrays
2. Restructure plain data vector from (750,) to (6,125) where first 3 vectors are accelerometer's x, y, z components while second 3 vectors are gyroscope's x, y, z components
3. Merge labels if needed
4. Normalize data
5. Split the dataset in train and test set
6. Discard unneeded information in y
```
X_df, y_df = X_df_reference.copy(), y_df_reference.copy()
# *** MERGE LABELS
# Merge sit and stand labels
sit_or_stand_filter = (y_df["label"] == "sit") | (y_df["label"] == "stand")
y_df["label"].loc[sit_or_stand_filter] = "no_activity"
# Merge stairs activity
#stairsdown_or_stairsup_filter = (y_df["label"] == "stairsdown") | (y_df["label"] == "stairsup")
#y_df["label"].loc[stairsdown_or_stairsup_filter] = "stairs"
# *** SHUFFLE
X_shuffled_df = X_df.sample(frac=1)
y_shuffled_df = y_df.reindex(X_shuffled_df.index)
# *** TRAIN AND TEST
but_last_user_indicies = ~(y_df['user'] == "b")  # hold out user "b" as the test user
X_train_df = X_shuffled_df.loc[but_last_user_indicies]
X_test_df = X_shuffled_df.loc[~but_last_user_indicies]
y_train_df = y_shuffled_df.loc[but_last_user_indicies]
y_test_df = y_shuffled_df.loc[~but_last_user_indicies]
print("X_train_df =", len(X_train_df))
print("X_test_df =", len(X_test_df))
print("y_train_df =", len(y_train_df))
print("y_test_df =", len(y_test_df))
assert len(X_train_df) == len(y_train_df), "X train and y train do not contain same number of samples"
assert len(X_test_df) == len(y_test_df), "X test and y test do not contain same number of samples"
# 1. Back to numpy
X_train = X_train_df.loc[:].to_numpy()
X_test = X_test_df.loc[:].to_numpy()
X_train_oit = orientation_independent_transformation(np.reshape(X_train, (-1, 6, 125)))
X_test_oit = orientation_independent_transformation(np.reshape(X_test, (-1, 6, 125)))
y_train = y_train_df.loc[:].to_numpy()
y_test = y_test_df.loc[:].to_numpy()
y_train_hot = pd.get_dummies(y_train_df['label']).to_numpy()
y_test_hot = pd.get_dummies(y_test_df['label']).to_numpy()
# 2. Restructure the array
X_train = restructure(X_train)
X_test = restructure(X_test)
# 3. Normalize
# NB: we do not normalize because the distance between points and reconstructed points
# is reduced but the signal is not well represented
#X_train = normalize(X_train)
#X_test = normalize(X_test)
# 5. Keep only label
y_train = np.array(list(map(get_label, y_train)))
y_test = np.array(list(map(get_label, y_test)))
print(X_train.shape)
print(y_train.shape)
```
Labels
```
classes = np.unique(y_train)
num_classes = len(np.unique(y_train))
print(f"Classes = {classes}")
print(f"Num classes = {num_classes}")
assert X_train.shape == (y_train.shape[0], 6, 125), f"Invalid shape of X_train: {X_train.shape}"
assert y_train.shape == (X_train.shape[0],), f"Invalid shape of y_train: {y_train.shape}"
assert X_test.shape == (y_test.shape[0], 6, 125), f"Invalid shape of X_test: {X_test.shape}"
assert y_test.shape == (X_test.shape[0],), f"Invalid shape of y_test: {y_test.shape}"
assert y_train_hot.shape == (y_train.shape[0],num_classes), f"Invalid shape of y_train_hot: {y_train_hot.shape}"
assert y_test_hot.shape == (y_test.shape[0],num_classes), f"Invalid shape of y_test_hot: {y_test_hot.shape}"
```
Plot some samples
```
show_samples(X_train_oit, y_train, n=1, is_random=False)
```
## Data Exploration
```
print("Users", y_df["user"].unique())
print("Models", y_df["model"].unique())
print("Classes", y_df["label"].unique())
```
Fraction of samples per label
```
print(y_df.groupby(["label"])["label"].count() / y_df["label"].count())
```
Fraction of samples per user
```
print(y_df.groupby(["user"])["user"].count() / y_df["user"].count())
```
Fraction of samples per model
```
print(y_df.groupby(["model"])["model"].count() / y_df["model"].count())
```
Number of samples per user i and fraction of samples per class for user i
```
y_df_i = y_df.loc[y_df["user"] == "i"]
num_samples_i = y_df_i["label"].count()
fraction_of_samples_per_class_i = y_df_i.groupby(["label"])["label"].count() / y_df_i["label"].count()
print(num_samples_i)
print(fraction_of_samples_per_class_i)
```
## Model (autoencoder)
```
show_samples(X_train_oit, y_train, n=1, is_random=False)
DATA_SHAPE = X_train_oit.shape[1:]
CODE_SIZE=60 # PREV VALUES 30
def build_encoder(data_shape, code_size):
inputs = tf.keras.Input(data_shape)
X = layers.Flatten()(inputs)
outputs = layers.Dense(code_size, activation="sigmoid")(X)
return tf.keras.Model(inputs=inputs, outputs=outputs)
def build_decoder(data_shape, code_size):
inputs = tf.keras.Input((code_size,))
X = layers.Dense(np.prod(data_shape), activation=None)(inputs)
outputs = layers.Reshape(data_shape)(X)
return tf.keras.Model(inputs=inputs, outputs=outputs)
def build_autoencoder(encoder, decoder):
inputs = tf.keras.Input(DATA_SHAPE) # input
codes = encoder(inputs) # build the code with the encoder
outputs = decoder(codes) # reconstruction the signal with the decoder
return tf.keras.Model(inputs=inputs, outputs=outputs)
encoder = build_encoder(DATA_SHAPE, CODE_SIZE)
decoder = build_decoder(DATA_SHAPE, CODE_SIZE)
autoencoder = build_autoencoder(encoder, decoder)
optimizer = "adam"
loss = "mse"
model_filename = 'autoencoder_network.hdf5'
last_finished_epoch = None
epochs=150
batch_size=128
early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=2)
save_model_checkpoint_callback = ModelSaveCallback(model_filename)
callbacks = [save_model_checkpoint_callback, early_stopping_callback]
autoencoder.compile(optimizer=optimizer, loss=loss)
history = autoencoder.fit(
x=X_train_oit, y=X_train_oit,
epochs=epochs,
validation_data=(X_test_oit, X_test_oit),
batch_size=batch_size,
callbacks=callbacks,
verbose=1,
initial_epoch=last_finished_epoch or 0)
encoder.save("encoder.h5")
show_loss(history)
show_mse(autoencoder, X_test_oit)
show_reconstructed_signals(X_test_oit, encoder, decoder, n=1)
show_reconstruction_errors(X_test_oit, encoder, decoder, n=1)
```
## KNN classifier
```
from sklearn.neighbors import KNeighborsClassifier
# prepare the codes
codes = encoder.predict(X_train_oit)
assert codes.shape[1:] == (CODE_SIZE,), f"Predicted codes shape must be equal to code size, but {codes.shape[1:]} != {(CODE_SIZE,)}"
# create the k-neighbors classifier
n_neighbors = num_classes
metric = "euclidean"
nbrs = KNeighborsClassifier(n_neighbors=n_neighbors, metric=metric)
# fit the model using the codes
nbrs.fit(codes, y_train)
print("Classes =", nbrs.classes_)
print("X_test_oit[i] = y_true \t y_pred \t with probs [...]")
print()
for i in range(20):
x = X_test_oit[i]
y = y_test[i]
c = encoder.predict(x[np.newaxis, :])[0]
[lab] = nbrs.predict(c[np.newaxis, :])
[probs] = nbrs.predict_proba(c[np.newaxis, :])
print(f"X_test[{i}] = {y}\t {lab} \t with probs {probs}")
from sklearn.metrics import classification_report
codes = encoder.predict(X_test_oit)
y_true = y_test
y_pred = nbrs.predict(codes)
print(classification_report(y_true=y_true, y_pred=y_pred))
```
## KMeans classifier
```
from sklearn.cluster import KMeans
from sklearn.preprocessing import LabelEncoder
import numpy as np
le = LabelEncoder()
le.fit(y_test)
# train
codes = encoder.predict(X_train_oit)
kmeans = KMeans(n_clusters=num_classes, random_state=0)
kmeans.fit(codes)
# evaluate
codes = encoder.predict(X_test_oit)
y_true = y_test
y_pred = le.inverse_transform(kmeans.predict(codes))
print(classification_report(y_true=y_true, y_pred=y_pred))
```
## NN classifier
```
def build_nn(code_size):
inputs = tf.keras.Input((code_size,))
X = inputs
X = layers.Dense(100, activation="relu")(X)
X = layers.Dropout(0.01)(X)
X = layers.Dense(num_classes, activation="softmax")(X)
outputs = X
return tf.keras.Model(inputs=inputs, outputs=outputs)
codes_train = encoder.predict(X_train_oit)
codes_test = encoder.predict(X_test_oit)
nn_model = build_nn(CODE_SIZE)
adam_optimizer = tf.keras.optimizers.Adam()
loss_funct = tf.keras.losses.CategoricalCrossentropy()
early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=2)
callbacks = [early_stopping_callback]
nn_model.compile(optimizer=adam_optimizer, loss=loss_funct, metrics = ["accuracy"])
nn_model.summary()
history = nn_model.fit(x=codes_train, y=y_train_hot,
epochs=50,
validation_data=(codes_test, y_test_hot),
batch_size=32,
callbacks=callbacks,
verbose=1)
from sklearn.metrics import accuracy_score
loss, accuracy = nn_model.evaluate(codes_test, y_test_hot)
print("LOSS =", loss)
print("ACCURACY =", accuracy)
show_loss(history)
```
# Download Publicly Available Hematopoietic Dataset
**Gregory Way, 2018**
Here, I download [GSE24759](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE24759) which is associated with [Novershtern et al. 2011](https://doi.org/10.1016/j.cell.2011.01.004).
This dataset includes 211 samples consisting of 38 distinct hematopoietic states in various stages of differentiation.
We hypothesized that the constructed feature identified through our interpretable compression approach would show higher activation patterns in monocytes.
```
import os
import csv
import pandas as pd
from sklearn import preprocessing
from scripts.utils import download_geo
base_url = 'ftp://ftp.ncbi.nlm.nih.gov/geo/series/GSE24nnn/GSE24759/suppl/'
name = 'GSE24759_data.sort.txt.gz'
directory = 'download'
download_geo(base_url, name, directory)
path = 'download/GSE24759_data.sort.txt.gz'
! sha256sum $path
```
## Process the Data
```
# Load Additional File 3
geo_df = pd.read_table(path)
print(geo_df.shape)
geo_df.head(2)
```
## Update Gene Names
```
# Load curated gene names from versioned resource
commit = '721204091a96e55de6dcad165d6d8265e67e2a48'
url = 'https://raw.githubusercontent.com/cognoma/genes/{}/data/genes.tsv'.format(commit)
gene_df = pd.read_table(url)
# Only consider protein-coding genes
gene_df = (
gene_df.query("gene_type == 'protein-coding'")
)
symbol_to_entrez = dict(zip(gene_df.symbol,
gene_df.entrez_gene_id))
# Add alternative symbols to entrez mapping dictionary
gene_df = gene_df.dropna(axis='rows', subset=['synonyms'])
gene_df.synonyms = gene_df.synonyms.str.split('|')
all_syn = (
gene_df.apply(lambda x: pd.Series(x.synonyms), axis=1)
.stack()
.reset_index(level=1, drop=True)
)
# Name the synonym series and join with rest of genes
all_syn.name = 'all_synonyms'
gene_with_syn_df = gene_df.join(all_syn)
# Remove rows that have redundant symbols in all_synonyms
gene_with_syn_df = (
gene_with_syn_df
# Drop synonyms that are duplicated - can't be sure of mapping
.drop_duplicates(['all_synonyms'], keep=False)
# Drop rows in which the symbol appears in the list of synonyms
.query('symbol not in all_synonyms')
)
# Create a synonym to entrez mapping and add to dictionary
synonym_to_entrez = dict(zip(gene_with_syn_df.all_synonyms,
gene_with_syn_df.entrez_gene_id))
symbol_to_entrez.update(synonym_to_entrez)
# Load gene updater
commit = '721204091a96e55de6dcad165d6d8265e67e2a48'
url = 'https://raw.githubusercontent.com/cognoma/genes/{}/data/updater.tsv'.format(commit)
updater_df = pd.read_table(url)
old_to_new_entrez = dict(zip(updater_df.old_entrez_gene_id,
updater_df.new_entrez_gene_id))
# Update the symbol column to entrez_gene_id
geo_map = geo_df.A_Desc.replace(symbol_to_entrez)
geo_map = geo_map.replace(old_to_new_entrez)
geo_df.index = geo_map
geo_df.index.name = 'entrez_gene_id'
geo_df = geo_df.drop(['A_Name', 'A_Desc'], axis='columns')
geo_df = geo_df.loc[geo_df.index.isin(symbol_to_entrez.values()), :]
```
## Scale Data and Output to File
```
# Scale RNAseq data using zero-one normalization
geo_scaled_zeroone_df = preprocessing.MinMaxScaler().fit_transform(geo_df.transpose())
geo_scaled_zeroone_df = (
pd.DataFrame(geo_scaled_zeroone_df,
columns=geo_df.index,
index=geo_df.columns)
.sort_index(axis='columns')
.sort_index(axis='rows')
)
geo_scaled_zeroone_df.columns = geo_scaled_zeroone_df.columns.astype(str)
geo_scaled_zeroone_df = geo_scaled_zeroone_df.loc[:, ~geo_scaled_zeroone_df.columns.duplicated(keep='first')]
os.makedirs('data', exist_ok=True)
file = os.path.join('data', 'GSE24759_processed_matrix.tsv.gz')
geo_scaled_zeroone_df.to_csv(file, sep='\t', compression='gzip')
geo_scaled_zeroone_df.head()
```
## Process Cell-Type Classification
Data acquired from Supplementary Table 1 of [Novershtern et al. 2011](https://doi.org/10.1016/j.cell.2011.01.004)
```
cell_class = {
# Hematopoietic Stem Cells
'HSC1': ['HSC', 'Non Monocyte'],
'HSC2': ['HSC', 'Non Monocyte'],
'HSC3': ['HSC', 'Non Monocyte'],
# Myeloid Progenitors
'CMP': ['Myeloid', 'Non Monocyte'],
'MEP': ['Myeloid', 'Non Monocyte'],
'GMP': ['Myeloid', 'Non Monocyte'],
# Erythroid Populations
'ERY1': ['Erythroid', 'Non Monocyte'],
'ERY2': ['Erythroid', 'Non Monocyte'],
'ERY3': ['Erythroid', 'Non Monocyte'],
'ERY4': ['Erythroid', 'Non Monocyte'],
'ERY5': ['Erythroid', 'Non Monocyte'],
# Megakaryocytic Populations
'MEGA1': ['Megakaryocytic', 'Non Monocyte'],
'MEGA2': ['Megakaryocytic', 'Non Monocyte'],
# Granulocytic Populations
'GRAN1': ['Granulocytic', 'Non Monocyte'],
'GRAN2': ['Granulocytic', 'Non Monocyte'],
'GRAN3': ['Granulocytic', 'Non Monocyte'],
# Monocyte Population (Note MONO1 is a CFU-M)
'MONO1': ['Monocyte', 'Non Monocyte'],
'MONO2': ['Monocyte', 'Monocyte'],
# Basophil Population
'BASO1': ['Basophil', 'Non Monocyte'],
# Eosinophil Population
'EOS2': ['Eosinophil', 'Non Monocyte'],
# B Lymphoid Progenitors
'PRE_BCELL2': ['B Lymphoid Progenitor', 'Non Monocyte'],
'PRE_BCELL3': ['B Lymphoid Progenitor', 'Non Monocyte'],
# Naive Lymphoid Progenitors
'BCELLA1': ['Naive Lymphoid', 'Non Monocyte'],
'TCELLA6': ['Naive Lymphoid', 'Non Monocyte'],
'TCELLA2': ['Naive Lymphoid', 'Non Monocyte'],
# Differentiated B Cells
'BCELLA2': ['Differentiated B Cell', 'Non Monocyte'],
'BCELLA3': ['Differentiated B Cell', 'Non Monocyte'],
'BCELLA4': ['Differentiated B Cell', 'Non Monocyte'],
# Differentiated T Cells
'TCELLA7': ['Differentiated T Cell', 'Non Monocyte'],
'TCELLA8': ['Differentiated T Cell', 'Non Monocyte'],
'TCELLA1': ['Differentiated T Cell', 'Non Monocyte'],
'TCELLA3': ['Differentiated T Cell', 'Non Monocyte'],
'TCELLA4': ['Differentiated T Cell', 'Non Monocyte'],
# Natural Killer Population
'NKA1': ['NK Cell', 'Non Monocyte'],
'NKA2': ['NK Cell', 'Non Monocyte'],
'NKA3': ['NK Cell', 'Non Monocyte'],
'NKA4': ['NK Cell', 'Non Monocyte'],
# Dendritic Cell
'DENDA1': ['Dendritic', 'Non Monocyte'],
'DENDA2': ['Dendritic', 'Non Monocyte'],
}
# Write data to file
cell_class_df = (
pd.DataFrame.from_dict(cell_class)
.transpose()
.reset_index()
.rename(columns={'index': 'label', 0: 'classification', 1: 'monocyte'})
)
cell_class_df.head()
os.makedirs('results', exist_ok=True)
file = os.path.join('results', 'cell-type-classification.tsv')
cell_class_df.to_csv(file, sep='\t', index=False)
```
# FAIRSEQ
from https://fairseq.readthedocs.io/en/latest/getting_started.html
"Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks." It provides reference implementations of various sequence-to-sequence models making our life much more easier!
## Installation
```
! pip3 install fairseq
```
## Downloading some data and required scripts
```
! bash data/prepare-wmt14en2fr.sh
```
## Pretrained Model Evaluation
Let's first see how to evaluate a pretrained model in fairseq. We'll download a pretrained model along with its vocabulary.
```
! curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf -
```
We have written a script to do this, but as a fun example, let's do it in the Jupyter notebook instead
```
sentence = 'Why is it rare to discover new marine mammal species ?'
%%bash -s "$sentence"
SCRIPTS=data/mosesdecoder/scripts
TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
CLEAN=$SCRIPTS/training/clean-corpus-n.perl
NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl
REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl
BPEROOT=data/subword-nmt
BPE_TOKENS=40000
src=en
tgt=fr
echo $1 | \
perl $NORM_PUNC $src | \
perl $REM_NON_PRINT_CHAR | \
perl $TOKENIZER -threads 8 -a -l $src > temp_tokenized.out
prep=wmt14.en-fr.fconv-py
BPE_CODE=$prep/bpecodes
python $BPEROOT/apply_bpe.py -c $BPE_CODE < temp_tokenized.out > final_result.out
rm temp_tokenized.out
cat final_result.out
rm final_result.out
```
Let's now look at the very cool interactive feature of fairseq. Open a shell, cd to this directory and copy the following command:
```
%%bash
MODEL_DIR=wmt14.en-fr.fconv-py
echo "Why is it rare to discover new marine mam@@ mal species ?" | fairseq-interactive \
--path $MODEL_DIR/model.pt $MODEL_DIR \
--beam 1 --source-lang en --target-lang fr
```
This generation script produces three types of outputs: a line prefixed with O is a copy of the original source sentence; H is the hypothesis along with an average log-likelihood; and P is the positional score per token position, including the end-of-sentence marker (which is omitted from the text). Let's strip the BPE markers in bash again:
```
! echo "Why is it rare to discover new marine mam@@ mal species ?" | sed -r 's/(@@ )|(@@ ?$)//g'
```
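The O/H/P prefixes and the BPE markers can also be handled with a few lines of plain Python — a hedged sketch, not fairseq's own tooling; `remove_bpe` mirrors the `sed` expression in the previous cell, and `split_generation_line` assumes the tab-separated layout of `fairseq-generate` output:

```python
import re

def remove_bpe(text):
    # Join BPE subword units: drop "@@ " continuation markers
    # (same effect as: sed -r 's/(@@ )|(@@ ?$)//g')
    return re.sub(r"(@@ )|(@@ ?$)", "", text)

def split_generation_line(line):
    # Generation lines look like "H-0\t-0.2367\tPourquoi ...": the first
    # tab-separated field carries the O/H/P tag and the sentence id.
    tag, _, rest = line.partition("\t")
    return tag, rest

print(remove_bpe("new marine mam@@ mal species"))  # → new marine mammal species
```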
All Good! Now let's train a new model
## Training
### Data Preprocessing
Fairseq contains example pre-processing scripts for several translation datasets: IWSLT 2014 (German-English), WMT 2014 (English-French) and WMT 2014 (English-German). We will work with a part of WMT 2014 like we did in the previous section
To pre-process and binarize the WMT dataset, run <code>bash prepare-wmt14en2fr.sh</code> like we did for the previous section. This will download the data, tokenize it, perform byte-pair encoding and do a train/test split on the data.
To binarize the data, we do the following:
```
%%bash
TEXT=data/wmt14_en_fr
fairseq-preprocess --source-lang en --target-lang fr \
--trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
--destdir data-bin/wmt14_en_fr --thresholdtgt 5 --thresholdsrc 5 \
--workers 1
```
Of course, we cannot see what is inside the binary files, but let's check what is in the dictionaries
```
! ls data-bin/wmt14_en_fr/
! head -5 data-bin/wmt14_en_fr/dict.en.txt
! head -5 data-bin/wmt14_en_fr/dict.fr.txt
```
## Model
Fairseq provides a lot of predefined architectures to choose from. For English-French, we will choose an architecture known to work well for the problem. In the next section, we will see how to define custom models in fairseq.
```
! mkdir -p fairseq_models/checkpoints/fconv_wmt_en_fr
! fairseq-train data-bin/wmt14_en_fr \
--lr 0.5 --clip-norm 0.1 --dropout 0.1 --max-tokens 3000 \
--criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
--lr-scheduler fixed --force-anneal 50 \
--arch fconv_wmt_en_fr --save-dir fairseq_models/checkpoints/fconv_wmt_en_fr
! ls data-bin
```
## Generating and Checking BLEU for our model
```
! pip3 install sacrebleu
! mkdir -p fairseq_models/logs
%%bash
fairseq-generate data-bin/wmt14_en_fr \
--path fairseq_models/checkpoints/fconv_wmt_en_fr/checkpoint_best.pt \
--beam 1 --batch-size 128 --remove-bpe --sacrebleu >> fairseq_models/logs/our_model.out
! head -10 fairseq_models/logs/our_model.out
! tail -2 fairseq_models/logs/our_model.out
```
### Generating and Checking BLEU for the large Pretrained Model
```
! curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin
%%bash
fairseq-generate data-bin/wmt14.en-fr.newstest2014 \
--path wmt14.en-fr.fconv-py/model.pt \
--beam 1 --batch-size 128 --remove-bpe --sacrebleu >> fairseq_models/logs/pretrained_model.out
! head -10 fairseq_models/logs/pretrained_model.out
! tail -2 fairseq_models/logs/pretrained_model.out
```
## Writing A Custom Model in FAIRSEQ
We will extend fairseq by adding a new FairseqModel that encodes a source sentence with an LSTM and then passes the final hidden state to a second LSTM that decodes the target sentence (without attention).
### Building an Encoder and Decoder
In this section we’ll define a simple LSTM Encoder and Decoder. All Encoders should implement the FairseqEncoder interface and Decoders should implement the FairseqDecoder interface. These interfaces themselves extend torch.nn.Module, so FairseqEncoders and FairseqDecoders can be written and used in the same ways as ordinary PyTorch Modules.
### Encoder
Our Encoder will embed the tokens in the source sentence, feed them to a torch.nn.LSTM and return the final hidden state.
### Decoder
Our Decoder will predict the next word, conditioned on the Encoder’s final hidden state and an embedded representation of the previous target word – which is sometimes called input feeding or teacher forcing. More specifically, we’ll use a torch.nn.LSTM to produce a sequence of hidden states that we’ll project to the size of the output vocabulary to predict each target word
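The input feeding / teacher forcing idea described above can be illustrated without any framework — a toy sketch where a hypothetical `step_fn` stands in for one decoder step:

```python
def decode_with_teacher_forcing(step_fn, gold_targets, start_token="<s>"):
    # At each step the decoder receives the *gold* previous target token
    # (teacher forcing), not its own previous prediction.
    predictions, prev = [], start_token
    for gold in gold_targets:
        predictions.append(step_fn(prev))
        prev = gold  # input feeding: condition the next step on the true word
    return predictions

# With an "echo" model, the i-th prediction is simply the previous gold token:
print(decode_with_teacher_forcing(lambda tok: tok, ["le", "chat"]))  # → ['<s>', 'le']
```

At inference time there are no gold tokens, so `prev` would instead be set to the model's own prediction — which is exactly why beam search and exposure bias become relevant.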
## Registering the Model
Now that we’ve defined our Encoder and Decoder we must register our model with fairseq using the register_model() function decorator. Once the model is registered we’ll be able to use it with the existing Command-line Tools.
All registered models must implement the BaseFairseqModel interface. For sequence-to-sequence models (i.e., any model with a single Encoder and Decoder), we can instead implement the FairseqModel interface.
Create a small wrapper class in the same file and register it in fairseq with the name 'simple_lstm':
Finally let’s define a named architecture with the configuration for our model. This is done with the register_model_architecture() function decorator. Thereafter this named architecture can be used with the --arch command-line argument, e.g., --arch tutorial_simple_lstm
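The `register_model()` and `register_model_architecture()` decorators described above follow the standard registry pattern. A minimal stdlib sketch of that pattern (hypothetical registry, not fairseq's actual internals):

```python
MODEL_REGISTRY = {}

def register_model(name):
    # Decorator factory: store the decorated class under `name` so it can be
    # looked up later from a command-line flag such as --arch.
    def wrapper(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrapper

@register_model("tutorial_simple_lstm")
class SimpleLSTMModel:
    pass

print(MODEL_REGISTRY["tutorial_simple_lstm"].__name__)  # → SimpleLSTMModel
```

This is why simply dropping `simple_lstm.py` into fairseq's `models` directory (as done below) is enough for the trainer to find the new architecture by name.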
```
import fairseq
import os
fairseq_path = os.path.dirname(fairseq.__file__)
fairseq_path = os.path.join(fairseq_path, 'models')
print(fairseq_path)
%%bash -s "$fairseq_path"
cp fairseq_models/custom_models/simple_lstm.py $1
%%bash -s "$fairseq_path"
ls $1 | grep lstm
```
## Training Our Custom Model
```
! mkdir -p fairseq_models/checkpoints/tutorial_simple_lstm
%%bash
fairseq-train data-bin/wmt14_en_fr \
--arch tutorial_simple_lstm \
--encoder-dropout 0.2 --decoder-dropout 0.2 \
--optimizer adam --lr 0.005 --lr-shrink 0.5 \
--max-epoch 50 \
--max-tokens 12000 --save-dir fairseq_models/checkpoints/tutorial_simple_lstm
%%bash
fairseq-generate data-bin/wmt14_en_fr \
--path fairseq_models/checkpoints/tutorial_simple_lstm/checkpoint_best.pt \
--beam 1 --batch-size 128 --remove-bpe --sacrebleu >> fairseq_models/logs/custom_model.out
!head -10 fairseq_models/logs/custom_model.out
!tail -2 fairseq_models/logs/custom_model.out
```
## Network Traffic Dataset for malicious attack
This dataset of network traffic flows was generated by CICFlowMeter; each flow is labeled as a malicious attack (Bot) or benign traffic (Benign).
CICFlowMeter, a network traffic flow generator, produces 69 statistical features such as duration, number of packets, number of bytes, and packet lengths, each calculated separately in the forward and reverse directions.
The output of the application is a CSV file with a label column for each flow, namely Benign or Bot.
The dataset is organized per day; for each day the raw data, including the network traffic (pcaps) and event logs (Windows and Ubuntu event logs) per machine,
are recorded. Download the dataset with the wget command below and rename it Network_Traffic.
```
! wget -O Network_Traffic.csv https://cse-cic-ids2018.s3.ca-central-1.amazonaws.com/Processed+Traffic+Data+for+ML+Algorithms/Friday-02-03-2018_TrafficForML_CICFlowMeter.csv
```
## Install Libraries
```
! pip install pandas --user
! pip install imblearn --user
```
## Restart Notebook Kernel
```
from IPython.display import display_html
display_html("<script>Jupyter.notebook.kernel.restart()</script>",raw=True)
```
## Import Libraries
```
import tensorflow as tf
import pandas as pd
import numpy as np
import tempfile
import os
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder,MinMaxScaler
from sklearn.model_selection import KFold
from imblearn.combine import SMOTETomek
from imblearn.over_sampling import RandomOverSampler
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
```
### Declare Variables
```
lstZerodrp = ['Timestamp', 'BwdPSHFlags', 'FwdURGFlags', 'BwdURGFlags', 'CWEFlagCount', 'FwdBytsbAvg', 'FwdPktsbAvg',
'FwdBlkRateAvg', 'BwdBytsbAvg',
'BwdBlkRateAvg', 'BwdPktsbAvg']
lstScaledrp = ['FwdPSHFlags', 'FINFlagCnt', 'SYNFlagCnt', 'RSTFlagCnt', 'PSHFlagCnt', 'ACKFlagCnt', 'URGFlagCnt',
'ECEFlagCnt']
DATA_FILE = "Network_Traffic.csv"
def read_dataFile():
"""
Reads data file and returns dataframe result
"""
chunksize = 100000
chunk_list = []
missing_values = ["n/a", "na", "--", "Infinity", "infinity", "Nan", "NaN"]
for chunk in pd.read_csv(DATA_FILE, chunksize=chunksize, na_values=missing_values):
chunk_list.append(chunk)
# break
dataFrme = pd.concat(chunk_list)
lstcols = []
for i in dataFrme.columns:
i = str(i).replace(' ', '').replace('/', '')
lstcols.append(i)
dataFrme.columns = lstcols
dfAllCpy = dataFrme.copy()
dataFrme = dataFrme.drop(lstZerodrp, axis=1)
return dataFrme
```
## Network Traffic Input Dataset
### Attribute Information
Features extracted from the captured traffic using CICFlowMeter-V3: 69
After removal of noisy/unwarranted features, number of feature columns chosen: 10
Features: FlowDuration, BwdPktLenMax, FlowIATStd, FwdPSHFlags, BwdPktLenMean, FlowIATMean, BwdIATMean,
FwdSegSizeMin, InitBwdWinByts, BwdPktLenMin
Flows are labelled Bot or Benign.
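As a tiny illustration of what the labelled flow table looks like once loaded (the values below are invented; only the column and label names come from the description above), mapping the string labels to integers works like this:

```python
import pandas as pd

# Hypothetical mini flow table using two of the ten chosen features;
# the numeric values are invented for illustration only.
flows = pd.DataFrame({
    "FlowDuration": [1200, 45000, 300],
    "BwdPktLenMax": [0, 1460, 64],
    "Label": ["Benign", "Bot", "Benign"],
})

# Map the string labels to 0/1, as the notebook does further below
flows["Label"] = flows["Label"].map({"Benign": 0, "Bot": 1})
print(flows["Label"].tolist())  # → [0, 1, 0]
```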
```
read_dataFile().head()
def preprocess_na(dataFrme):
"""
Removing NA values
"""
na_lst = dataFrme.columns[dataFrme.isna().any()].tolist()
for j in na_lst:
dataFrme[j].fillna(0, inplace=True)
return dataFrme
def create_features_label(dataFrme):
"""
Create independent and Dependent Features
"""
columns = dataFrme.columns.tolist()
# Filter the columns to remove data we do not want
columns = [c for c in columns if c not in ["Label"]]
# Store the variable we are predicting
target = "Label"
# Define a random state
state = np.random.RandomState(42)
X = dataFrme[columns]
Y = dataFrme[target]
return X, Y
def label_substitution(dataFrme):
"""
Label substitution: 'Benign' as 0, 'Bot' as 1
"""
dictLabel = {'Benign': 0, 'Bot': 1}
dataFrme['Label'] = dataFrme['Label'].map(dictLabel)
LABELS = ['Benign', 'Bot']
count_classes = pd.value_counts(dataFrme['Label'], sort=True)
# Get the Benign and the Bot values
Benign = dataFrme[dataFrme['Label'] == 0]
Bot = dataFrme[dataFrme['Label'] == 1]
return dataFrme
def handle_class_imbalance(X,Y):
"""
Handle class imbalance
"""
    # os_us = SMOTETomek(ratio=0.5)
    # X_res, y_res = os_us.fit_resample(X, Y)
    ros = RandomOverSampler(random_state=50)
    X_res, y_res = ros.fit_resample(X, Y)  # fit_sample was removed in imbalanced-learn 0.8; fit_resample is the current API
ibtrain_X = pd.DataFrame(X_res,columns=X.columns)
ibtrain_y = pd.DataFrame(y_res,columns=['Label'])
return ibtrain_X,ibtrain_y
def correlation_features(ibtrain_X):
"""
Feature Selection - Correlation Analysis
"""
corr = ibtrain_X.corr()
cor_columns = np.full((corr.shape[0],), True, dtype=bool)
for i in range(corr.shape[0]):
for j in range(i + 1, corr.shape[0]):
if corr.iloc[i, j] >= 0.9:
if cor_columns[j]:
cor_columns[j] = False
dfcorr_features = ibtrain_X[corr.columns[cor_columns]]
return dfcorr_features
def top_ten_features(dfcorr_features,ibtrain_X,ibtrain_y):
feat_X = dfcorr_features
feat_y = ibtrain_y['Label']
#apply SelectKBest class to extract top 10 best features
bestfeatures = SelectKBest(score_func=f_classif, k=10)
fit = bestfeatures.fit(feat_X,feat_y)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(feat_X.columns)
#concat two dataframes for better visualization
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Features','Score'] #naming the dataframe columns
final_feature = featureScores.nlargest(10,'Score')['Features'].tolist()
final_feature.sort()
sort_fn = final_feature
dictLabel1 = {'Benign':0,'Bot':1}
ibtrain_y['Label']= ibtrain_y['Label'].map(dictLabel1)
selected_X = ibtrain_X[sort_fn]
selected_Y = ibtrain_y['Label']
return selected_X,selected_Y,sort_fn
def normalize_data(selected_X, selected_Y):
"""
Normalize data
"""
scaler = MinMaxScaler(feature_range=(0, 1))
selected_X = pd.DataFrame(scaler.fit_transform(selected_X), columns=selected_X.columns, index=selected_X.index)
trainX, testX, trainY, testY = train_test_split(selected_X, selected_Y, test_size=0.25)
print('-----------------------------------------------------------------')
print("## Final features and Data pre-process for prediction")
print('-----------------------------------------------------------------')
print(testX)
return trainX, testX, trainY, testY
tf.logging.set_verbosity(tf.logging.INFO)
'''Reads data file and returns dataframe result'''
dataFrme = read_dataFile()
''' Removing NA values'''
dataFrme = preprocess_na(dataFrme)
'''Create independent and Dependent Features'''
X, Y = create_features_label(dataFrme)
'''Label substitution: 'Benign' as 0, 'Bot' as 1'''
dataFrme = label_substitution(dataFrme)
'''Handle class imbalance'''
ibtrain_X, ibtrain_y = handle_class_imbalance(X, Y)
'''Feature Selection - Correlation Analysis'''
dfcorr_features = correlation_features(ibtrain_X)
'''Feature Selection - SelectKBest : Return best 10 features'''
selected_X, selected_Y, final_feature = top_ten_features(dfcorr_features, ibtrain_X, ibtrain_y)
'''Normalize data '''
trainX, testX, trainY, testY = normalize_data(selected_X, selected_Y)
```
## Definition of Serving Input Receiver Function
```
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k,dtype=tf.dtypes.float64) for k in final_feature]
return input_columns
feature_columns = make_feature_cols()
inputs = {}
for feat in feature_columns:
inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
serving_input_receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(inputs)
```
## Train and Save Network Traffic Model
```
TF_DATA_DIR = os.getenv("TF_DATA_DIR", "/tmp/data/")
TF_MODEL_DIR = os.getenv("TF_MODEL_DIR", "network/")
TF_EXPORT_DIR = os.getenv("TF_EXPORT_DIR", "network/")
x1 = np.asarray(trainX[final_feature])
y1 = np.asarray(trainY)
x2 = np.asarray(testX[final_feature])
y2 = np.asarray(testY)
def formatFeatures(features):
formattedFeatures = {}
numColumns = features.shape[1]
for i in range(0, numColumns):
formattedFeatures[final_feature[i]] = features[:, i]
return formattedFeatures
trainingFeatures = formatFeatures(x1)
trainingCategories = y1
testFeatures = formatFeatures(x2)
testCategories = y2
# Train Input Function
def train_input_fn():
dataset = tf.data.Dataset.from_tensor_slices((trainingFeatures, y1))
dataset = dataset.batch(32).repeat(1000)
return dataset
# Test Input Function
def eval_input_fn():
dataset = tf.data.Dataset.from_tensor_slices((testFeatures, y2))
return dataset.batch(32).repeat(1000)
# Distribution strategy used to train the model (parameter server)
distribution=tf.distribute.experimental.ParameterServerStrategy()
print('Number of devices: {}'.format(distribution.num_replicas_in_sync))
# Configuration of training model
config = tf.estimator.RunConfig(train_distribute=distribution, model_dir=TF_MODEL_DIR, save_summary_steps=100, save_checkpoints_steps=1000)
# Build 3 layer DNN classifier
model = tf.estimator.DNNClassifier(hidden_units=[13,65,110],
feature_columns=feature_columns,
model_dir=TF_MODEL_DIR,
n_classes=2, config=config
)
export_final = tf.estimator.FinalExporter(TF_EXPORT_DIR, serving_input_receiver_fn=serving_input_receiver_fn)
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn,
max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn,
steps=100,
exporters=export_final,
throttle_secs=1,
start_delay_secs=1)
result = tf.estimator.train_and_evaluate(model, train_spec, eval_spec)
print(result)
print('Training finished successfully')
```
## Update storageUri in network_kfserving.yaml with pvc-name
```
pvcname = !(echo $HOSTNAME | sed 's/.\{2\}$//')
pvc = "workspace-"+pvcname[0]
! sed -i "s/nfs/$pvc/g" network_kfserving.yaml
! cat network_kfserving.yaml
```
## Serving the Network Traffic Model using Kubeflow KFServing
```
!kubectl apply -f network_kfserving.yaml -n anonymous
!kubectl get inferenceservices -n anonymous
```
#### Note:
Wait for the inference service to show READY="True"
## Predict from the served model after setting INGRESS_IP
### Note: use one of the preprocessed row values printed by the data pre-processing output cell above
```
! curl -v -H "Host: network-model.anonymous.example.com" http://10.23.222.166:31380/v1/models/network-model:predict -d '{"signature_name":"predict","instances":[{"BwdPktLenMax":[0.158904] , "BwdPktLenMean":[0.039736] , "BwdPktLenMin":[0.00000], "FlowDuration":[0.053778] , "FlowIATMax":[0.053262] , "FwdPktLenMin":[0.0] , "FwdSegSizeMin":[0.454545] , "InitBwdWinByts":[1.0] , "Protocol":[0.0] , "RSTFlagCnt":[0.003357]}]}'
```
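For reference, the same request can be built in Python. The ingress IP and host header below mirror the environment-specific values from the curl command above and are assumptions about your cluster, not universal values; the actual POST is left commented out since it only works inside that environment:

```python
import json

# Instance values copied from the curl example; feature names match the model's inputs
payload = {
    "signature_name": "predict",
    "instances": [{
        "BwdPktLenMax": [0.158904], "BwdPktLenMean": [0.039736],
        "BwdPktLenMin": [0.0], "FlowDuration": [0.053778],
        "FlowIATMax": [0.053262], "FwdPktLenMin": [0.0],
        "FwdSegSizeMin": [0.454545], "InitBwdWinByts": [1.0],
        "Protocol": [0.0], "RSTFlagCnt": [0.003357],
    }],
}
body = json.dumps(payload)
print(len(payload["instances"]))  # → 1 instance in this request

# import requests
# resp = requests.post(
#     "http://10.23.222.166:31380/v1/models/network-model:predict",  # ingress IP: environment-specific
#     headers={"Host": "network-model.anonymous.example.com"},
#     data=body,
# )
```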
## Delete the KFServing model and clean up stored models
```
!kubectl delete -f network_kfserving.yaml
!rm -rf /mnt/network
pvcname = !(echo $HOSTNAME | sed 's/.\{2\}$//')
pvc = "workspace-"+pvcname[0]
! sed -i "s/$pvc/nfs/g" network_kfserving.yaml
! cat network_kfserving.yaml
```
## Imports
```
import numpy as np
import pandas as pd
import os
import random, re, math
import tensorflow as tf, tensorflow.keras.backend as K
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers
from kaggle_datasets import KaggleDatasets
from random import seed
from random import randint
import cv2
from matplotlib import pyplot as plt
import seaborn as sns
from keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Input
!pip install efficientnet
import efficientnet.tfkeras as efn
```
## TPU Setup
```
AUTO = tf.data.experimental.AUTOTUNE
# Detect hardware, return appropriate distribution strategy
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection. No parameters necessary if TPU_NAME environment variable is set. On Kaggle this is always the case.
print('Running on TPU ', tpu.master())
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
strategy = tf.distribute.get_strategy() # default distribution strategy in Tensorflow. Works on CPU and single GPU.
print("REPLICAS: ", strategy.num_replicas_in_sync)
# Data access
GCS_DS_PATH = KaggleDatasets().get_gcs_path()
```
## Get Data
Because of the overlap between the multiple-diseases class and the scab and rust categories, we compensate for the difficulty of separating them by training a series of binary classifiers, each on a different binary split of the original four classes. The important distinction from a one-vs-all approach is that we have separate classifiers for rust-including-multiple-diseases and for rust only, and likewise for scab. This should be beneficial precisely because the multiple-diseases class overlaps with each individual disease. A final multiclass classifier then takes the outputs of the first six as input and attempts to predict the actual class.
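A minimal sketch of this stacking idea, with a tiny NumPy softmax model standing in for the CNN binary classifiers and the final Dense network below (the array shapes and random data are illustrative assumptions, not the real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_bin, n_cls = 200, 6, 4
# Pretend these are the sigmoid outputs of the six binary classifiers (C1..C6)
binary_outputs = rng.random((n, n_bin))
# True 4-way labels: healthy / multiple_diseases / rust / scab
labels = rng.integers(0, n_cls, size=n)
onehot = np.eye(n_cls)[labels]

# A tiny softmax "final classifier" trained by gradient descent,
# consuming the six binary scores as its input features
W = np.zeros((n_bin, n_cls))
for _ in range(200):
    z = binary_outputs @ W
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
    W -= 0.1 * binary_outputs.T @ (p - onehot) / n  # cross-entropy gradient step

print(p.shape)  # → (200, 4): one probability per class per sample
```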
```
CSV_PATH = "/kaggle/input/plant-pathology-2020-fgvc7/"
BATCH_SIZE = 8 * strategy.num_replicas_in_sync
IMG_SIZE = 768
EPOCHS = 25
VERBOSE = 1
SHOW_GRAPHS = True
train_csv = pd.read_csv(CSV_PATH + 'train.csv')
# Note that we have to shuffle here and not anywhere else
train_csv = train_csv.sample(frac=1).reset_index(drop=True)
test_csv = pd.read_csv(CSV_PATH + 'test.csv')
sub = pd.read_csv(CSV_PATH + 'sample_submission.csv')
# Get full image paths
train_paths = train_csv.image_id.apply(lambda image: GCS_DS_PATH + '/images/' + image + '.jpg').values
test_paths = test_csv.image_id.apply(lambda image: GCS_DS_PATH + '/images/' + image + '.jpg').values
def decodeImage(filename, label=None, img_size=(IMG_SIZE, IMG_SIZE)):
bits = tf.io.read_file(filename)
image = tf.image.decode_jpeg(bits, channels = 3)
image = (tf.cast(image, tf.float32) / 127.5) - 1
# A few images are rotated
if image.shape != [1365, 2048, 3]:
image = tf.image.rot90(image)
image = tf.image.resize(image, img_size)
if label is None:
return image
else:
return image, label
def dataAugment(image, label=None, seed=2020):
image = tf.image.random_flip_left_right(image, seed=seed)
image = tf.image.random_flip_up_down(image, seed=seed)
if label is None:
return image
else:
return image, label
def getClassifierData(paths, labels, batch_size=BATCH_SIZE):
    ds = (
        tf.data.Dataset
        .from_tensor_slices((paths, labels))  # use the paths argument, not the global
        .map(decodeImage, num_parallel_calls=AUTO)
        .map(dataAugment, num_parallel_calls=AUTO)
        #.repeat() # Using repeat leads to inconsistent results when predicting this set (counts are always off)
        .batch(batch_size)
        .prefetch(AUTO)
    )
    return ds
def calcWeights(df, label):
counts = df[label].value_counts()
return {0 : counts[1] / sum(counts),
1 : counts[0] / sum(counts)}
# C1 is healthy vs not healthy
train_csv['C1_Label'] = train_csv.apply(lambda row: 0 if row['healthy'] == 1 else 1, axis=1)
# C2 is rust vs not rust
train_csv['C2_Label'] = train_csv.apply(lambda row: 0 if row['rust'] == 1 else 1, axis=1)
# C3 is scab vs not scab
train_csv['C3_Label'] = train_csv.apply(lambda row: 0 if row['scab'] == 1 else 1, axis=1)
# C4 is both diseases vs one or none
train_csv['C4_Label'] = train_csv.apply(lambda row: 0 if row['multiple_diseases'] == 1 else 1, axis=1)
# C5 rust or both vs scab or none
train_csv['C5_Label'] = train_csv.apply(lambda row: 0 if row['multiple_diseases'] == 1 or row['rust'] == 1 else 1, axis=1)
# C6 scab or both vs rust or none
train_csv['C6_Label'] = train_csv.apply(lambda row: 0 if row['multiple_diseases'] == 1 or row['scab'] == 1 else 1, axis=1)
if SHOW_GRAPHS:
train_csv.head(10)
test_dataset = (
tf.data.Dataset
.from_tensor_slices(test_paths)
.map(decodeImage, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
)
# Validation-data handling left in, in case we want to use it later
training_dict = {'C1' : {'Train' : getClassifierData(train_paths, train_csv['C1_Label'].values.reshape(-1,1)),
#'Val' : getClassifierData(val_data, 'C1_Label', batch_size=BATCH_SIZE),
'Weights' : calcWeights(train_csv, 'C1_Label')},
'C2' : {'Train' : getClassifierData(train_paths, train_csv['C2_Label'].values.reshape(-1,1)),
#'Val' : getClassifierData(val_data, 'C2_Label', batch_size=BATCH_SIZE),
'Weights' : calcWeights(train_csv, 'C2_Label')},
'C3' : {'Train' : getClassifierData(train_paths, train_csv['C3_Label'].values.reshape(-1,1)),
#'Val' : getClassifierData(val_data, 'C3_Label', batch_size=BATCH_SIZE),
'Weights' : calcWeights(train_csv, 'C3_Label')},
'C4' : {'Train' : getClassifierData(train_paths, train_csv['C4_Label'].values.reshape(-1,1)),
#'Val' : getClassifierData(val_data, 'C4_Label', batch_size=BATCH_SIZE),
'Weights' : calcWeights(train_csv, 'C4_Label')},
'C5' : {'Train' : getClassifierData(train_paths, train_csv['C5_Label'].values.reshape(-1,1)),
#'Val' : getClassifierData(val_data, 'C5_Label', batch_size=BATCH_SIZE),
'Weights' : calcWeights(train_csv, 'C5_Label')},
'C6' : {'Train' : getClassifierData(train_paths, train_csv['C6_Label'].values.reshape(-1,1)),
#'Val' : getClassifierData(val_data, 'C6_Label', batch_size=BATCH_SIZE),
'Weights' : calcWeights(train_csv, 'C6_Label')},
'Test' : test_dataset
}
```
## Data Exploration
```
seed(8)
IMG_PATH = CSV_PATH + 'images/'
STEPS = train_csv.shape[0] // BATCH_SIZE
SHOW_IMAGES = False
NUM_PER_COL = 2
NUM_PER_ROW = 5
def showImages(df, num_per_col, num_per_row):
fig = plt.figure(figsize=(num_per_row*30, num_per_col*30))
for i in range(0, num_per_col * num_per_row):
plt.subplot(num_per_col, num_per_row, i+1)
image_id = df.iloc[randint(0, len(df)-1)]['image_id']
image = cv2.imread(IMG_PATH + image_id + '.jpg', cv2.IMREAD_UNCHANGED)
image = cv2.resize(image, (int(IMG_SIZE / 2), int(IMG_SIZE / 2)))
plt.imshow(image)
plt.axis('off')
plt.tight_layout()
plt.show()
healthy_plants = train_csv[train_csv['healthy'] == 1]
if SHOW_IMAGES:
showImages(healthy_plants, NUM_PER_COL, NUM_PER_ROW)
scab_plants = train_csv[train_csv['scab'] == 1]
if SHOW_IMAGES:
showImages(scab_plants, NUM_PER_COL, NUM_PER_ROW)
rust_plants = train_csv[train_csv['rust'] == 1]
if SHOW_IMAGES:
showImages(rust_plants, NUM_PER_COL, NUM_PER_ROW)
multiple_plants = train_csv[train_csv['multiple_diseases'] == 1]
if SHOW_IMAGES:
showImages(multiple_plants, NUM_PER_COL, NUM_PER_ROW)
if SHOW_GRAPHS:
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
labels = ['healthy', 'rust', 'scab', 'multiple_diseases']
counts = [train_csv['healthy'].value_counts()[1],
train_csv['rust'].value_counts()[1],
train_csv['scab'].value_counts()[1],
train_csv['multiple_diseases'].value_counts()[1]]
ax.bar(labels, counts)
plt.show()
if SHOW_GRAPHS:
fig = plt.figure(figsize=(15,35))
plt.subplot(621)
ax = sns.countplot(x='C1_Label', data=train_csv, order=[0, 1])
ax.set_title('C1 Train')
ax.set_xticklabels(['Healthy', 'Not Healthy'])
ax.set(xlabel='Disease')
plt.subplot(622)
ax = sns.countplot(x='C1_Label', data=train_csv, order=[0, 1])
ax.set_title('C1 Test')
ax.set_xticklabels(['Healthy', 'Not Healthy'])
ax.set(xlabel='Disease')
plt.subplot(623)
ax = sns.countplot(x='C2_Label', data=train_csv, order=[0, 1])
ax.set_title('C2 Train')
ax.set_xticklabels(['Rust', 'Not Rust'])
ax.set(xlabel='Disease')
plt.subplot(624)
ax = sns.countplot(x='C2_Label', data=train_csv, order=[0, 1])
ax.set_title('C2 Test')
ax.set_xticklabels(['Rust', 'Not Rust'])
ax.set(xlabel='Disease')
plt.subplot(625)
ax = sns.countplot(x='C3_Label', data=train_csv, order=[0, 1])
ax.set_title('C3 Train')
ax.set_xticklabels(['Scab', 'Not Scab'])
ax.set(xlabel='Disease')
plt.subplot(626)
ax = sns.countplot(x='C3_Label', data=train_csv, order=[0, 1])
ax.set_title('C3 Test')
ax.set_xticklabels(['Scab', 'Not Scab'])
ax.set(xlabel='Disease')
plt.subplot(627)
ax = sns.countplot(x='C4_Label', data=train_csv, order=[0, 1])
ax.set_title('C4 Train')
ax.set_xticklabels(['Multiple', 'Not Multiple'])
ax.set(xlabel='Disease')
plt.subplot(628)
ax = sns.countplot(x='C4_Label', data=train_csv, order=[0, 1])
ax.set_title('C4 Test')
ax.set_xticklabels(['Multiple', 'Not Multiple'])
ax.set(xlabel='Disease')
plt.subplot(629)
ax = sns.countplot(x='C5_Label', data=train_csv, order=[0, 1])
ax.set_title('C5 Train')
ax.set_xticklabels(['Rust or Multiple', 'Scab or None'])
ax.set(xlabel='Disease')
plt.subplot(6,2,10)
ax = sns.countplot(x='C5_Label', data=train_csv, order=[0, 1])
ax.set_title('C5 Test')
ax.set_xticklabels(['Rust or Multiple', 'Scab or None'])
ax.set(xlabel='Disease')
plt.subplot(6,2,11)
ax = sns.countplot(x='C6_Label', data=train_csv, order=[0, 1])
ax.set_title('C6 Train')
ax.set_xticklabels(['Scab or Multiple', 'Rust or None'])
ax.set(xlabel='Disease')
plt.subplot(6,2,12)
ax = sns.countplot(x='C6_Label', data=train_csv, order=[0, 1])
ax.set_title('C6 Test')
ax.set_xticklabels(['Scab or Multiple', 'Rust or None'])
ax.set(xlabel='Disease')
plt.show()
```
## Model
```
LR_START = 0.00001
LR_MAX = 0.0001 * strategy.num_replicas_in_sync
LR_MIN = 0.00001
LR_RAMPUP_EPOCHS = 15
LR_SUSTAIN_EPOCHS = 3
LR_EXP_DECAY = .8
def lrfn(epoch):
if epoch < LR_RAMPUP_EPOCHS:
lr = (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START
elif epoch < LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS:
lr = LR_MAX
else:
lr = (LR_MAX - LR_MIN) * LR_EXP_DECAY**(epoch - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS) + LR_MIN
return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=False)
if SHOW_GRAPHS:
rng = [i for i in range(EPOCHS)]
y = [lrfn(x) for x in rng]
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
# We have to disable the steps due to the way the training datasets are set up now
def get_model():
base_model = efn.EfficientNetB7(weights='imagenet', include_top=False,
pooling='avg', input_shape=(IMG_SIZE, IMG_SIZE, 3))
x = base_model.output
output = Dense(1, activation="sigmoid")(x)
return Model(inputs=base_model.input, outputs=output)
def fitModel(label, training_dict, lr_callback=lr_callback, epochs=EPOCHS, steps=STEPS, verbose=VERBOSE):
with strategy.scope():
model = get_model()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(
training_dict[label]['Train'],
#steps_per_epoch = steps,
callbacks = [lr_callback],
#class_weight = training_dict[label]['Weights'] # Class weights still aren't working
epochs = epochs,
verbose = verbose
)
return model
def getPredictions(label, training_dict, steps=STEPS, verbose=VERBOSE):
model = fitModel(label, training_dict)
train_predictions = model.predict(training_dict[label]['Train'], #steps = steps,
use_multiprocessing = True, verbose = verbose)
test_predictions = model.predict(training_dict['Test'], #steps = steps,
use_multiprocessing = True, verbose = verbose)
# Releases tpu memory from model so other models can run
tf.tpu.experimental.initialize_tpu_system(tpu)
return train_predictions, test_predictions
predictions = [getPredictions('C1', training_dict),
getPredictions('C2', training_dict),
getPredictions('C3', training_dict),
getPredictions('C4', training_dict),
getPredictions('C5', training_dict),
getPredictions('C6', training_dict)]
train_predictions, test_predictions = zip(*predictions)
train_predictions = np.hstack(train_predictions)
test_predictions = np.hstack(test_predictions)
inputs = Input(shape=(train_predictions.shape[1],))
dense1 = Dense(200, activation='relu')(inputs)
dropout = Dropout(0.3)(dense1)
dense2 = Dense(50, activation='relu')(dropout)
outputs = Dense(4, activation='softmax')(dense2)
model_final = Model(inputs=inputs, outputs=outputs)
model_final.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model_final.fit(
train_predictions,
train_csv[['healthy', 'multiple_diseases', 'rust', 'scab']].values,
epochs = EPOCHS
)
```
## Predictions
```
sub_predictions = model_final.predict(test_predictions)
sub_predictions = pd.DataFrame(data=sub_predictions, columns=['healthy', 'multiple_diseases', 'rust', 'scab'])
sub_predictions = pd.concat([test_csv, sub_predictions], axis=1)
sub_predictions
sub_predictions.to_csv('submission.csv', index=False)
```
# Conclusion
Using the split binary classification at the start seems to have helped, but the way the datasets have to be set up costs a lot of processing power. In the end our results are about the same as similar notebooks, so the benefit of the split classification was roughly cancelled out by the lost processing power; both effects are potentially negligible. Bear in mind this approach only makes sense in a multiclass classification problem where classes overlap.
## Resources
For this and for future reference
https://www.kaggle.com/ateplyuk/fork-of-plant-2020-tpu-915e9c <- A lot of things came from here
#### TPU Performance Optimizations
* https://www.kaggle.com/ateplyuk/fork-of-plant-2020-tpu-915e9c
* https://www.kaggle.com/mgornergoogle/five-flowers-with-keras-and-xception-on-tpu
* https://www.kaggle.com/docs/tpu
* https://codelabs.developers.google.com/codelabs/keras-flowers-data/#4
* https://www.tensorflow.org/guide/data_performance
* https://codelabs.developers.google.com/codelabs/keras-flowers-tpu/#3
#### Bayesian Optimization
* https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb
* https://github.com/fmfn/BayesianOptimization
#### NN Architecture
* https://towardsdatascience.com/guide-to-choosing-hyperparameters-for-your-neural-networks-38244e87dafe
#### Pretrained NNs
* https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_1_keras_transfer.ipynb
* https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_2_popular_transfer.ipynb
* https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_3_transfer_cv.ipynb
* https://github.com/qubvel/efficientnet
* https://cs231n.github.io/transfer-learning/
* https://towardsdatascience.com/transfer-learning-from-pre-trained-models-f2393f124751
#### Data Augmentation
* https://www.pyimagesearch.com/2019/07/08/keras-imagedatagenerator-and-data-augmentation/
#### TF Dataset Resources
* https://www.tensorflow.org/tutorials/load_data/csv
* https://www.tensorflow.org/guide/data_performance
**Slicing Tests**
The start of a slice is always inclusive, while the end is always exclusive
```
l = [20,30,10,40]
print(l[1:3])
```
The start and end of the interval are optional... it is also possible to specify a step parameter
```
print(l[::2])
```
With numpy:
```
import numpy as np
a = np.array(l)
a
print(a[1:3])
print(a[::2])
```
Copying the elements of a Python list
```
l2 = l[:]
l2[0] = 999
print(l)
print(l2)
```
With a numpy array this does not work the same way (only a reference to the original variable is created, for performance reasons)
```
a1=a[:]
a1[0]=999
print(a)
print(a1)
```
To make a copy of a numpy array you need to use copy
```
a1=a.copy()
a1[0]=809890
print(a)
print(a1)
```
<br>
**MATRICES**
Considering the m×n matrix (m = number of rows, n = number of columns) below:
(figure: example m×n matrix)
Representation in Python
```
mat = [[5,4,7],[0,3,4],[0,0,6]]
for line in mat:
print(line)
```
Accessing element (1,1), which is 3
```
print(mat[1][1])
```
Getting column 1 (4, 3, 0) using a Python list comprehension:
```
[linha[1] for linha in mat]
```
**Comprehensions explained:**<br>**"List comprehensions are used for creating new lists from other iterables. As list comprehensions return lists, they consist of brackets containing the expression, which is executed for each element along with the for loop to iterate over each element."**
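A couple of tiny, self-contained examples of that pattern:

```python
mat = [[5, 4, 7], [0, 3, 4], [0, 0, 6]]
col1 = [row[1] for row in mat]       # extract column 1 from the matrix
squares = [x * x for x in range(5)]  # build a new list from a range
print(col1)     # → [4, 3, 0]
print(squares)  # → [0, 1, 4, 9, 16]
```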
**Using NumPy**
```
import numpy as np
matNumpy=np.array(mat)
matNumpy
```
Accessing element (1,1), which is 3
```
matNumpy[1][1]
```
Repeating the comprehension to get column 1
```
[linha[1] for linha in matNumpy]
```
**Reshape with numpy:**
Suppose a vector with 20 elements:
```
vet = np.arange(0,20)
print(vet)
```
Let's turn it into a 5×4 matrix with numpy's reshape
```
matNumpy1= np.reshape(vet,(5,4))
print(matNumpy1)
```
<br>**Operations on matrices (numpy):**
Suppose two matrices m1 and m2:
```
m1 = np.array([[1,2,3],[4,5,6]])
m2 = np.array([[7,8,9],[10,11,12]])
print(m1)
print()
print(m2)
```
we have:
```
m2 / m1
```
with rounding
```
np.matrix.round(m2/m1)
m2*10
m2+5
m2*m1
m1 ** 2
```
**Visualizing data with Matplotlib**
```
import matplotlib.pyplot as plt
# without the line below, Jupyter does not render the plot inline
%matplotlib inline
arrayvalores = np.array([100,30,55,67.300,110,33,57,90,39])
plt.plot(arrayvalores)
```
Some customization options
```
plt.plot(arrayvalores,c="Red",ls="--",marker='s')
```
Plotting more than one set of values:
```
valoresa=np.array([10,25,60,29,33])
valoresb=np.array([30,465,0,329,3])
plt.plot(valoresa,c="Red",ls="--",marker='s',label="Valores A")
plt.plot(valoresb,c="Blue",marker='^',label="Valores B",ms=10)
#plt.legend() # by default places the legend in the upper-right corner
plt.legend(loc='upper left')
plt.show()
```
Inserting elements into an array
```
arraya=np.array([1,2,3,4,5,6])
np.insert(arraya,1,10)
```
In a multi-dimensional array
```
arrayb=np.array([[1,2],[3,4]])
print(arrayb)
#inserting without specifying an axis
print(np.insert(arrayb,1,10))
#inserting with the axis parameter (1 = column)
print(np.insert(arrayb,1,10,axis=1))
```
Appending to an array
```
arraya=np.array([1,2,3])
np.append(arraya,[4,5,6])
```
Appending to a multi-dimensional array
```
arrayb=np.array([[1,2],[3,4]])
print(arrayb)
#appending without an axis
print(np.append(arrayb,[[5,6]]))
#appending with the axis parameter (0 = row)
print(np.append(arrayb,[[5,6]],axis=0))
# when appending along the column axis (axis=1) the data must have a different shape
print(np.append(arrayb,[[5],[6]],axis=1))
```
Removing elements from an array
```
a = np.array([[1,2],[3,4],[5,6]])
print(a)
#deleting the elements of the second row
np.delete(a,1,axis=0)
a = np.array([[1,2],[3,4],[5,6]])
print(a)
#deleting the elements of the second column
np.delete(a,1,axis=1)
```
Removing elements using a slice
```
a = np.array([[1,2],[3,4],[5,6],[7,8]])
#removes the rows [1,2] and [5,6]
np.delete(a,np.s_[::2],axis=0)
```
Repeating elements of an array
```
a = np.array([[1,2],[3,4]])
#repeating the rows 2 times
np.repeat(a,2,axis=0)
#repeating the columns 2 times
np.repeat(a,2,axis=1)
#repeating without an axis
np.repeat(a,2)
```
Repeating arrays with tile
```
a = np.array([[1,2],[3,4]])
np.tile(a,2)
```
Splitting an array
```
a = np.array([[1,2,3],[4,5,6]])
#Splitting the array into 2 rows
np.array_split(a,2,axis=0)
#Splitting the array into 3 columns
np.array_split(a,3,axis=1)
```
Generating arrays of 0s or 1s
```
#generating a 3x3 matrix of zeros
np.zeros((3,3))
#generating a 3x3 matrix of ones
np.ones((3,3))
```
Creating an identity matrix
```
np.eye(4)
```
Boolean indexing
Using the following example as a base:
```
exemplo=np.array([[1,2,3],[4,5,6],[7,8,9]])
# returning the elements greater than four
maioresquequetro = exemplo[exemplo>4]
maioresquequetro
# returning a boolean mask of the elements greater than four
imaioresquequatro = (exemplo>4)
imaioresquequatro
```
<br>**Loading a text file with numpy**
```
col1,col2,col3 = np.loadtxt('res/dados.tsf',skiprows=1,unpack=True)
#Original file layout
#coluna1 coluna2 coluna3
#0002 4.5 30
#.9 3 44.2
#35 17.4 1.1
#33 2 11
#data loaded into the variables
print(col1)
print(col2)
print(col3)
```
When the file is not fully populated you can use genfromtxt
```
#original file layout
#coluna1 coluna2 coluna3
#0002 4.5 30
#.9 3 44.2
#35 MISSING MISSING
#33 2 11
arrayfromtext= np.genfromtxt('res/dados_incomplete.tsf',skip_header=1,filling_values=9999) # every missing value is replaced with 9999
print(arrayfromtext)
```
<br>Joining arrays with concatenate
```
arrayA = np.array([[1,2],[3,4]])
arrayB = np.array([[5,6]])
np.concatenate((arrayA,arrayB),axis=0)
# to append along the column axis (1), b must have the same number of rows as A, hence the transpose of arrayB (arrayB.T)
np.concatenate((arrayA,arrayB.T),axis=1)
```
<br>Shuffling a sequence of numbers
```
arraya=np.arange(20)
print(arraya)
np.random.shuffle(arraya)
print(arraya)
```
<br>Generating arrays with linspace (evenly spaced values)
```
#generating an array with 5 evenly spaced values between 2 and 3
np.linspace(2.0,3.0,5)
```
<br>Retrieving the unique elements of an array
```
arraytest = np.array([[1,2],[2,2],[3,3],[4,4]])
np.unique(arraytest)
```
<br>Reading CSV files with numpy
```
valores = np.genfromtxt('res/sample.csv',delimiter=';',skip_header=1)
print(valores)
```
# CIFAR10 Split Mobilenet Client Side
This code is the client part of the CIFAR10 split 2D-CNN model for multiple clients and a server.
```
users = 1 # number of clients
```
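Conceptually, split learning cuts the network in two: the client runs the early layers and ships the intermediate activations to the server, which runs the remaining layers. A framework-free sketch of that hand-off (the shapes, the doubling "layer", and the pickle transport are illustrative assumptions — the real notebook sends PyTorch tensors over the socket set up below):

```python
import pickle
import numpy as np

def client_forward(batch):
    # Stand-in for the client-side layers: any feature map computation would do
    activations = batch * 2.0
    return pickle.dumps(activations)  # serialized "smashed data" sent to the server

def server_forward(blob):
    activations = pickle.loads(blob)          # server resumes the forward pass
    logits = activations.sum(axis=(1, 2, 3))  # stand-in for the remaining layers
    return logits

batch = np.ones((4, 3, 32, 32))               # CIFAR10-shaped mini-batch
logits = server_forward(client_forward(batch))
print(logits.shape)  # → (4,): one output per sample in the batch
```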
## Import required packages
```
import os
import struct
import socket
import pickle
import time
import h5py
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
def getFreeDescription():
free = os.popen("free -h")
i = 0
while True:
i = i + 1
line = free.readline()
if i == 1:
return (line.split()[0:7])
def getFree():
free = os.popen("free -h")
i = 0
while True:
i = i + 1
line = free.readline()
if i == 2:
return (line.split()[0:7])
from gpiozero import CPUTemperature
def printPerformance():
cpu = CPUTemperature()
print("temperature: " + str(cpu.temperature))
description = getFreeDescription()
mem = getFree()
print(description[0] + " : " + mem[1])
print(description[1] + " : " + mem[2])
print(description[2] + " : " + mem[3])
print(description[3] + " : " + mem[4])
print(description[4] + " : " + mem[5])
print(description[5] + " : " + mem[6])
printPerformance()
root_path = '../../models/cifar10_data'
```
## SET CUDA
```
# device = "cuda:0" if torch.cuda.is_available() else "cpu"
device = "cpu"
torch.manual_seed(777)
if device =="cuda:0":
torch.cuda.manual_seed_all(777)
client_order = int(input("client_order(start from 0): "))
num_traindata = 50000 // users
```
## Data load
```
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616))])
from torch.utils.data import Subset
indices = list(range(50000))
part_tr = indices[num_traindata * client_order : num_traindata * (client_order + 1)]
trainset = torchvision.datasets.CIFAR10 (root=root_path, train=True, download=True, transform=transform)
trainset_sub = Subset(trainset, part_tr)
train_loader = torch.utils.data.DataLoader(trainset_sub, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10 (root=root_path, train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
```
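The per-client slice of the 50 000 training indices can be checked without downloading CIFAR10. This sketch mirrors the `num_traindata * client_order` arithmetic above in plain Python; `users = 2` is an illustrative value, not the notebook's setting:

```python
# Sketch of the index partitioning used above. `users` and the
# client_order arguments are illustrative, not the notebook's values.
users = 2
num_traindata = 50000 // users  # 25000 samples per client

def client_indices(client_order):
    """Return the contiguous slice of training indices owned by one client."""
    indices = list(range(50000))
    return indices[num_traindata * client_order : num_traindata * (client_order + 1)]

print(len(client_indices(0)))  # 25000
print(client_indices(1)[0])    # 25000 (first index owned by client 1)
```

Each client therefore trains on a disjoint, contiguous block of the dataset.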
### Size check
```
x_train, y_train = next(iter(train_loader))
print(x_train.size())
print(y_train.size())
```
### Total number of batches
```
total_batch = len(train_loader)
print(total_batch)
# -*- coding: utf-8 -*-
"""
Created on Thu Nov 1 14:23:31 2018
@author: tshzzz
"""
import torch
import torch.nn as nn
def conv_dw_client(inplane,stride=1):
return nn.Sequential(
nn.Conv2d(inplane,inplane,kernel_size = 3,groups = inplane,stride=stride,padding=1),
nn.BatchNorm2d(inplane),
nn.ReLU()
)
def conv_bw(inplane,outplane,kernel_size = 3,stride=1):
return nn.Sequential(
nn.Conv2d(inplane,outplane,kernel_size = kernel_size,groups = 1,stride=stride,padding=1),
nn.BatchNorm2d(outplane),
nn.ReLU()
)
class MobileNet(nn.Module):
def __init__(self,num_class=10):
super(MobileNet,self).__init__()
layers = []
layers.append(conv_bw(3,32,3,1))
layers.append(conv_dw_client(32,1))
# layers.append(conv_dw(64,128,2))
# layers.append(conv_dw(128,128,1))
# layers.append(conv_dw(128,256,2))
# layers.append(conv_dw(256,256,1))
# layers.append(conv_dw(256,512,2))
# for i in range(5):
# layers.append(conv_dw(512,512,1))
# layers.append(conv_dw(512,1024,2))
# layers.append(conv_dw(1024,1024,1))
# self.classifer = nn.Sequential(
# nn.Dropout(0.5),
# nn.Linear(1024,num_class)
# )
self.feature = nn.Sequential(*layers)
def forward(self,x):
out = self.feature(x)
# out = out.mean(3).mean(2)
# out = out.view(-1,1024)
# out = self.classifer(out)
return out
mobilenet_client = MobileNet().to(device)
print(mobilenet_client)
# from torchsummary import summary
# summary(mobilenet_client, (3, 32, 32))
```
### Set other hyperparameters in the model
Hyperparameters here should be the same as on the server side.
```
epoch = 20 # default
lr = 0.001
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(mobilenet_client.parameters(), lr=lr, momentum=0.9)
```
## Socket initialization
### Required socket functions
```
def send_msg(sock, msg):
# prefix each message with a 4-byte length in network byte order
msg = pickle.dumps(msg)
msg = struct.pack('>I', len(msg)) + msg
sock.sendall(msg)
def recv_msg(sock):
# read message length and unpack it into an integer
raw_msglen = recvall(sock, 4)
if not raw_msglen:
return None
msglen = struct.unpack('>I', raw_msglen)[0]
# read the message data
msg = recvall(sock, msglen)
msg = pickle.loads(msg)
return msg
def recvall(sock, n):
# helper function to receive n bytes or return None if EOF is hit
data = b''
while len(data) < n:
packet = sock.recv(n - len(data))
if not packet:
return None
data += packet
return data
printPerformance()
```
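The 4-byte length prefix written by `send_msg` can be exercised without a live socket. This standalone sketch (not part of the original notebook) frames and unframes a message from a plain byte buffer using the same `struct`/`pickle` convention:

```python
import pickle
import struct

def frame(obj):
    # prefix the pickled payload with its length as a big-endian uint32,
    # exactly as send_msg does before sock.sendall
    payload = pickle.dumps(obj)
    return struct.pack('>I', len(payload)) + payload

def unframe(buf):
    # read the 4-byte length header, then decode that many payload bytes,
    # as recv_msg does with recvall
    (msglen,) = struct.unpack('>I', buf[:4])
    return pickle.loads(buf[4:4 + msglen])

wire = frame({'client_output': [1.0, 2.0], 'label': 3})
print(unframe(wire))  # the original dict round-trips
```

The length prefix is what lets `recvall` know exactly how many bytes belong to one message on a stream socket.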
### Set host address and port number
```
host = input("IP address: ")
port = 10080
```
## SET TIMER
```
start_time = time.time() # store start time
print("timer start!")
```
### Open the client socket
```
s = socket.socket()
s.connect((host, port))
epoch = recv_msg(s) # get epoch
msg = total_batch
send_msg(s, msg) # send total_batch of train dataset
```
## Real training process
```
for e in range(epoch):
client_weights = recv_msg(s)
mobilenet_client.load_state_dict(client_weights)
mobilenet_client.eval()
for i, data in enumerate(tqdm(train_loader, ncols=100, desc='Epoch '+str(e+1))):
x, label = data
x = x.to(device)
label = label.to(device)
optimizer.zero_grad()
output = mobilenet_client(x)
client_output = output.clone().detach().requires_grad_(True)
msg = {
'client_output': client_output,
'label': label
}
send_msg(s, msg)
client_grad = recv_msg(s)
output.backward(client_grad)
optimizer.step()
send_msg(s, mobilenet_client.state_dict())
printPerformance()
end_time = time.time() #store end time
print("Working time of {}: {} sec".format(device, end_time - start_time))
```
## Introduction
* Python is a dynamic, interpreted (bytecode-compiled) language.
* There are no type declarations of variables, parameters, functions, or methods in source code
```
a_number = 12
a_string = 'hello'
a_pi = 3.14
# shows how to print a string message template
print(a_number,a_string,a_pi,sep="\n") #prints variables separated by a new line
```
Let's try to mix types and see what happens
```
a_number + a_string
```
Since we do not declare types, the runtime checks types for us and does not allow invalid operations. `TypeError` is a kind of exception; we will come back to it when we learn about exception handling.
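As a quick preview of exception handling, the `TypeError` raised above can be caught with `try`/`except` (covered in detail later):

```python
a_number = 12
a_string = 'hello'
try:
    a_number + a_string  # invalid: int + str
except TypeError as err:
    # execution jumps here instead of crashing the program
    print('Caught a TypeError:', err)
```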
## Python source code
* Python source files use the ".py" extension and are called "modules."
* With a Python module hello.py, the easiest way to run it is with the shell command `"python hello.py Alice"` which calls the Python interpreter to execute the code in hello.py, passing it the command line argument "Alice"
* Here we will use an IPython notebook to learn the basics of Python. IPython notebooks provide an easy way to play with the Python interpreter. Later we will create Python files in exercises.
## Imports
* To import a python module we use the `import` statement
* We will import the `os` builtin module and print the current working directory
* [Python documentation](https://docs.python.org/3/) provides the language reference
* Reference to [OS module](https://docs.python.org/3/library/os.html?highlight=os#module-os)
```
import os
curr_working_dir = os.getcwd()
print(curr_working_dir)
```
* The `from` keyword can be used to import a specific item within a module
* An item can be a variable, a sub-module, or a function
```
from sys import version
print(version)
```
#### We're using Python 3 to learn Python. You don't have to worry about `os` and `sys` right now.
## User defined functions
Let's write a simple function to greet the user
```
def greet_user(user):
print("Hello {}. Welcome to the Python world !!!".format(user))
greet_user('John Doe')
```
#### The greet_user function accepts a user name as input and prints a message.
#### The format method, used inside the print call, replaces each {} placeholder with its corresponding argument.
#### That's why you see John Doe in place of {} in the output string.
## Indentation
* Whitespace indentation of a piece of code affects its meaning.
* A logical block of statements such as the ones that make up a function should all have the same indentation.
* Set your editor to insert spaces instead of TABs for Python code.
* A common question beginners ask is, "How many spaces should I indent?" According to the official Python style guide (PEP 8), you should indent with 4 spaces. (Fun fact: Google's internal style guideline dictates indenting by 2 spaces!)
## Python Strings
* Immutable: they cannot be changed after they are created (Java strings are also immutable). Since strings can't be changed, we construct *new* strings as we go to represent computed values
* The len(string) function returns the length of a string.
* Characters in a string can be accessed using the standard [ ] syntax, and like Java and C++, Python uses zero-based indexing, so if s is 'hello' s[1] is 'e'.
```
s = 'hi'
print(s[1]) ## i
print(len(s)) ## len() finds the length of anything that is countable; here, the length of the string
print(s + ' there') ## hi there
pi = 3.14
#text = 'The value of pi is ' + pi ## you cannot concatenate a string and a float
text = 'The value of pi is ' + str(pi) ## you can do it by converting float to string by using str()
text
```
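Since Python 3.6 the same string can be built with an f-string, which handles the float-to-string conversion automatically (shown here as an alternative, not used in the original cell):

```python
pi = 3.14
# the expression inside {} is evaluated and converted for us
text = f'The value of pi is {pi}'
print(text)  # The value of pi is 3.14
```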
## String methods
https://docs.python.org/3/library/stdtypes.html#string-methods
```
text.lower()
text.upper()
text.isalpha()
text.isnumeric()
text.startswith('T')
'pi' in text # to check if text contains a substring
text.find('pizza')
csv = "abc,ced,def,hij"
csv.split(",")
```
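The inverse of `split` is `join`, which stitches a list of strings back together with a separator (an extra example, not in the original cell):

```python
csv = "abc,ced,def,hij"
fields = csv.split(",")      # break the string on each comma
print(fields)                # ['abc', 'ced', 'def', 'hij']
rejoined = ",".join(fields)  # put the pieces back together
print(rejoined == csv)       # True
```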
## If Statement
```
if 'pi' in text.lower():
print('Text contains the substring \'pi\'')
pi_index = text.find('pi')
print(pi_index)
if 4 < 3:
pass # pass is a command to do nothing
elif 'pi' in text.lower():
print('Text contains the substring \'pi\'')
else:
pass
```
## You can learn more from:
#### [Socratica](https://www.youtube.com/playlist?list=PLi01XoE8jYohWFPpC17Z-wWhPOSuh8Er-)
TSG061 - Get tail of all container logs for pods in BDC namespace
=================================================================
Steps
-----
### Parameters
```
since_seconds = 60 * 60 * 1 # the last hour
coalesce_duplicates = True
```
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
# Install the Kubernetes module
import sys
!{sys.executable} -m pip install kubernetes
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get logs for all containers in Big Data Cluster namespace
```
pod_list = api.list_namespaced_pod(namespace)
pod_names = [pod.metadata.name for pod in pod_list.items]
print('Scanning pods: ' + ', '.join(pod_names))
for pod in pod_list.items:
print("*** %s\t%s\t%s" % (pod.metadata.name,
pod.status.phase,
pod.status.pod_ip))
container_names = [container.name for container in pod.spec.containers]
for container in container_names:
print (f"POD: {pod.metadata.name} / CONTAINER: {container}")
try:
logs = api.read_namespaced_pod_log(pod.metadata.name, namespace, container=container, since_seconds=since_seconds)
if coalesce_duplicates:
previous_line = ""
duplicates = 1
for line in logs.split('\n'):
if line[27:] != previous_line[27:]:
if duplicates != 1:
print(f"\t{previous_line} (x{duplicates})")
print(f"\t{line}")
duplicates = 1
else:
duplicates = duplicates + 1
previous_line = line
else:
print(logs)
except Exception:
print (f"Failed to get LOGS for CONTAINER: {container} in POD: {pod.metadata.name}")
print("Notebook execution is complete.")
```
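The coalescing loop above compares log lines after a 27-character prefix (the notebook's assumption about the fixed-width timestamp format) and counts consecutive duplicates. A standalone sketch of the same idea, with a trailing-run flush added at the end (which the notebook's loop omits, so a final run of duplicates would otherwise lose its count):

```python
def coalesce(lines, prefix_len=27):
    """Collapse consecutive lines that match after the timestamp prefix."""
    out = []
    previous_line = ""
    duplicates = 1
    for line in lines:
        if line[prefix_len:] != previous_line[prefix_len:]:
            if duplicates != 1:
                # emit the run we just finished, with its repeat count
                out.append(f"{previous_line} (x{duplicates})")
            out.append(line)
            duplicates = 1
        else:
            duplicates += 1
        previous_line = line
    # flush a trailing run of duplicates (added here; not in the notebook)
    if duplicates != 1:
        out.append(f"{previous_line} (x{duplicates})")
    return out

logs = [
    "2023-01-01T00:00:00.000000Z ready",
    "2023-01-01T00:00:01.000000Z retrying",
    "2023-01-01T00:00:02.000000Z retrying",
    "2023-01-01T00:00:03.000000Z retrying",
]
for line in coalesce(logs):
    print(line)
```

With the sample input, the three `retrying` lines collapse into the first occurrence plus one `(x3)` summary line.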
Related
-------
- [TSG062 - Get tail of all previous container logs for pods in BDC namespace](../log-files/tsg062-tail-bdc-previous-container-logs.ipynb)
```
#@title
from google.colab import drive
drive.mount('/content/drive')
#@title
!cp -r '/content/drive/My Drive/Colab Notebooks/Melanoma/Scripts/.' .
COLAB_BASE_PATH = '/content/drive/My Drive/Colab Notebooks/Melanoma/'
MODEL_NAME = '65-efficientnetb0'
MODEL_BASE_PATH = f'{COLAB_BASE_PATH}Models/Files/{MODEL_NAME}/'
SUBMISSION_BASE_PATH = f'{COLAB_BASE_PATH}Submissions/'
SUBMISSION_PATH = SUBMISSION_BASE_PATH + f'{MODEL_NAME}.csv'
SUBMISSION_LAST_PATH = SUBMISSION_BASE_PATH + f'{MODEL_NAME}_last.csv'
SUBMISSION_BLEND_PATH = SUBMISSION_BASE_PATH + f'{MODEL_NAME}_blend.csv'
SUBMISSION_TTA_PATH = SUBMISSION_BASE_PATH + f'{MODEL_NAME}_tta.csv'
SUBMISSION_TTA_LAST_PATH = SUBMISSION_BASE_PATH + f'{MODEL_NAME}_tta_last.csv'
SUBMISSION_TTA_BLEND_PATH = SUBMISSION_BASE_PATH + f'{MODEL_NAME}_tta_blend.csv'
import os
os.mkdir(MODEL_BASE_PATH)
```
## Dependencies
```
#@title
!pip install --quiet efficientnet
# !pip install --quiet image-classifiers
#@title
import warnings, json, re, glob, math
from scripts_step_lr_schedulers import *
from melanoma_utility_scripts import *
from sklearn.model_selection import KFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import optimizers, layers, metrics, losses, Model
import efficientnet.tfkeras as efn
# from classification_models.tfkeras import Classifiers
import tensorflow_addons as tfa
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
## TPU configuration
```
#@title
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Model parameters
```
#@title
config = {
"HEIGHT": 256,
"WIDTH": 256,
"CHANNELS": 3,
"BATCH_SIZE": 128,
"EPOCHS": 12,
"LEARNING_RATE": 3e-4,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"TTA_STEPS": 25,
"BASE_MODEL": 'EfficientNetB0',
"BASE_MODEL_WEIGHTS": 'noisy-student',
"DATASET_PATH": 'melanoma-256x256'
}
with open(MODEL_BASE_PATH + 'config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
```
# Load data
```
#@title
database_base_path = COLAB_BASE_PATH + 'Data/'
k_fold = pd.read_csv(database_base_path + 'train.csv')
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print(f'Test samples: {len(test)}')
display(test.head())
GCS_PATH = 'gs://kds-00a03a913a177ffd710f19e39a9e65d51860d02557f57cc1d1d8e589'
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train*.tfrec')
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test*.tfrec')
```
# Augmentations
```
#@title
def data_augment(image, label):
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_spatial2 = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_rotate = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_pixel = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
### Spatial-level transforms
if p_spatial >= .2: # flips
image['input_image'] = tf.image.random_flip_left_right(image['input_image'])
image['input_image'] = tf.image.random_flip_up_down(image['input_image'])
if p_spatial >= .7:
image['input_image'] = tf.image.transpose(image['input_image'])
if p_rotate >= .8: # rotate 270º
image['input_image'] = tf.image.rot90(image['input_image'], k=3)
elif p_rotate >= .6: # rotate 180º
image['input_image'] = tf.image.rot90(image['input_image'], k=2)
elif p_rotate >= .4: # rotate 90º
image['input_image'] = tf.image.rot90(image['input_image'], k=1)
if p_spatial2 >= .6:
if p_spatial2 >= .9:
image['input_image'] = transform_rotation(image['input_image'], config['HEIGHT'], 180.)
elif p_spatial2 >= .8:
image['input_image'] = transform_zoom(image['input_image'], config['HEIGHT'], 8., 8.)
elif p_spatial2 >= .7:
image['input_image'] = transform_shift(image['input_image'], config['HEIGHT'], 8., 8.)
else:
image['input_image'] = transform_shear(image['input_image'], config['HEIGHT'], 2.)
if p_crop >= .6: # crops
if p_crop >= .8:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.8), int(config['WIDTH']*.8), config['CHANNELS']])
elif p_crop >= .7:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.9), int(config['WIDTH']*.9), config['CHANNELS']])
else:
image['input_image'] = tf.image.central_crop(image['input_image'], central_fraction=.8)
image['input_image'] = tf.image.resize(image['input_image'], size=[config['HEIGHT'], config['WIDTH']])
if p_pixel >= .6: # Pixel-level transforms
if p_pixel >= .9:
image['input_image'] = tf.image.random_hue(image['input_image'], 0.01)
elif p_pixel >= .8:
image['input_image'] = tf.image.random_saturation(image['input_image'], 0.7, 1.3)
elif p_pixel >= .7:
image['input_image'] = tf.image.random_contrast(image['input_image'], 0.8, 1.2)
else:
image['input_image'] = tf.image.random_brightness(image['input_image'], 0.1)
return image, label
```
## Auxiliary functions
```
#@title
# Datasets utility functions
def read_labeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label # returns a dataset of (image, data, label)
def read_labeled_tfrecord_eval(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label, image_name # returns a dataset of (image, data, label, image_name)
def load_dataset(filenames, ordered=False, buffer_size=-1):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(read_labeled_tfrecord, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label)
def load_dataset_eval(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_labeled_tfrecord_eval, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label, image_name)
def get_training_dataset(filenames, batch_size, buffer_size=-1):
dataset = load_dataset(filenames, ordered=False, buffer_size=buffer_size)
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.repeat() # the training dataset must repeat for several epochs
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=True) # slighly faster with fixed tensor sizes
dataset = dataset.prefetch(buffer_size) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_validation_dataset(filenames, ordered=True, repeated=False, batch_size=32, buffer_size=-1):
dataset = load_dataset(filenames, ordered=ordered, buffer_size=buffer_size)
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=repeated)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_eval_dataset(filenames, batch_size=32, buffer_size=-1):
dataset = load_dataset_eval(filenames, buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Test function
def read_unlabeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_tabular': data}, image_name # returns a dataset of (image, data, image_name)
def load_dataset_test(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_unlabeled_tfrecord, num_parallel_calls=buffer_size)
# returns a dataset of (image, data, label, image_name) pairs if labeled=True or (image, data, image_name) pairs if labeled=False
return dataset
def get_test_dataset(filenames, batch_size=32, buffer_size=-1, tta=False):
dataset = load_dataset_test(filenames, buffer_size=buffer_size)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Advanced augmentations
def transform_rotation(image, height, rotation):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly rotated
DIM = height
XDIM = DIM%2 #fix for size 331
rotation = rotation * tf.random.normal([1],dtype='float32')
# CONVERT DEGREES TO RADIANS
rotation = math.pi * rotation / 180.
# ROTATION MATRIX
c1 = tf.math.cos(rotation)
s1 = tf.math.sin(rotation)
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
rotation_matrix = tf.reshape( tf.concat([c1,s1,zero, -s1,c1,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(rotation_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_shear(image, height, shear):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly sheared
DIM = height
XDIM = DIM%2 #fix for size 331
shear = shear * tf.random.normal([1],dtype='float32')
shear = math.pi * shear / 180.
# SHEAR MATRIX
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
c2 = tf.math.cos(shear)
s2 = tf.math.sin(shear)
shear_matrix = tf.reshape( tf.concat([one,s2,zero, zero,c2,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(shear_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_shift(image, height, h_shift, w_shift):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly shifted
DIM = height
XDIM = DIM%2 #fix for size 331
height_shift = h_shift * tf.random.normal([1],dtype='float32')
width_shift = w_shift * tf.random.normal([1],dtype='float32')
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
# SHIFT MATRIX
shift_matrix = tf.reshape( tf.concat([one,zero,height_shift, zero,one,width_shift, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(shift_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_zoom(image, height, h_zoom, w_zoom):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly zoomed
DIM = height
XDIM = DIM%2 #fix for size 331
height_zoom = 1.0 + tf.random.normal([1],dtype='float32')/h_zoom
width_zoom = 1.0 + tf.random.normal([1],dtype='float32')/w_zoom
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
# ZOOM MATRIX
zoom_matrix = tf.reshape( tf.concat([one/height_zoom,zero,zero, zero,one/width_zoom,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(zoom_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
```
## Learning rate scheduler
```
#@title
lr_min = 1e-6
# lr_start = 5e-6
lr_max = config['LEARNING_RATE']
steps_per_epoch = 24844 // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * steps_per_epoch
warmup_steps = steps_per_epoch * 5
# hold_max_steps = 0
# step_decay = .8
# step_size = steps_per_epoch * 1
# rng = [i for i in range(0, total_steps, 32)]
# y = [step_schedule_with_warmup(tf.cast(x, tf.float32), step_size=step_size,
# warmup_steps=warmup_steps, hold_max_steps=hold_max_steps,
# lr_start=lr_start, lr_max=lr_max, step_decay=step_decay) for x in rng]
# sns.set(style="whitegrid")
# fig, ax = plt.subplots(figsize=(20, 6))
# plt.plot(rng, y)
# print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
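The step counts computed above drive the RectifiedAdam warmup in the training loop. A quick sanity check of the arithmetic, using the values from the config (24 844 training samples, batch size 128, 12 epochs, 5 warmup epochs):

```python
# Sanity check of the step arithmetic feeding the optimizer's warmup.
batch_size = 128
epochs = 12
steps_per_epoch = 24844 // batch_size           # integer division: 194 steps
total_steps = epochs * steps_per_epoch          # 2328 steps overall
warmup_steps = steps_per_epoch * 5              # 970 steps of warmup
warmup_proportion = warmup_steps / total_steps  # fraction passed to RectifiedAdam
print(steps_per_epoch, total_steps, round(warmup_proportion, 3))  # 194 2328 0.417
```

So roughly the first 5 of 12 epochs are spent ramping the learning rate up to `lr_max`.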
# Model
```
#@title
# Initial bias
pos = len(k_fold[k_fold['target'] == 1])
neg = len(k_fold[k_fold['target'] == 0])
initial_bias = np.log([pos/neg])
print('Bias')
print(pos)
print(neg)
print(initial_bias)
# class weights
total = len(k_fold)
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Class weight')
print(class_weight)
def model_fn(input_shape):
input_image = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB0(weights=config['BASE_MODEL_WEIGHTS'],
include_top=False)
x = base_model(input_image)
x = L.GlobalAveragePooling2D()(x)
output = L.Dense(1, activation='sigmoid', name='output',
bias_initializer=tf.keras.initializers.Constant(initial_bias))(x)
model = Model(inputs=input_image, outputs=output)
return model
```
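The initial-bias and class-weight formulas above can be illustrated with toy counts (pos=100, neg=900 are assumed values, not the competition data). Setting the output bias to the log-odds makes a freshly initialized sigmoid predict the base rate, and the weights rescale each class by the inverse of its frequency:

```python
import math

# Toy counts, not the real dataset.
pos, neg = 100, 900
total = pos + neg

initial_bias = math.log(pos / neg)          # log-odds of the positive class
weight_for_0 = (1 / neg) * total / 2.0      # down-weight the majority class
weight_for_1 = (1 / pos) * total / 2.0      # up-weight the minority class

# With this bias, an untrained sigmoid output starts at the base rate pos/total.
base_rate = 1 / (1 + math.exp(-initial_bias))
print(round(base_rate, 2), {0: round(weight_for_0, 3), 1: round(weight_for_1, 3)})
```

Here the positive class (1 in 10 samples) gets weight 5.0 while the negative class gets about 0.556, so both classes contribute equally to the loss on average.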
# Training
```
# Evaluation
eval_dataset = get_eval_dataset(TRAINING_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
image_names = next(iter(eval_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(TRAINING_FILENAMES)))).numpy().astype('U')
image_data = eval_dataset.map(lambda data, label, image_name: data)
# Resample dataframe
k_fold = k_fold[k_fold['image_name'].isin(image_names)]
# Test
NUM_TEST_IMAGES = len(test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
test_preds_last = np.zeros((NUM_TEST_IMAGES, 1))
test_dataset = get_test_dataset(TEST_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, tta=True)
image_names_test = next(iter(test_dataset.unbatch().map(lambda data, image_name: image_name).batch(NUM_TEST_IMAGES))).numpy().astype('U')
test_image_data = test_dataset.map(lambda data, image_name: data)
history_list = []
k_fold_best = k_fold.copy()
kfold = KFold(config['N_FOLDS'], shuffle=True, random_state=SEED)
for n_fold, (trn_idx, val_idx) in enumerate(kfold.split(TRAINING_FILENAMES)):
if n_fold < config['N_USED_FOLDS']:
n_fold +=1
print('\nFOLD: %d' % (n_fold))
tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
### Data
train_filenames = np.array(TRAINING_FILENAMES)[trn_idx]
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
steps_per_epoch = count_data_items(train_filenames) // config['BATCH_SIZE']
# Train model
model_path = f'model_fold_{n_fold}.h5'
es = EarlyStopping(monitor='val_auc', mode='max', patience=config['ES_PATIENCE'],
restore_best_weights=False, verbose=1)
checkpoint = ModelCheckpoint((MODEL_BASE_PATH + model_path), monitor='val_auc', mode='max',
save_best_only=True, save_weights_only=True)
with strategy.scope():
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
optimizer = tfa.optimizers.RectifiedAdam(lr=lr_max,
total_steps=total_steps,
warmup_proportion=(warmup_steps / total_steps),
min_lr=lr_min)
model.compile(optimizer, loss=losses.BinaryCrossentropy(label_smoothing=0.2),
metrics=[metrics.AUC()])
history = model.fit(get_training_dataset(train_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
validation_data=get_validation_dataset(valid_filenames, ordered=True, repeated=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
epochs=config['EPOCHS'],
steps_per_epoch=steps_per_epoch ,
callbacks=[checkpoint, es],
class_weight=class_weight,
verbose=2).history
# save last epoch weights
model.save_weights((MODEL_BASE_PATH + 'last_' + model_path))
history_list.append(history)
# Get validation IDs
valid_dataset = get_eval_dataset(valid_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
valid_image_names = next(iter(valid_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(valid_filenames)))).numpy().astype('U')
k_fold[f'fold_{n_fold}'] = k_fold.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
k_fold_best[f'fold_{n_fold}'] = k_fold_best.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
##### Last model #####
print('Last model evaluation...')
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold[f'pred_fold_{n_fold}'] = k_fold.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print(f'Last model inference (TTA {config["TTA_STEPS"]} steps)...')
for step in range(config['TTA_STEPS']):
test_preds_last += model.predict(test_image_data)
##### Best model #####
print('Best model evaluation...')
model.load_weights(MODEL_BASE_PATH + model_path)
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold_best[f'pred_fold_{n_fold}'] = k_fold_best.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print(f'Best model inference (TTA {config["TTA_STEPS"]} steps)...')
for step in range(config['TTA_STEPS']):
test_preds += model.predict(test_image_data)
# normalize preds
test_preds /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
test_preds_last /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
name_preds = dict(zip(image_names_test, test_preds.reshape(NUM_TEST_IMAGES)))
name_preds_last = dict(zip(image_names_test, test_preds_last.reshape(NUM_TEST_IMAGES)))
test['target'] = test.apply(lambda x: name_preds[x['image_name']], axis=1)
test['target_last'] = test.apply(lambda x: name_preds_last[x['image_name']], axis=1)
```
## Model loss graph
```
#@title
for n_fold in range(config['N_USED_FOLDS']):
print(f'Fold: {n_fold + 1}')
plot_metrics(history_list[n_fold])
```
## Model loss graph aggregated
```
#@title
plot_metrics_agg(history_list, config['N_USED_FOLDS'])
```
# Model evaluation (best)
```
#@title
display(evaluate_model(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
```
# Model evaluation (last)
```
#@title
display(evaluate_model(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
```
# Confusion matrix
```
#@title
for n_fold in range(config['N_USED_FOLDS']):
n_fold += 1
pred_col = f'pred_fold_{n_fold}'
train_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'train']
valid_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'validation']
print(f'Fold: {n_fold}')
plot_confusion_matrix(train_set['target'], np.round(train_set[pred_col]),
valid_set['target'], np.round(valid_set[pred_col]))
```
# Visualize predictions
```
#@title
k_fold['pred'] = 0
for n_fold in range(config['N_USED_FOLDS']):
k_fold['pred'] += k_fold[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']
print('Label/prediction distribution')
print(f"Train positive labels: {len(k_fold[k_fold['target'] > .5])}")
print(f"Train positive predictions: {len(k_fold[k_fold['pred'] > .5])}")
print(f"Train positive correct predictions: {len(k_fold[(k_fold['target'] > .5) & (k_fold['pred'] > .5)])}")
print('Top 10 samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
```
# Visualize test predictions
```
#@title
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print(f"Test predictions (last) {len(test[test['target_last'] > .5])}|{len(test[test['target_last'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last']
+ [c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last']
+ [c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
print('Top 10 positive samples (last)')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last']
+ [c for c in test.columns if (c.startswith('pred_fold'))]].query('target_last > .5').head(10))
```
# Test set predictions
```
#@title
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
submission['target_last'] = test['target_last']
submission['target_blend'] = (test['target'] * .5) + (test['target_last'] * .5)
display(submission.head(10))
display(submission.describe())
### BEST ###
submission[['image_name', 'target']].to_csv(SUBMISSION_PATH, index=False)
### LAST ###
submission_last = submission[['image_name', 'target_last']]
submission_last.columns = ['image_name', 'target']
submission_last.to_csv(SUBMISSION_LAST_PATH, index=False)
### BLEND ###
submission_blend = submission[['image_name', 'target_blend']]
submission_blend.columns = ['image_name', 'target']
submission_blend.to_csv(SUBMISSION_BLEND_PATH, index=False)
```
# Tab Transformer
The TabTransformer (introduced in [TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf)) is built upon self-attention based Transformers. The Transformer layers transform the embeddings of categorical features into robust contextual embeddings to achieve higher predictive accuracy.
The TabTransformer architecture works as follows:
* All the categorical features are encoded as embeddings, using the same embedding_dims. This means that each value in each categorical feature will have its own embedding vector.
* A column embedding, one embedding vector for each categorical feature, is added (point-wise) to the categorical feature embedding.
* The embedded categorical features are fed into a stack of Transformer blocks. Each Transformer block consists of a multi-head self-attention layer followed by a feed-forward layer.
* The outputs of the final Transformer layer, which are the contextual embeddings of the categorical features, are concatenated with the input numerical features, and fed into a final MLP block.
* A softmax classifier is applied at the end of the model.
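As a toy illustration of the contextual-embedding step (this is not the notebook's code), a single self-attention pass in plain NumPy turns per-feature embeddings into rows that mix information from all categorical features; for clarity the query/key/value projections are taken as identity here:

```python
import numpy as np

# Per-feature embeddings for one sample: 3 categorical features, 4 dims each.
rng = np.random.default_rng(0)
n_cat, d = 3, 4
x = rng.normal(size=(n_cat, d))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Scaled dot-product self-attention (single head, identity projections).
weights = softmax(x @ x.T / np.sqrt(d))  # (n_cat, n_cat), each row sums to 1
contextual = weights @ x                 # each output row mixes all features
print(contextual.shape)                  # (3, 4)
```

In the real model each feature's query/key/value vectors come from learned projections and several heads run in parallel, but the mixing step is the same.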
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers as L
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold
import joblib
```
# Data
```
data = pd.read_csv('../input/song-popularity-prediction/train.csv')
print(data.shape)
data.head()
test = pd.read_csv('../input/song-popularity-prediction/test.csv')
X_test = test.drop(['id'], axis=1)
X = data.drop(['id', 'song_popularity'], axis=1)
y = data['song_popularity']
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
```
# Model
```
class TabularNetworkConfig():
def __init__(
self,
target_feature_name,
target_feature_labels,
numeric_feature_names,
categorical_features_with_vocabulary,
num_outputs,
out_activation,
num_transformer_blocks,
num_heads,
embedding_dim,
mlp_hidden_units_factors,
dropout_rate,
use_column_embedding,
):
self.TARGET_FEATURE_NAME = target_feature_name
self.TARGET_FEATURE_LABELS = target_feature_labels
self.NUMERIC_FEATURE_NAMES = numeric_feature_names
self.CATEGORICAL_FEATURES_WITH_VOCABULARY = categorical_features_with_vocabulary
self.CATEGORICAL_FEATURE_NAMES = list(self.CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
self.FEATURE_NAMES = self.NUMERIC_FEATURE_NAMES + self.CATEGORICAL_FEATURE_NAMES
self.NUM_OUT = num_outputs
self.OUT_ACTIVATION = out_activation
self.NUM_TRANSFORMER_BLOCKS = num_transformer_blocks
self.NUM_HEADS = num_heads
self.EMBEDDING_DIM = embedding_dim
self.MLP_HIDDEN_UNITS_FACTORS = mlp_hidden_units_factors
self.DROPOUT_RATE = dropout_rate
self.USE_COLUMN_EMBEDDING = use_column_embedding
class BaseTabularNetwork():
@staticmethod
def get_inputs(config):
return {
feature_name: L.Input(
name=feature_name,
shape=(),
dtype=(tf.float32 if feature_name in config.NUMERIC_FEATURE_NAMES else tf.string),
)
for feature_name in config.FEATURE_NAMES
}
@staticmethod
def encode_inputs(inputs, config, prefix=''):
cat_features = []
num_features = []
for feature_name in inputs:
if feature_name in config.CATEGORICAL_FEATURE_NAMES:
vocabulary = config.CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
lookup = L.StringLookup(
vocabulary=vocabulary,
mask_token=None,
num_oov_indices=0,
output_mode="int",
name=f"{prefix}{feature_name}_lookup"
)
encoded_feature = lookup(inputs[feature_name])
embedding = L.Embedding(
input_dim=len(vocabulary), output_dim=config.EMBEDDING_DIM,
name=f"{prefix}{feature_name}_embeddings"
)
encoded_feature = embedding(encoded_feature)
cat_features.append(encoded_feature)
else:
encoded_feature = L.Reshape((1, ), name=f"{prefix}{feature_name}_reshape")(inputs[feature_name])
num_features.append(encoded_feature)
return cat_features, num_features
def create_mlp(hidden_units, dropout_rate, activation, normalization_layer, name=None):
mlp_layers = []
for units in hidden_units:
mlp_layers.append(normalization_layer),
mlp_layers.append(L.Dense(units, activation=activation))
mlp_layers.append(L.Dropout(dropout_rate))
return keras.Sequential(mlp_layers, name=name)
class TabTransformer(BaseTabularNetwork):
@classmethod
def from_config(cls, name, config):
inputs = cls.get_inputs(config)
cat_features, num_features = cls.encode_inputs(inputs, config)
cat_features = tf.stack(cat_features, axis=1)
num_features = L.concatenate(num_features)
if config.USE_COLUMN_EMBEDDING:
num_columns = cat_features.shape[1]
column_embedding = L.Embedding(
input_dim=num_columns, output_dim=config.EMBEDDING_DIM
)
column_indices = tf.range(start=0, limit=num_columns, delta=1)
cat_features = cat_features + column_embedding(column_indices)
for block_idx in range(config.NUM_TRANSFORMER_BLOCKS):
attention_output = L.MultiHeadAttention(
num_heads=config.NUM_HEADS,
key_dim=config.EMBEDDING_DIM,
dropout=config.DROPOUT_RATE,
name=f"multihead_attention_{block_idx}",
)(cat_features, cat_features)
x = L.Add(name=f"skip_connection1_{block_idx}")([attention_output, cat_features])
x = L.LayerNormalization(name=f"layer_norm1_{block_idx}", epsilon=1e-6)(x)
feedforward_output = create_mlp(
hidden_units=[config.EMBEDDING_DIM],
dropout_rate=config.DROPOUT_RATE,
activation=keras.activations.gelu,
normalization_layer=L.LayerNormalization(epsilon=1e-6),
name=f"feedforward_{block_idx}",
)(x)
x = L.Add(name=f"skip_connection2_{block_idx}")([feedforward_output, x])
cat_features = L.LayerNormalization(name=f"layer_norm2_{block_idx}", epsilon=1e-6)(x)
cat_features = L.Flatten()(cat_features)
num_features = L.LayerNormalization(epsilon=1e-6)(num_features)
features = L.concatenate([cat_features, num_features])
mlp_hidden_units = [factor * features.shape[-1] for factor in config.MLP_HIDDEN_UNITS_FACTORS]
features = create_mlp(
hidden_units=mlp_hidden_units,
dropout_rate=config.DROPOUT_RATE,
activation=keras.activations.selu,
normalization_layer=L.BatchNormalization(),
name="MLP",
)(features)
outputs = L.Dense(units=config.NUM_OUT, activation=config.OUT_ACTIVATION, name="outputs")(features)
model = keras.Model(inputs=inputs, outputs=outputs, name=name)
return model
model_config = TabularNetworkConfig(
target_feature_name="song_popularity",
target_feature_labels=["0", "1"],
numeric_feature_names=[
'song_duration_ms', 'acousticness', 'danceability', 'energy', 'instrumentalness', 'liveness', 'loudness',
'speechiness', 'tempo', 'audio_valence'
],
categorical_features_with_vocabulary={
'key': list(map(str, range(12))),
'audio_mode': ["0", "1"],
'time_signature': ["2", "3", "4", "5"]
},
num_outputs=1,
out_activation="sigmoid",
num_transformer_blocks=3,
num_heads=4,
embedding_dim=32,
mlp_hidden_units_factors=[2, 1],
dropout_rate=0.2,
use_column_embedding=True,
)
MAX_EPOCHS = 250
get_callbacks = lambda : [
keras.callbacks.EarlyStopping(min_delta=1e-4, patience=10, verbose=1, restore_best_weights=True),
keras.callbacks.ReduceLROnPlateau(patience=3, verbose=1)
]
keras.utils.plot_model(
TabTransformer.from_config("tab_transformer", model_config),
show_shapes=True, rankdir="LR", to_file="tab_transformer.png"
)
```
# Training
```
preds = []
for fold, (train_index, valid_index) in enumerate(skf.split(X, y)):
X_train, X_valid = X.iloc[train_index], X.iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
mean_imputer = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())
]).fit(X_train[model_config.NUMERIC_FEATURE_NAMES])
mode_imputer = SimpleImputer(strategy='most_frequent').fit(X_train[model_config.CATEGORICAL_FEATURE_NAMES])
X_train = pd.concat([
pd.DataFrame(
mean_imputer.transform(X_train[model_config.NUMERIC_FEATURE_NAMES]),
columns=model_config.NUMERIC_FEATURE_NAMES
),
pd.DataFrame(
mode_imputer.transform(X_train[model_config.CATEGORICAL_FEATURE_NAMES]).astype(float).astype(int),
columns=model_config.CATEGORICAL_FEATURE_NAMES
).astype(str),
], axis=1)
X_valid = pd.concat([
pd.DataFrame(
mean_imputer.transform(X_valid[model_config.NUMERIC_FEATURE_NAMES]),
columns=model_config.NUMERIC_FEATURE_NAMES
),
pd.DataFrame(
mode_imputer.transform(X_valid[model_config.CATEGORICAL_FEATURE_NAMES]).astype(float).astype(int),
columns=model_config.CATEGORICAL_FEATURE_NAMES
).astype(str),
], axis=1)
X_test_ = pd.concat([
pd.DataFrame(
mean_imputer.transform(X_test[model_config.NUMERIC_FEATURE_NAMES]),
columns=model_config.NUMERIC_FEATURE_NAMES
),
pd.DataFrame(
mode_imputer.transform(X_test[model_config.CATEGORICAL_FEATURE_NAMES]).astype(float).astype(int),
columns=model_config.CATEGORICAL_FEATURE_NAMES
).astype(str),
], axis=1)
data_train = tf.data.Dataset.from_tensor_slices((
{col: X_train[col].values.tolist() for col in model_config.FEATURE_NAMES},
y_train.values.tolist()
)).batch(1024)
data_valid = tf.data.Dataset.from_tensor_slices((
{col: X_valid[col].values.tolist() for col in model_config.FEATURE_NAMES},
y_valid.values.tolist()
)).batch(1024)
data_test = tf.data.Dataset.from_tensor_slices((
{col: X_test_[col].values.tolist() for col in model_config.FEATURE_NAMES}
)).batch(1024)
model = TabTransformer.from_config("deep_network", model_config)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model.fit(data_train, validation_data=data_valid, callbacks=get_callbacks(), epochs=MAX_EPOCHS)
preds.append(model.predict(data_test))
```
# Submissions
```
submissions = pd.read_csv('../input/song-popularity-prediction/sample_submission.csv')
submissions['song_popularity'] = np.array(preds).mean(axis=0)
submissions.to_csv('preds.csv', index=False)
```
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
%matplotlib inline
import argparse
import os
import pprint
import shutil
import pickle
import cv2
from collections import defaultdict
import matplotlib.pyplot as plt
import numpy as np
import random
import torch
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
#from tensorboardX import SummaryWriter
import _init_paths
from config import cfg
from config import update_config
from core.loss import JointsMSELoss
from core.function import train
from core.function import validate
from utils.utils import get_optimizer
from utils.utils import save_checkpoint
from utils.utils import create_logger
from utils.utils import get_model_summary
import dataset
import models
with open('/u/snaha/v6/dataset/AWA/Animals_with_Attributes2/quadruped_keypoints/coco_format/dataset.pickle', 'rb') as handle:
dataset = pickle.load(handle)
dataset.keys()
dataset['images'][0]
path = '/u/snaha/v6/dataset/AWA/Animals_with_Attributes2/quadruped_keypoints/Annotations/'
list_subfolders_with_paths = [f.path for f in os.scandir(path) if f.is_dir()]
animal_list = []
for animal in list_subfolders_with_paths:
animal_folder = animal#os.path.join(path, animal)
animal_list.append(animal_folder.split('/')[-1])
animal_list
def generate_split(animal_list, dataset, all_full_animal_image_list_dict):
rand_inds = np.arange(len(animal_list)).tolist()
random.shuffle(rand_inds)
anim_list = []
for ind in rand_inds:
anim_list.append(animal_list[ind])
dataset_train = dict()
dataset_test = dict()
annotations_train = []
annotations_test = []
images_train = []
images_test = []
train_animals = anim_list[:-4]
test_animal = anim_list[-4:]
for im, anno in zip(dataset['images'], dataset['annotations']):
# import ipdb; ipdb.set_trace()
# exit(0)
if anno['animal'] in test_animal and im['filename'].split('_')[0] in test_animal \
and im['filename'].split('.')[0] in all_full_animal_image_list_dict[anno['animal']]:
annotations_test.append(anno)
images_test.append(im)
else:
annotations_train.append(anno)
images_train.append(im)
# for anno in dataset['images']:
# if anno['id'].split('_')[0] in test_animal:
# images_test.append(anno)
# else:
# images_train.append(anno)
dataset_train['annotations'] = annotations_train
dataset_test['annotations'] = annotations_test
dataset_train['images'] = images_train
dataset_test['images'] = images_test
dataset_train['categories'] = dataset['categories']
dataset_test['categories'] = dataset['categories']
dataset_train['info'] = dataset['info']
dataset_test['info'] = dataset['info']
return train_animals, test_animal, dataset_train, dataset_test
with open('/u/snaha/v6/dataset/AWA/Animals_with_Attributes2/quadruped_keypoints/Annotations/all_full_animal_image_list_dict.pickle', 'rb') as handle:
all_full_animal_image_list_dict = pickle.load(handle)
for i in range(5):
train_animals, test_animals, dataset_train, dataset_test = generate_split(animal_list, dataset, all_full_animal_image_list_dict)
#test_animals
print('Number of train instances = ', len(dataset_train['images']), len(dataset_train['annotations']))
print('Number of test instances = ', len(dataset_test['images']), len(dataset_test['annotations']))
with open('/u/snaha/v6/dataset/AWA/Animals_with_Attributes2/quadruped_keypoints/coco_format/dataset_train_' + str(i+1) + '.pickle', 'wb') as handle:
pickle.dump(dataset_train, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('/u/snaha/v6/dataset/AWA/Animals_with_Attributes2/quadruped_keypoints/coco_format/dataset_val_' + str(i+1) + '.pickle', 'wb') as handle:
pickle.dump(dataset_test, handle, protocol=pickle.HIGHEST_PROTOCOL)
a = [.24, .72, .72, .18, .62, .23, .56, .26, .38, .28, .29, .48, .23, .60, .47, .24, .72, .72, .18, .62, .23, .56, .26, .38, .28, .29, .48, .23, .60, .47, .18, .62, .23, .56, .26, .38, .28, .29, .48]
len(a)
```
# <h1 align= 'center'>Gensim Word2Vec Embeddings</h1>
The goal of this analysis is to predict whether a review rates the movie positively or negatively. The dataset contains 25,000 labeled movie reviews for training, 50,000 unlabeled reviews for training, and 25,000 reviews for testing.
<a href="https://imgur.com/FfdEBRz"><img src="https://i.imgur.com/FfdEBRzm.png" title="source: imgur.com" align="right"></a>
- IMDB movie reviews dataset
- http://ai.stanford.edu/~amaas/data/sentiment
- https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews
- Contains 25000 positive and 25000 negative reviews
- Contains at most 30 reviews per movie
- At least 7 stars out of 10 $\rightarrow$ positive (label = 1)
- At most 4 stars out of 10 $\rightarrow$ negative (label = 0)
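The labeling rule above can be written as a small helper (an illustration, not part of the original preprocessing); reviews with 5 or 6 stars are excluded from the dataset entirely:

```python
def star_to_label(stars):
    """Map an IMDB star rating (1-10) to a sentiment label."""
    if stars >= 7:
        return 1      # positive
    if stars <= 4:
        return 0      # negative
    return None       # 5-6 stars: neutral, not included in the dataset

print([star_to_label(s) for s in (2, 4, 5, 7, 10)])  # [0, 0, None, 1, 1]
```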
> **Here, we use Gensim Word2Vec vectors to build a bag-of-centroids representation on which we train machine learning models.**
## <h2 align = "center" >Dependencies</h2>
```
import os
import re
import nltk
import time
import logging
import unicodedata
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
from collections import defaultdict
from nltk.stem import SnowballStemmer
from nltk.tokenize import word_tokenize, sent_tokenize
from sklearn import preprocessing
from gensim.models import word2vec
from sklearn.cluster import KMeans
from sklearn.ensemble import VotingClassifier as vc
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC as svc
from sklearn.linear_model import LogisticRegression as lr
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix
nltk.download('stopwords')
import warnings
warnings.filterwarnings('ignore')
# read from local
# movies = pd.read_csv('data/imdb_data.csv')
# movies.sample(7)
# for importing data to colab
from google.colab import drive
drive.mount('/content/drive')
movies = pd.read_csv('/content/drive/My Drive/Colab Notebooks/imdb_data.csv')
movies.sample(7)
# Categorize positive and negative as 1 and 0 respectively
label_encoder = preprocessing.LabelEncoder()
movies['sentiment'] = label_encoder.fit_transform(movies['sentiment'])
movies.head()
X = movies['review']
y = (np.array(movies['sentiment']))
# Here we split data to training and testing parts
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
print(f"Train dataset shape: {X_train.shape}, \nTest dataset shape: {X_test.shape}")
```
## <h2> <center>Preprocessing</center></h2>
```
!pip install contractions
!pip install textsearch
import contractions
def review_to_wordlist(review, custom = True, stem_words = True):
# Clean the text, with the option to remove stopwords and stem words.
# Strip html
soup = BeautifulSoup(review, "html.parser")
[s.extract() for s in soup(['iframe', 'script'])]
stripped_text = soup.get_text()
review_text = re.sub(r'[\r|\n|\r\n]+', '\n', stripped_text)
# replace accents
review_text = unicodedata.normalize('NFKD', review_text).encode('ascii', 'ignore').decode('utf-8', 'ignore')
review_text = re.sub(r"[^A-Za-z0-9!?\'\`]", " ", review_text) # remove special characters
review_text = contractions.fix(review_text) # expand contractions
review_text = review_text.lower()
words = review_text.split()
if custom:
stop_words = set(nltk.corpus.stopwords.words('english'))
stop_words.update(['movie', 'film', 'one', 'would', 'even',
'movies', 'films', 'cinema',
'character', 'show', "'", "!", 'like'])
else:
stop_words = set(nltk.corpus.stopwords.words('english'))
words = [w for w in words if not w in stop_words]
review_text = " ".join(words)
review_text = re.sub(r"!", " ! ", review_text)
review_text = re.sub(r"\?", " ? ", review_text)
review_text = re.sub(r"\s{2,}", " ", review_text)
if stem_words:
words = review_text.split()
stemmer = SnowballStemmer('english')
stemmed_words = [stemmer.stem(word) for word in words]
review_text = " ".join(stemmed_words)
# Return a list of words, with each word as its own string
return review_text.split()
def review_to_sentences(review):
# Split a review into parsed sentences
raw_sentences = sent_tokenize(review.strip())
sentences = []
for raw_sentence in raw_sentences:
if len(raw_sentence) > 0:
sentences.append(review_to_wordlist(raw_sentence))
# Each sentence is a list of words, so this returns a list of lists
return sentences
sentences = []
print ("Parsing sentences ...")
for review in movies['review']:
sentences += review_to_sentences(review)
# Check how many sentences we have in total
print (len(sentences))
print()
print (sentences[0])
print()
print (sentences[1])
# Import the built-in logging module and configure it so that Word2Vec
# creates nice output messages
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
num_features = 300 # Word vector dimensionality
min_word_count = 5 # Minimum word count
num_workers = 1 # Number of threads to run in parallel
context = 20 # Context window size
downsampling = 1e-4 # Downsample setting for frequent words
# Initialize and train the model
print ("Training model...")
model = word2vec.Word2Vec(sentences,
workers = num_workers,
size = num_features,
min_count = min_word_count,
window = context,
sample = downsampling)
# Call init_sims because we won't train the model any further
# This will make the model much more memory-efficient.
model.init_sims(replace=True)
# save the model for potential, future use.
model_name = "{}features_{}minwords_{}context".format(num_features,min_word_count,context)
model.save(model_name)
# Load the model, if necessary
# model = Word2Vec.load("300features_5minwords_20context")
model.most_similar("great")
model.most_similar("stori")
model.most_similar("bad")
model.wv.syn0.shape
```
## <h2> <center>Bag of Centroids</center></h2>
```
# Set "k" (num_clusters) to be 1/5th of the vocabulary size, or an
# average of 5 words per cluster
word_vectors = model.wv.syn0
num_clusters = int(word_vectors.shape[0] / 5)
# Initialize a k-means object and use it to extract centroids
kmeans_clustering = KMeans(n_clusters = num_clusters,
n_init = 5,
verbose = 2)
idx = kmeans_clustering.fit_predict(word_vectors)
# Create a Word / Index dictionary, mapping each vocabulary word to a cluster number
word_centroid_map = dict(zip(model.wv.index2word, idx))
X = movies['review']
y = movies['sentiment']
# (np.array(movies['sentiment']))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=7)
print(f"Train dataset shape: {X_train.shape}, \nTest dataset shape: {X_test.shape}")
# Clean the training and testing reviews.
clean_train_reviews = []
for review in X_train:
clean_train_reviews.append(review_to_wordlist(review))
print("Training reviews are clean")
clean_test_reviews = []
for review in X_test:
clean_test_reviews.append(review_to_wordlist(review))
print("Testing reviews are clean")
def create_bag_of_centroids(wordlist, word_centroid_map):
# The number of clusters is equal to the highest cluster index
# in the word / centroid map
num_centroids = max(word_centroid_map.values()) + 1
# Pre-allocate the bag of centroids vector (for speed)
bag_of_centroids = np.zeros(num_centroids, dtype="float32")
# Loop over the words in the review. If the word is in the vocabulary,
# find which cluster it belongs to, and increment that cluster count
for word in wordlist:
if word in word_centroid_map:
index = word_centroid_map[word]
bag_of_centroids[index] += 1
return bag_of_centroids
# Pre-allocate an array for the training set bags of centroids (for speed)
train_centroids = np.zeros((X_train.size, num_clusters), dtype="float32")
# Transform the training set reviews into bags of centroids
counter = 0
for review in clean_train_reviews:
train_centroids[counter] = create_bag_of_centroids(review, word_centroid_map)
counter += 1
print("Training reviews are complete.")
# Repeat for test reviews
test_centroids = np.zeros((X_test.size, num_clusters), dtype="float32")
counter = 0
for review in clean_test_reviews:
test_centroids[counter] = create_bag_of_centroids(review, word_centroid_map )
counter += 1
print("Testing reviews are complete.")
print(f"Train centroids shape: {train_centroids.shape}, \nTest centroids shape: {test_centroids.shape}")
```
## <h2> <center>Modeling</center></h2>
```
def plot_confusion_matrix(y_true, y_pred, ax, class_names = ['Positive', 'Negative'], vmax=None,
normalized=True, title='Confusion matrix'):
"""
Helper function to generate a clean confusion matrix using the seaborn library.
y_true: True labels, y_pred: Model Predictions, class_names: Override if needed
normalized: True, gives the proportions instead of absolute numbers
"""
matrix = confusion_matrix(y_true,y_pred)
if normalized:
matrix = matrix.astype('float') / matrix.sum(axis=1)[:, np.newaxis]
annot_kws = {'fontsize':25,
'fontstyle': 'italic'}
sns.heatmap(matrix, vmax=vmax, annot=True, annot_kws = annot_kws,
square=True, ax=ax, cbar=False,
cmap=sns.diverging_palette(20, 250, as_cmap=True),
linecolor='black', linewidths=0.5,
xticklabels=class_names)
ax.set_title(title, y=1.20, fontsize=16)
ax.set_ylabel('True labels', fontsize=12)
ax.set_xlabel('Predicted labels', y=1.10, fontsize=12)
ax.set_yticklabels(class_names, rotation=0)
lr_model = lr(C = 0.01,
max_iter = 6,
fit_intercept = True)
lr_model.fit(train_centroids, y_train)
lr_pred = lr_model.predict(test_centroids)
print("Test set Accuracy: ", accuracy_score(lr_pred, y_test)*100)
fig, axis1 = plt.subplots(nrows=1, ncols=1)
plot_confusion_matrix(y_test, lr_pred, ax=axis1,
title='Confusion matrix (lr)')
svc_model = svc()
svc_model.fit(train_centroids, y_train)
svc_pred = svc_model.predict(test_centroids)
print("Test set Accuracy: ", accuracy_score(svc_pred, y_test)*100)
fig, axis1 = plt.subplots(nrows=1, ncols=1)
plot_confusion_matrix(y_test, svc_pred, ax=axis1,
title='Confusion matrix (svc)')
```
[PyCUDA](https://mathema.tician.de/software/pycuda/) is a Python wrapper around CUDA, NVidia's extension of C/C++ for GPUs.
There's also a [PyOpenCL](https://mathema.tician.de/software/pyopencl/) for the vendor-independent OpenCL standard.
```
import math
import numpy
import pycuda.autoinit
import pycuda.driver as driver
from pycuda.compiler import SourceModule
# compute a MILLION values
PROBLEM_SIZE = int(1e6)
# generate a CUDA (C-ish) function that will run on the GPU; PROBLEM_SIZE is hard-wired
module = SourceModule("""
__global__ void just_multiply(float *dest, float *a, float *b)
{
// function is called for ONE item; find out which one
const int id = threadIdx.x + blockDim.x*blockIdx.x;
if (id < %d)
dest[id] = a[id] * b[id];
}
""" % PROBLEM_SIZE)
# pull "just_multiply" out as a Python callable
just_multiply = module.get_function("just_multiply")
# create Numpy arrays on the CPU
a = numpy.random.randn(PROBLEM_SIZE).astype(numpy.float32)
b = numpy.random.randn(PROBLEM_SIZE).astype(numpy.float32)
dest = numpy.zeros_like(a)
# define block/grid size for our problem: at least 512 threads at a time (might do more)
# and we're only going to use x indexes (the y and z sizes are 1)
blockdim = (512, 1, 1)
griddim = (int(math.ceil(PROBLEM_SIZE / 512.0)), 1, 1)
# copy the "driver.In" arrays to the GPU, run the kernel, and copy the "driver.Out" array back
just_multiply(driver.Out(dest), driver.In(a), driver.In(b), block=blockdim, grid=griddim)
# compare the GPU calculation (dest) with a CPU calculation (a*b)
print(dest - a*b)
```
Now let's do that calculation of $\pi$.
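As a CPU sanity check (not part of the original notebook), the same left Riemann sum of $4/(1+x^2)$ over $[0,1]$ can be computed directly in NumPy:

```python
import numpy as np

# pi = integral of 4/(1+x^2) from 0 to 1, approximated with a
# left Riemann sum over one million equal-width bins.
n = 1_000_000
x = np.arange(n) / n
approx = (4.0 / (1.0 + x * x)).sum() / n
print(approx)   # ~3.141594, close to math.pi
```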
```
module2 = SourceModule("""
__global__ void mapper(float *dest)
{
const int id = threadIdx.x + blockDim.x*blockIdx.x;
const double x = 1.0 * id / %d; // x goes from 0.0 to 1.0 in PROBLEM_SIZE steps
if (id < %d)
dest[id] = 4.0 / (1.0 + x*x);
}
""" % (PROBLEM_SIZE, PROBLEM_SIZE))
mapper = module2.get_function("mapper")
dest = numpy.empty(PROBLEM_SIZE, dtype=numpy.float32)
blockdim = (512, 1, 1)
griddim = (int(math.ceil(PROBLEM_SIZE / 512.0)), 1, 1)
mapper(driver.Out(dest), block=blockdim, grid=griddim)
dest.sum() * (1.0 / PROBLEM_SIZE) # correct for bin size
```
We're doing the mapper (problem of size 1 million) on the GPU and the final sum (problem of size 1 million) on the CPU.
However, we want to do all the big data work on the GPU.
Below is an algorithm that merges array elements with their neighbors in $\log_2(\mbox{million}) = 20$ steps.
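The same tree reduction can be sketched on the CPU (a small illustration, not the notebook's kernel): at each step, element `id` absorbs element `id + i`, with the stride `i` doubling until the total lands in the first slot:

```python
import numpy as np

dest = np.arange(1, 9, dtype=np.float64)  # 8 elements -> log2(8) = 3 steps
n = len(dest)
i = 1
while i < n:
    # the GPU performs every iteration of this inner loop in parallel
    for idx in range(0, n, 2 * i):
        if idx + i < n:
            dest[idx] += dest[idx + i]
    i *= 2
print(dest[0])  # 36.0, i.e. sum(1..8)
```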
```
module3 = SourceModule("""
__global__ void reducer(float *dest, int i)
{
const int PROBLEM_SIZE = %d;
const int id = threadIdx.x + blockDim.x*blockIdx.x;
if (id %% (2*i) == 0 && id + i < PROBLEM_SIZE) {
dest[id] += dest[id + i];
}
}
""" % PROBLEM_SIZE)
blockdim = (512, 1, 1)
griddim = (int(math.ceil(PROBLEM_SIZE / 512.0)), 1, 1)
reducer = module3.get_function("reducer")
# Python for loop over the 20 steps to reduce the array
i = 1
while i < PROBLEM_SIZE:
reducer(driver.InOut(dest), numpy.int32(i), block=blockdim, grid=griddim)
i *= 2
# final result is in the first element
dest[0] * (1.0 / PROBLEM_SIZE)
```
The only problem now is that we're copying this `dest` array back and forth between the CPU and GPU. Let's fix that:
```
# allocate the array directly on the GPU, no CPU involved
dest_gpu = driver.mem_alloc(PROBLEM_SIZE * numpy.dtype(numpy.float32).itemsize)
# do it again without "driver.InOut", which copies Numpy (CPU) to and from the GPU
mapper(dest_gpu, block=blockdim, grid=griddim)
i = 1
while i < PROBLEM_SIZE:
reducer(dest_gpu, numpy.int32(i), block=blockdim, grid=griddim)
i *= 2
# we only need the first element, so create a Numpy array with exactly one element
only_one_element = numpy.empty(1, dtype=numpy.float32)
# copy just that one element
driver.memcpy_dtoh(only_one_element, dest_gpu)
print(only_one_element[0] * (1.0 / PROBLEM_SIZE))
```
# MNIST dataset introduction
## MNIST dataset
The MNIST (Modified [National Institute of Standards and Technology](https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology)) database is a dataset of handwritten digits, distributed through Yann LeCun's [THE MNIST DATABASE of handwritten digits](http://yann.lecun.com/exdb/mnist/) website.
- [Wikipedia](https://en.wikipedia.org/wiki/MNIST_database)
The dataset consists of pairs of a "handwritten digit image" and a "label". The digit ranges from 0 to 9, so there are 10 patterns in total.
- handwritten digit image: a grayscale image of size 28 × 28 pixels.
- label: the actual digit, from 0 to 9, that the handwritten image represents.

Several samples of handwritten digit images and their labels from the MNIST dataset.
The MNIST dataset is widely used for classification and image recognition tasks. It is considered a relatively simple task and often serves as the "Hello world" program of machine learning. It is also frequently used to compare algorithm performance in research.
## Handling the MNIST dataset with Chainer
For famous datasets like MNIST, Chainer provides utility functions to prepare the data, so you don't need to write your own preprocessing code for downloading the dataset from the internet, extracting it, and formatting it; Chainer does it for you!
Currently,
- MNIST
- CIFAR-10, CIFAR-100
- Penn Tree Bank (PTB)
are supported; refer to the official document for the dataset API.
Let's get familiar with MNIST dataset handling first. The code below is based on mnist_dataset.ipynb. To prepare the MNIST dataset, you just need to call the `chainer.datasets.get_mnist` function.
```
import numpy as np
import chainer
# Load the MNIST dataset with Chainer's built-in method
train, test = chainer.datasets.get_mnist()
```
The first time you run this, it starts downloading the dataset, which might take several minutes. On subsequent runs, Chainer automatically refers to the cached contents, so it runs faster.
You get 2 return values, corresponding to the "training dataset" and the "test dataset".
MNIST has 70,000 examples in total: the training dataset contains 60,000 and the test dataset contains 10,000.
```
# train[i] represents i-th data, there are 60000 training data
# test data structure is same, but total 10000 test data
print('len(train), type ', len(train), type(train))
print('len(test), type ', len(test), type(test))
```
Below I explain only the train dataset, but the test dataset has the same format.
`train[i]` represents the i-th example as a tuple ($x_i$, $y_i$), where $x_i$ is the image data as an array of size 784 and $y_i$ is the label indicating the actual digit in the image.
```
print('train[0]', type(train[0]), len(train[0]))
# print(train[0]) # x_i = long array and y_i = label
```
$x_i$ information: you can see that the image is represented as an array of floats ranging from 0 to 1. The MNIST image size is 28 × 28 pixels, so it is flattened into a 784-element 1-d array.
```
# train[i][0] represents x_i, MNIST image data,
# type=numpy(784,) vector <- specified by ndim of get_mnist()
print('train[0][0]', train[0][0].shape)
np.set_printoptions(threshold=10) # set np.inf to print all.
print(train[0][0])
```
$y_i$ information: in the example below, you can see that the 0-th image has the label "5".
```
# train[i][1] represents y_i, MNIST label data(0-9), type=numpy() -> this means scalar
print('train[0][1]', train[0][1].shape, train[0][1])
```
## Plotting MNIST
So, each i-th example consists of an image and a label:
- train[i][0] or test[i][0]: i-th handwritten image
- train[i][1] or test[i][1]: i-th label
Below is plotting code to check what the images (just array vectors in the Python program) look like. This code generates the MNIST image shown at the top of this article.
```
import os
import chainer
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
base_dir = 'src/02_mnist_mlp/images'
# Load the MNIST dataset with Chainer's built-in method
train, test = chainer.datasets.get_mnist(ndim=1)
ROW = 4
COLUMN = 5
for i in range(ROW * COLUMN):
# train[i][0] is i-th image data with size 28x28
image = train[i][0].reshape(28, 28) # not necessary to reshape if ndim is set to 2
    plt.subplot(ROW, COLUMN, i+1)          # i-th subplot in a ROW x COLUMN grid
plt.imshow(image, cmap='gray') # cmap='gray' is for black and white picture.
# train[i][1] is i-th digit label
plt.title('label = {}'.format(train[i][1]))
plt.axis('off') # do not show axis value
plt.tight_layout() # automatic padding between subplots
plt.savefig(os.path.join(base_dir, 'mnist_plot.png'))
#plt.show()
```
[Hands on] Try plotting "test" dataset instead of "train" dataset.
```
```
# Run the demo
For FuxiCTR v1.1.x only.
We provide [multiple demo scripts](https://github.com/xue-pai/FuxiCTR/tree/main/demo) to run a given model on the tiny dataset. Please follow these examples to get started. The code workflow is structured as follows:
```python
# Set data params and model params
params = {...}
# Define the feature encoder with feature encoding specs
feature_encoder = FeatureEncoder(feature_cols, label_col, ...)
# Build dataset from csv to h5
datasets.build_dataset(feature_encoder, train_data, valid_data, test_data)
# Get the feature_map required for data loading and model training
feature_map = feature_encoder.feature_map
# Load data generators
train_gen, valid_gen = datasets.h5_generator(feature_map, ...)
# Define a model
model = DeepFM(feature_map, ...)
# Train the model
model.fit_generator(train_gen, validation_data=valid_gen, ...)
# Load the test data generator and evaluate
test_gen = datasets.h5_generator(feature_map, ...)
model.evaluate_generator(test_gen)
```
In the following, we show the demo `DeepFM_demo.py`.
```
import os
import logging
from datetime import datetime
from fuxictr import datasets
from fuxictr.datasets.taobao import FeatureEncoder
from fuxictr.features import FeatureMap
from fuxictr.utils import load_config, set_logger, print_to_json
from fuxictr.pytorch.models import DeepFM
from fuxictr.pytorch.torch_utils import seed_everything
```
After importing the required packages, one needs to define the params dict for DeepFM.
```
feature_cols = [{'name': ["userid","adgroup_id","pid","cate_id","campaign_id","customer","brand","cms_segid",
"cms_group_id","final_gender_code","age_level","pvalue_level","shopping_level","occupation"],
'active': True, 'dtype': 'str', 'type': 'categorical'}]
label_col = {'name': 'clk', 'dtype': float}
params = {'model_id': 'DeepFM_demo',
'dataset_id': 'taobao_tiny',
'train_data': '../data/tiny_data/train_sample.csv',
'valid_data': '../data/tiny_data/valid_sample.csv',
'test_data': '../data/tiny_data/test_sample.csv',
'model_root': '../checkpoints/',
'data_root': '../data/',
'feature_cols': feature_cols,
'label_col': label_col,
'embedding_regularizer': 0,
'net_regularizer': 0,
'hidden_units': [64, 64],
'hidden_activations': "relu",
'learning_rate': 1e-3,
'net_dropout': 0,
'batch_norm': False,
'optimizer': 'adam',
'task': 'binary_classification',
'loss': 'binary_crossentropy',
'metrics': ['logloss', 'AUC'],
'min_categr_count': 1,
'embedding_dim': 10,
'batch_size': 16,
'epochs': 3,
'shuffle': True,
'seed': 2019,
'monitor': 'AUC',
'monitor_mode': 'max',
'use_hdf5': True,
'pickle_feature_encoder': True,
'save_best_only': True,
'every_x_epochs': 1,
'patience': 2,
'num_workers': 1,
'partition_block_size': -1,
'verbose': 1,
'version': 'pytorch',
'gpu': -1}
# Set the logger and random seed
set_logger(params)
logging.info(print_to_json(params))
seed_everything(seed=params['seed'])
```
Then set the FeatureEncoder to fit the training data and encode the raw features (e.g., normalizing continuous values and reindexing categorical features) from the csv files.
```
# Set feature_encoder that defines how to preprocess data
feature_encoder = FeatureEncoder(feature_cols,
label_col,
dataset_id=params['dataset_id'],
data_root=params["data_root"])
# Build dataset from csv to h5
datasets.build_dataset(feature_encoder,
train_data=params["train_data"],
valid_data=params["valid_data"],
test_data=params["test_data"])
```
Preprocess the csv files to h5 files and get the data generators ready for train/validation/test. Note that the h5 files can be reused for subsequent experiments directly.
```
# Get feature_map that defines feature specs
feature_map = feature_encoder.feature_map
# Get train and validation data generator from h5
data_dir = os.path.join(params['data_root'], params['dataset_id'])
train_gen, valid_gen = datasets.h5_generator(feature_map,
stage='train',
train_data=os.path.join(data_dir, 'train.h5'),
valid_data=os.path.join(data_dir, 'valid.h5'),
batch_size=params['batch_size'],
shuffle=params['shuffle'])
```
Initialize a DeepFM model and fit the model with the training and validation data.
```
model = DeepFM(feature_map, **params)
model.count_parameters() # print number of parameters used in model
model.fit_generator(train_gen,
validation_data=valid_gen,
epochs=params['epochs'],
verbose=params['verbose'])
```
Reload the saved best model checkpoint for testing.
```
model.load_weights(model.checkpoint) # reload the best checkpoint
logging.info('***** validation results *****')
model.evaluate_generator(valid_gen)
logging.info('***** validation results *****')
test_gen = datasets.h5_generator(feature_map,
stage='test',
test_data=os.path.join(data_dir, 'test.h5'),
batch_size=params['batch_size'],
shuffle=False)
model.evaluate_generator(test_gen)
```
<a href="https://colab.research.google.com/github/alirezash97/BraTS/blob/master/BraTS2020.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# from google.colab import drive
# drive.mount('/content/drive')
# !wget 'https://www.cbica.upenn.edu/MICCAI_BraTS2020_TrainingData'
# !unzip /content/MICCAI_BraTS2020_TrainingData -d '/content/drive/My Drive/BRATS2020/'
# !cp -r '/content/drive/My Drive/BRATS2020/MICCAI_BraTS2020_TrainingData/' /content/MICCAI_BraTS2020_TrainingData/
# %tensorflow_version 2.x
import tensorflow as tf
# print("Tensorflow version " + tf.__version__)
# try:
# tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
# print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
# except ValueError:
# raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
# tf.config.experimental_connect_to_cluster(tpu)
# tf.tpu.experimental.initialize_tpu_system(tpu)
# tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)
import os
# tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
# print ('TPU address is', tpu_address)
import os
import numpy as np
from nibabel.testing import data_path
import nibabel as nib
import matplotlib.pyplot as plt
from keras.utils import to_categorical
import cv2
import keras
import random
import glob, os
images_path = glob.glob('/content/drive/My Drive/BRATS2020/MICCAI_BraTS2020_TrainingData/**/*.nii.gz', recursive=True)
X_trainset_filenames = []
y_trainset_filenames = []
for item in images_path:
if 'seg' in item:
y_trainset_filenames.append(os.path.join(data_path, item))
else:
X_trainset_filenames.append(os.path.join(data_path, item))
print(len(X_trainset_filenames))
print(len(y_trainset_filenames))
for i in range(8):
print(X_trainset_filenames[i:i+1])
print()
for i in range(2):
print(y_trainset_filenames[i:i+1])
print()
def shuffle(sample_path, target_path):
a_list = list(range(0, len(target_path)))
random.shuffle(a_list)
new_sample_path = []
new_target_path = []
for i in a_list:
for j in range(4):
new_sample_path.append(sample_path[(i*4)+j])
new_target_path.append(target_path[i])
return new_sample_path, new_target_path
X_trainset_filenames, y_trainset_filenames = shuffle(X_trainset_filenames, y_trainset_filenames)
for i in range(8):
print(X_trainset_filenames[i:i+1])
print()
for i in range(2):
print(y_trainset_filenames[i:i+1])
print()
def sort_by_channel(sample_path):
n = int(len(sample_path) / 4)
new_path = []
for i in range(n):
temp = sample_path[(i*4): (i+1)*4]
new_temp = []
###############
for path in temp:
if '_t1.' in path:
new_temp.append(path)
else:
pass
for path in temp:
if '_t1ce.' in path:
new_temp.append(path)
else:
pass
for path in temp:
if '_t2.' in path:
new_temp.append(path)
else:
pass
for path in temp:
if '_flair.' in path:
new_temp.append(path)
else:
pass
################
for path in new_temp:
new_path.append(path)
return new_path
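# A more compact equivalent of sort_by_channel (a hypothetical refactor, assuming
# each patient's four modality paths are adjacent in the list, as produced by
# shuffle() above):
MODALITY_ORDER = ['_t1.', '_t1ce.', '_t2.', '_flair.']
def sort_by_channel_compact(sample_path):
    new_path = []
    for i in range(len(sample_path) // 4):
        group = sample_path[i*4:(i+1)*4]
        # keep the four paths of each patient, reordered t1, t1ce, t2, flair
        for tag in MODALITY_ORDER:
            new_path.extend(p for p in group if tag in p)
    return new_path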
X_trainset_filenames = sort_by_channel(X_trainset_filenames)
for i in range(88, 96):
print(X_trainset_filenames[i:i+1])
print()
for i in range(22, 24):
print(y_trainset_filenames[i:i+1])
print()
def get_labeled_image(image, label, is_categorical=False):
if not is_categorical:
        # voxel categories in the raw labels are 0, 1, 2, 4;
        # remap 4 -> 3 so the classes are the contiguous set 0, 1, 2, 3
        label[label == 4.0] = 3.0
label = to_categorical(label, num_classes=4).astype(np.uint8)
image = cv2.normalize(image[:, :, :, 0], None, alpha=0, beta=255,
norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F).astype(
np.uint8)
labeled_image = np.zeros_like(label[:, :, :, 1:])
# remove tumor part from image
labeled_image[:, :, :, 0] = image * (label[:, :, :, 0])
labeled_image[:, :, :, 1] = image * (label[:, :, :, 0])
labeled_image[:, :, :, 2] = image * (label[:, :, :, 0])
# color labels
labeled_image += label[:, :, :, 1:] * 255
return labeled_image
def plot_image_grid(image, unlabeled_image):
############################################################################
data_all = []
data_all.append(image)
fig, ax = plt.subplots(3, 6, figsize=[16, 9])
# coronal plane
coronal = np.transpose(data_all, [1, 3, 2, 4, 0])
coronal = np.rot90(coronal, 1)
# transversal plane
transversal = np.transpose(data_all, [2, 1, 3, 4, 0])
transversal = np.rot90(transversal, 2)
# sagittal plane
sagittal = np.transpose(data_all, [2, 3, 1, 4, 0])
sagittal = np.rot90(sagittal, 1)
n_coronal = []
for i in range(6):
n = np.random.randint(coronal.shape[2])
n_coronal.append(n)
ax[0][i].imshow(np.squeeze(coronal[:, :, n, :]))
ax[0][i].set_xticks([])
ax[0][i].set_yticks([])
if i == 0:
ax[0][i].set_ylabel('Coronal', fontsize=15)
n_transversal = []
for i in range(6):
n = np.random.randint(transversal.shape[2])
n_transversal.append(n)
ax[1][i].imshow(np.squeeze(transversal[:, :, n, :]))
ax[1][i].set_xticks([])
ax[1][i].set_yticks([])
if i == 0:
ax[1][i].set_ylabel('Transversal', fontsize=15)
n_sagittal = []
for i in range(6):
n = np.random.randint(sagittal.shape[2])
n_sagittal.append(n)
ax[2][i].imshow(np.squeeze(sagittal[:, :, n, :]))
ax[2][i].set_yticks([])
if i == 0:
ax[2][i].set_ylabel('Sagittal', fontsize=15)
fig.suptitle('\n\n#########################\n #### labeled MRI ####\n ##############################', fontsize=16, color='white')
fig.subplots_adjust(wspace=0, hspace=0)
############################################################################
data_all = []
data_all.append(unlabeled_image)
fig, ax = plt.subplots(3, 6, figsize=[16, 9])
# coronal plane
coronal = np.transpose(data_all, [1, 3, 2, 4, 0])
coronal = np.rot90(coronal, 1)
# transversal plane
transversal = np.transpose(data_all, [2, 1, 3, 4, 0])
transversal = np.rot90(transversal, 2)
# sagittal plane
sagittal = np.transpose(data_all, [2, 3, 1, 4, 0])
sagittal = np.rot90(sagittal, 1)
for i in range(6):
ax[0][i].imshow(np.squeeze(coronal[:, :, n_coronal[i], 1]), cmap='gray')
ax[0][i].set_xticks([])
ax[0][i].set_yticks([])
if i == 0:
ax[0][i].set_ylabel('Coronal', fontsize=15)
for i in range(6):
ax[1][i].imshow(np.squeeze(transversal[:, :, n_transversal[i], 1]), cmap='gray')
ax[1][i].set_xticks([])
ax[1][i].set_yticks([])
if i == 0:
ax[1][i].set_ylabel('Transversal', fontsize=15)
for i in range(6):
ax[2][i].imshow(np.squeeze(sagittal[:, :, n_sagittal[i], 1]), cmap='gray')
ax[2][i].set_yticks([])
if i == 0:
ax[2][i].set_ylabel('Sagittal', fontsize=15)
fig.suptitle('\n\n#########################\n #### unlabeled MRI ####\n ##############################', fontsize=16, color='white')
fig.subplots_adjust(wspace=0, hspace=0)
def load_case(image_nifty_file, label_nifty_file):
# load the image and label file, get the image content and return a numpy array for each
image = np.zeros((240, 240, 155, 4))
img0 = np.array(nib.load(image_nifty_file[0]).get_fdata())
img1 = np.array(nib.load(image_nifty_file[1]).get_fdata())
img2 = np.array(nib.load(image_nifty_file[2]).get_fdata())
img3 = np.array(nib.load(image_nifty_file[3]).get_fdata())
image[:, :, :, 0] = img0
image[:, :, :, 1] = img1
image[:, :, :, 2] = img2
image[:, :, :, 3] = img3
label = np.array(nib.load(label_nifty_file).get_fdata())
return image, label
# image_unlabeled , label = load_case(X_trainset_filenames[:4], y_trainset_filenames[0])
# image = get_labeled_image(image_unlabeled, label)
# plot_image_grid(image, image_unlabeled)
import imageio
from IPython.display import Image
def visualize_data_gif(data_):
images = []
for i in range(data_.shape[0]):
x = data_[min(i, data_.shape[0] - 1), :, :]
y = data_[:, min(i, data_.shape[1] - 1), :]
z = data_[:, :, min(i, data_.shape[2] - 1)]
img = np.concatenate((x, y, z), axis=1)
images.append(img)
imageio.mimsave("/tmp/gif.gif", images, duration=0.01)
return Image(filename="/tmp/gif.gif", format='png')
# ## combine t1, t1c, t2 and flair for patient 337 = X_trainset_filenames[4:8]
# ## label for patient 337 = y_trainset_filenames[1]
# image, label = load_case(X_trainset_filenames[4:8], y_trainset_filenames[1])
# visualize_data_gif(get_labeled_image(image, label))
def get_sub_volume(image, label,
orig_x = 240, orig_y = 240, orig_z = 155,
output_x = 120, output_y = 120, output_z = 16,
num_classes = 5, max_tries = 1000,
background_threshold=0.95):
    # voxel categories in the raw labels are 0, 1, 2, 4;
    # remap 4 -> 3 so the classes are the contiguous set 0, 1, 2, 3
    label[label == 4.0] = 3.0
# Initialize features and labels with `None`
tries = 0
while tries < max_tries:
# randomly sample sub-volume by sampling the corner voxel
# hint: make sure to leave enough room for the output dimensions!
start_x = np.random.randint(0, (orig_x - output_x + 1))
start_y = np.random.randint(0, (orig_y - output_y + 1))
start_z = np.random.randint(0, (orig_z - output_z + 1))
# extract relevant area of label
y = label[start_x: start_x + output_x,
start_y: start_y + output_y,
start_z: start_z + output_z]
# One-hot encode the categories.
# This adds a 4th dimension, 'num_classes'
# (output_x, output_y, output_z, num_classes)
y = keras.utils.to_categorical(y, num_classes=num_classes)
# compute the background ratio
bgrd_ratio = np.sum(y[:, :, :, 0]) / (output_x * output_y * output_z)
# increment tries counter
tries += 1
# if background ratio is below the desired threshold,
# use that sub-volume.
# otherwise continue the loop and try another random sub-volume
if bgrd_ratio < background_threshold:
# make copy of the sub-volume
X = np.copy(image[start_x: start_x + output_x,
start_y: start_y + output_y,
start_z: start_z + output_z, :])
# random1 = np.random.uniform(0, 1)
# random2 = np.random.uniform(0, 1)
# change dimension of X
# from (x_dim, y_dim, z_dim, num_channels)
# to (num_channels, x_dim, y_dim, z_dim)
X = np.moveaxis(X, 3, 0)
# if random1 > 0.5:
######### data augmentation ##############################
# if random2 > 0.66:
#### 90 degree rotation #####
# X = np.moveaxis(X, 1, 2)
#############################
# elif 0.33 < random2 <= 0.66:
#### 180 degree rotation ####
# X = np.flip(X, (1, 2))
#############################
# else :
#### 270 degree rotation #####
# X = np.moveaxis(X, 1, 2)
# X = np.flip(X, (1, 2))
##############################
###############################################################
# else :
# pass
# change dimension of y
# from (x_dim, y_dim, z_dim, num_classes)
# to (num_classes, x_dim, y_dim, z_dim)
y = np.moveaxis(y, 3, 0)
# if random1 > 0.5:
######### data augmentation #############################
# if random2 > 0.66:
#### 90 degree rotation #####
# y = np.moveaxis(y, 1, 2)
#############################
# elif 0.33 < random2 <= 0.66:
#### 180 degree rotation ####
# y = np.flip(y, (1, 2))
#############################
# else :
#### 270 degree rotation #####
# y = np.moveaxis(y, 1, 2)
# y = np.flip(y, (1, 2))
##############################
###############################################################
# else :
# pass
# take a subset of y that excludes the background class
# in the 'num_classes' dimension
y = y[1:, :, :, :]
return X, y
def visualize_patch(X, y):
fig, ax = plt.subplots(1, 2, figsize=[10, 5], squeeze=False)
ax[0][0].imshow(X[:, :, 0], cmap='Greys_r')
ax[0][0].set_yticks([])
ax[0][0].set_xticks([])
ax[0][1].imshow(y[:, :, 0], cmap='Greys_r')
ax[0][1].set_xticks([])
ax[0][1].set_yticks([])
fig.subplots_adjust(wspace=0, hspace=0)
# image, label = load_case(X_trainset_filenames[4:8], y_trainset_filenames[1])
# X, y = get_sub_volume(image, label)
# #############
# print(X.shape)
# print(y.shape)
# #############
# # non-enhancing tumor is channel 1 in the class label
# visualize_patch(X[1, :, :, :], y[1])
# visualize_patch(X[1, :, :, :], y[2])
# visualize_patch(X[1, :, :, :], y[0])
def standardize(image):
one = 1
# initialize to array of zeros, with same shape as the image
standardized_image = np.zeros((image.shape[0], image.shape[1], image.shape[2], image.shape[3]))
# iterate over channels
for c in range(image.shape[0]):
# iterate over the `z` dimension
for z in range(image.shape[3]):
# get a slice of the image
# at channel c and z-th dimension `z`
image_slice = image[c,:,:,z]
# subtract the mean from image_slice
centered = image_slice - np.mean(image_slice)
# divide by the standard deviation (only if it is different from zero)
if np.std(centered) != 0:
centered_scaled = centered / np.std(centered)
else:
### error exception ###
centered_scaled = centered / one
# update the slice of standardized image
# with the scaled centered and scaled image
standardized_image[c, :, :, z] = centered_scaled
return standardized_image
# X_norm = standardize(X)
# visualize_patch(X_norm[1, :, :, :], y[0])
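# A vectorized sketch of the same per-(channel, z)-slice standardization
# (a hypothetical numpy equivalent of the loop above; same behavior, since a
# constant slice divides by 1):
import numpy as np
def standardize_vec(image):
    # mean/std over the (x, y) axes, one value per (channel, z) slice
    mean = image.mean(axis=(1, 2), keepdims=True)
    std = image.std(axis=(1, 2), keepdims=True)
    return (image - mean) / np.where(std == 0, 1, std)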
def dice_coefficient(y_true, y_pred, axis=(1, 2, 3),
epsilon=0.00001):
dice_numerator = 2 * K.sum(y_true * y_pred, axis=axis) + epsilon
dice_denominator = K.sum(y_true, axis=axis) + K.sum(y_pred, axis=axis) + epsilon
dice_coefficient = K.mean((dice_numerator)/(dice_denominator))
return dice_coefficient
def soft_dice_loss(y_true, y_pred, axis=(1, 2, 3),
epsilon=0.00001):
dice_numerator = 2. * K.sum(y_true * y_pred, axis=axis) + epsilon
dice_denominator = K.sum((y_true**2), axis=axis) + K.sum((y_pred**2), axis=axis) + epsilon
dice_loss = 1 - K.mean((dice_numerator)/(dice_denominator))
return dice_loss
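# Quick sanity check of the Dice formula via a numpy re-implementation
# (a hypothetical mirror of dice_coefficient above, not the Keras version):
# identical masks should score ~1.0, disjoint masks ~0.0.
import numpy as np
def dice_np(y_true, y_pred, axis=(1, 2, 3), epsilon=0.00001):
    num = 2 * np.sum(y_true * y_pred, axis=axis) + epsilon
    den = np.sum(y_true, axis=axis) + np.sum(y_pred, axis=axis) + epsilon
    return np.mean(num / den)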
def create_convolution_block(input_layer, n_filters, batch_normalization=False,
kernel=(3, 3, 3), activation=None,
padding='same', strides=(1, 1, 1),
instance_normalization=False):
layer = Conv3D(n_filters, kernel, padding=padding, strides=strides)(
input_layer)
if activation is None:
return Activation('relu')(layer)
else:
return activation()(layer)
def get_up_convolution(n_filters, pool_size, kernel_size=(2, 2, 2),
strides=(2, 2, 2),
deconvolution=False):
if deconvolution:
return Deconvolution3D(filters=n_filters, kernel_size=kernel_size,
strides=strides)
else:
return UpSampling3D(size=pool_size)
def unet_model_3d(loss_function, input_shape=(120, 120, 16, 4),
pool_size=(2, 2, 2), n_labels=4,
initial_learning_rate=0.0001,
deconvolution=False, depth=4, n_base_filters=32,
include_label_wise_dice_coefficients=False, metrics=[],
batch_normalization=False, activation_name="sigmoid", repeat=4):
flower_outputs = list()
inputs = Input(input_shape)
############################
for i in range(repeat):
current_layer = inputs
levels = []
# add levels with max pooling
for layer_depth in range(depth):
layer1 = create_convolution_block(input_layer=current_layer,
n_filters=n_base_filters * (
2 ** layer_depth),
batch_normalization=batch_normalization)
layer2 = create_convolution_block(input_layer=layer1,
n_filters=n_base_filters * (
2 ** layer_depth) * 2,
batch_normalization=batch_normalization)
if layer_depth < depth - 1:
current_layer = MaxPooling3D(pool_size=pool_size)(layer2)
levels.append([layer1, layer2, current_layer])
else:
current_layer = layer2
levels.append([layer1, layer2])
# add levels with up-convolution or up-sampling
for layer_depth in range(depth - 2, -1, -1):
up_convolution = get_up_convolution(pool_size=pool_size,
deconvolution=deconvolution,
n_filters=
current_layer.shape[1])(
current_layer)
concat = concatenate([up_convolution, levels[layer_depth][1]], axis=-1)
current_layer = create_convolution_block(
n_filters=levels[layer_depth][1].shape[1],
input_layer=concat, batch_normalization=batch_normalization)
current_layer = create_convolution_block(
n_filters=levels[layer_depth][1].shape[1],
input_layer=current_layer,
batch_normalization=batch_normalization)
flower_outputs.append(current_layer)
############################
####### say number of repeats are 4 #############
# final_layer0 = concatenate([flower_outputs[0], flower_outputs[1]])
# final_layer1 = concatenate([flower_outputs[2], flower_outputs[3]])
# final_layer_final = concatenate([final_layer0, final_layer1])
###################################################
if repeat > 1:
final_layer = concatenate([flower_outputs[0], flower_outputs[1]])
for i in range(2, len(flower_outputs)):
final_layer = concatenate([final_layer, flower_outputs[i]])
else :
final_layer = flower_outputs[0]
###########################
final_convolution = Conv3D(n_labels, (1, 1, 1))(final_layer)
act = Activation(activation_name)(final_convolution)
model = Model(inputs=inputs, outputs=act)
if not isinstance(metrics, list):
metrics = [metrics]
model.compile(optimizer=Adam(lr=initial_learning_rate), loss=loss_function,
metrics=metrics)
return model
# from keras.layers import Input, Conv3D, Activation, MaxPooling3D, UpSampling3D, concatenate
# from keras import Model
# from keras.optimizers import Adam
# import keras.backend as K
# # with tpu_strategy.scope():
# model = unet_model_3d(loss_function=soft_dice_loss, metrics=[dice_coefficient])
# model.summary()
################# the server crashed; reload the model saved after two epochs #################
# load and evaluate a saved model
from numpy import loadtxt
from keras.models import load_model
from keras.layers import Input, Conv3D, Activation, MaxPooling3D, UpSampling3D, concatenate
from keras import Model
from keras.optimizers import Adam
import keras.backend as K
# initial_learning_rate_for_loaded_model = 0.0001
# load model
model = load_model('/content/drive/My Drive/BRATS2020/newerby_model.02-0.31.h5', custom_objects={'soft_dice_loss':soft_dice_loss, 'dice_coefficient':dice_coefficient})
# learning rate decay, because this resumes from the third epoch
# K.set_value(model.optimizer.learning_rate, initial_learning_rate_for_loaded_model)
# summarize model.
model.summary()
# load dataset
model.layers[-1].output
file_path_Xtrain = X_trainset_filenames[:1236]
file_path_ytrain = y_trainset_filenames[:309]
file_path_Xvalid = X_trainset_filenames[1236:]
file_path_yvalid = y_trainset_filenames[309:]
def generator(X_path, y_path, batch_size=2):
    num_samples = len(X_path) // 4  # 4 modality files per sample
    while True: # Loop forever so the generator never terminates
        # offset walks the sample start indices in steps of batch_size
        for offset in range(0, num_samples, batch_size):
            # Get the samples for this batch (4 image files per sample)
            batch_samples = X_path[(offset*4):((offset + batch_size)*4)]
            batch_targets = y_path[offset:(offset + batch_size)]
            ### iterate 8 times on the same batch to generate 8 subvolumes of each image
            for _ in range(8):
                # Initialise X_train and y_train arrays for this batch
                X_train = []
                y_train = []
                for i in range(len(batch_targets)):
                    # index the batch slices, not the full global filename lists
                    image, label = load_case(batch_samples[(i*4):((i+1)*4)], batch_targets[i])
X, y = get_sub_volume(image, label)
X_norm = standardize(X)
X_norm = np.moveaxis( X_norm, 0, 3)
y = np.moveaxis( y, 0, 3)
X_train.append(X_norm)
y_train.append(y)
# Make sure they're numpy arrays (as opposed to lists)
X_train = np.array(X_train)
y_train = np.array(y_train)
if X_train.ndim == 5:
yield X_train, y_train
else:
pass
train_generator = generator(file_path_Xtrain, file_path_ytrain, batch_size=3)
validation_generator = generator(file_path_Xvalid, file_path_yvalid, batch_size=3)
from keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler
initial_learning_rate = 0.0001
def scheduler(epoch):
    ## decay the learning rate by a factor of exp(-0.1) every 4 epochs ##
    return float(initial_learning_rate * tf.math.exp(-0.1 * int(epoch / 4)))
# callback
my_callbacks = [
LearningRateScheduler(scheduler),
EarlyStopping(monitor='val_loss', patience=6),
ModelCheckpoint(filepath='/content/drive/My Drive/BRATS2020/Newerby_Series1_model.{epoch:02d}-{val_loss:.2f}.h5', monitor='val_loss', save_best_only=False)
]
#### steps_per_epoch = (num_samples / batch_size) * 8 because the generator yields 8 subvolumes per sample
# Fit model using generator
history = model.fit(train_generator, steps_per_epoch=824, epochs=20, validation_data=validation_generator, validation_steps=160, callbacks=my_callbacks)
```
#### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Classification on imbalanced data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data.
This tutorial contains complete code to:
* Load a CSV file using Pandas.
* Create train, validation, and test sets.
* Define and train a model using Keras (including setting class weights).
* Evaluate the model using various metrics (including precision and recall).
* Try common techniques for dealing with imbalanced data like:
* Class weighting
* Oversampling
## Setup
```
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
```
## Data processing and exploration
### Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
```
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
```
### Examine the class label imbalance
Let's look at the dataset imbalance:
```
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
```
This shows the small fraction of positive samples.
### Clean, split and normalize the data
The raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
```
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
```
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
```
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
```
Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
```
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
```
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.
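One way to see why this matters: standardization is an affine map, so it can even be folded directly into the first dense layer's weights, letting the exported model consume raw features. A NumPy sketch of the folding identity (hypothetical shapes, not part of this tutorial's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(5, 3))     # raw features
mu, sigma = x.mean(axis=0), x.std(axis=0)           # scaler statistics
W, b = rng.normal(size=(3, 2)), rng.normal(size=2)  # first dense layer

# A model applied to standardized inputs ...
y_ref = ((x - mu) / sigma) @ W + b
# ... equals a model with the scaling folded into its weights.
W_fold = W / sigma[:, None]
b_fold = b - (mu / sigma) @ W
y_fold = x @ W_fold + b_fold
assert np.allclose(y_ref, y_fold)
```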
### Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
* Do these distributions make sense?
* Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.
* Can you see the difference between the distributions?
* Yes the positive examples contain a much higher rate of extreme values.
```
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
_ = plt.suptitle("Negative distribution")
```
## Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
```
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
      optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
```
### Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
* **False** negatives and **false** positives are samples that were **incorrectly** classified
* **True** negatives and **true** positives are samples that were **correctly** classified
* **Accuracy** is the percentage of examples correctly classified
> $\frac{\text{true positives} + \text{true negatives}}{\text{total samples}}$
* **Precision** is the percentage of **predicted** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false positives}}$
* **Recall** is the percentage of **actual** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false negatives}}$
* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
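These definitions can be checked by hand on a toy set of labels and scores; the last line uses the rank interpretation of AUC described above (a standalone sketch, separate from the Keras metrics):

```python
import numpy as np

labels = np.array([0, 0, 0, 0, 1, 1])
preds  = np.array([0, 1, 0, 0, 1, 0])               # thresholded predictions
scores = np.array([0.1, 0.8, 0.2, 0.3, 0.9, 0.4])   # raw model scores

tp = np.sum((preds == 1) & (labels == 1))
fp = np.sum((preds == 1) & (labels == 0))
fn = np.sum((preds == 0) & (labels == 1))
precision = tp / (tp + fp)   # 0.5
recall    = tp / (tp + fn)   # 0.5

# AUC: probability a random positive is ranked above a random negative.
pos_scores = scores[labels == 1]
neg_scores = scores[labels == 0]
auc = np.mean(pos_scores[:, None] > neg_scores[None, :])   # 0.875
```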
## Baseline model
### Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, many batches would have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
```
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
```
Test run the model:
```
model.predict(train_features[:10])
```
### Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence.
With the default bias initialization the loss should be about `math.log(2) = 0.69314`
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -log_e(1/p_0 - 1) $$
$$ b_0 = log_e(pos/neg)$$
```
initial_bias = np.log([pos/neg])
initial_bias
```
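A quick sanity check that the derivation closes: applying the sigmoid to the derived bias recovers the positive-class prior (using this dataset's class counts, so the snippet is self-contained):

```python
import numpy as np

pos, neg = 492, 284315            # class counts printed earlier
b0 = np.log(pos / neg)            # b_0 = log(pos/neg)
p0 = 1 / (1 + np.exp(-b0))        # sigmoid(b_0)
# Recovers pos/(pos + neg), roughly 0.0017
assert np.isclose(p0, pos / (pos + neg))
```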
Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: `pos/total = 0.0018`
```
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
```
With this initialization the initial loss should be approximately:
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
This initial loss is about 50 times less than it would have been with the naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
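Plugging the rounded prior $p_0 \approx 0.0018$ into the expression above reproduces the quoted number:

```python
import numpy as np

p0 = 0.0018   # pos/total, rounded as above
expected_loss = -p0 * np.log(p0) - (1 - p0) * np.log(1 - p0)
print(round(expected_loss, 5))   # 0.01317
```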
### Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
```
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
```
### Confirm that the bias fix helps
Before moving on, confirm quickly that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
```
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
```
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
### Train the model
```
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels))
```
### Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
```
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
```
Note that the validation metrics are generally better than the training metrics. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
### Evaluate metrics
You can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
```
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
```
Evaluate your model on the test dataset and display the results for the metrics you created above.
```
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
```
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
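One lever for this trade-off is the decision threshold: `plot_cm` accepts it as the argument `p`, so you could, for example, re-plot with `plot_cm(test_labels, test_predictions_baseline, p=0.1)`. A toy sketch (made-up labels and scores) of what lowering the threshold does:

```python
import numpy as np

labels = np.array([0, 0, 0, 0, 0, 0, 1, 1])
scores = np.array([0.02, 0.05, 0.1, 0.2, 0.3, 0.6, 0.35, 0.9])

# Lowering the threshold removes the false negative
# at the cost of an extra false positive.
for p in (0.5, 0.25):
    preds = scores > p
    fn = int(np.sum(~preds & (labels == 1)))
    fp = int(np.sum(preds & (labels == 0)))
    print(f"threshold={p}: false negatives={fn}, false positives={fp}")
```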
### Plot the ROC
Now plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
```
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
```
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
## Class weights
### Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
```
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
```
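A quick check of the comment above, recomputed with this dataset's class counts so the snippet is self-contained: the total weight over the dataset is conserved, while each positive example counts roughly $neg/pos \approx 578$ times as much as a negative one:

```python
import numpy as np

pos, neg = 492, 284315
total = pos + neg
weight_for_0 = (1 / neg) * total / 2.0
weight_for_1 = (1 / pos) * total / 2.0
# Total weight is conserved: neg*w0 + pos*w1 = total/2 + total/2 = total
assert np.isclose(neg * weight_for_0 + pos * weight_for_1, total)
print(weight_for_1 / weight_for_0)   # = neg/pos, roughly 578
```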
### Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using `class_weight` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
```
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
```
### Check training history
```
plot_metrics(weighted_history)
```
### Evaluate metrics
```
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
```
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application.
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
```
## Oversampling
### Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
```
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
```
#### Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
```
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
```
#### Using `tf.data`
If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
```
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
```
Each dataset provides `(feature, label)` pairs:
```
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
```
Merge the two together using `experimental.sample_from_datasets`:
```
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
```
To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
```
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
```
### Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds)
```
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
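A back-of-the-envelope comparison of the two regimes (ignoring the train/validation split for simplicity):

```python
batch_size = 2048
p_natural = 492 / 284807   # positive rate in the raw data
p_resampled = 0.5          # rate after 50/50 oversampling

# Class weighting: only ~3.5 positives per batch, each carrying a large weight.
print(batch_size * p_natural)
# Oversampling: ~1024 positives per batch, each carrying an ordinary weight.
print(batch_size * p_resampled)
```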
### Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
```
plot_metrics(resampled_history)
```
### Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10*EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds))
```
### Re-check training history
```
plot_metrics(resampled_history)
```
### Evaluate metrics
```
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
```
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
```
## Applying this tutorial to your problem
Imbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data first and do your best to collect as many samples as possible and give substantial thought to what features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade offs between different types of errors.
# Gaussian Process Regression
**Zhenwen Dai (2019-05-29)**
## Introduction
A Gaussian process (GP) is a Bayesian non-parametric model used for various machine learning problems such as regression and classification. This notebook shows how to use a Gaussian process regression model in MXFusion.
```
import warnings
warnings.filterwarnings('ignore')
import os
os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine'
```
## Toy data
We generate some synthetic data for our regression example. The data set is generated from a sine function with some additive Gaussian noise.
```
import numpy as np
%matplotlib inline
from pylab import *
np.random.seed(0)
X = np.random.uniform(-3.,3.,(20,1))
Y = np.sin(X) + np.random.randn(20,1)*0.05
```
The generated data are visualized as follows:
```
plot(X, Y, 'rx', label='data points')
_=legend()
```
## Gaussian process regression with Gaussian likelihood
Denote a set of input points $X \in \mathbb{R}^{N \times Q}$. A Gaussian process is often formulated as a multi-variate normal distribution conditioned on the inputs:
$$
p(F|X) = \mathcal{N}(F; 0, K),
$$
where $F \in \mathbb{R}^{N \times 1}$ is the corresponding output points of the Gaussian process and $K$ is the covariance matrix computed on the set of inputs according to a chosen kernel function $k(\cdot, \cdot)$.
For a regression problem, $F$ is often referred to as the noise-free output and we usually assume an additional probability distribution as the observation noise. In this case, we assume the noise distribution to be Gaussian:
$$
p(Y|F) = \mathcal{N}(Y; F, \sigma^2 \mathcal{I}),
$$
where $Y \in \mathbb{R}^{N \times 1}$ is the observed output and $\sigma^2$ is the variance of the Gaussian distribution.
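Because the noise is Gaussian, $F$ can be marginalized in closed form, giving $p(Y|X) = \mathcal{N}(Y; 0, K + \sigma^2 \mathcal{I})$ — the property ```GPRegression``` exploits below. A minimal NumPy sketch of this marginal log-likelihood with an RBF kernel (illustrative only, not MXFusion code):

```python
import numpy as np

def rbf(X1, X2, variance=1.0, lengthscale=1.0):
    # Squared-exponential kernel k(x, x') = v * exp(-||x - x'||^2 / (2 l^2))
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

def gp_log_marginal(X, Y, noise_var=0.01):
    # log N(Y; 0, K + noise_var * I), computed via a Cholesky factorization
    N = X.shape[0]
    K = rbf(X, X) + noise_var * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # (K + s2 I)^{-1} Y
    return (-0.5 * Y.T @ alpha).item() \
        - np.log(np.diag(L)).sum() \
        - 0.5 * N * np.log(2 * np.pi)

np.random.seed(0)
X = np.random.uniform(-3., 3., (20, 1))
Y = np.sin(X) + np.random.randn(20, 1) * 0.05
print(gp_log_marginal(X, Y))
```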
The following code defines the above GP regression in MXFusion. First, we change the default data dtype to double precision to avoid any potential numerical issues.
```
from mxfusion.common import config
config.DEFAULT_DTYPE = 'float64'
```
In the code below, the variable ```Y``` is defined following the probabilistic module ```GPRegression```. A probabilistic module in MXFusion is a pre-built probabilistic model with dedicated inference algorithms for computing log-pdf and drawing samples. In this case, ```GPRegression``` defines the above GP regression model with a Gaussian likelihood. It understands that the log-likelihood after marginalizing $F$ is closed-form and exploits this property when computing log-pdf.
The model is defined by the input variable ```X``` with the shape ```(m.N, 1)```, where the value of ```m.N``` is discovered when data is given during inference. A positive noise variance variable ```m.noise_var``` is defined with an initial value of 0.01. For the GP, we define an RBF kernel with input dimensionality one and initial variance and lengthscale of one. We define the variable ```m.Y``` following the GP regression distribution with the above kernel, input variable and noise variance.
```
from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions.gp.kernels import RBF
from mxfusion.modules.gp_modules import GPRegression
m = Model()
m.N = Variable()
m.X = Variable(shape=(m.N, 1))
m.noise_var = Variable(shape=(1,), transformation=PositiveTransformation(), initial_value=0.01)
m.kernel = RBF(input_dim=1, variance=1, lengthscale=1)
m.Y = GPRegression.define_variable(X=m.X, kernel=m.kernel, noise_var=m.noise_var, shape=(m.N, 1))
```
In the above model, we have not defined any prior distributions for any hyper-parameters. To use the model for regression, we typically do a maximum likelihood estimate for all the hyper-parameters conditioned on the input and output variable. In MXFusion, this is done by first creating an inference algorithm, which is ```MAP``` in this case, by specifying the observed variables. Then, we create an inference body for gradient optimization inference methods, which is called ```GradBasedInference```. The inference method is triggered by calling the ```run``` method, in which all the observed data are given as keyword arguments and any necessary configuration parameters are specified.
```
import mxnet as mx
from mxfusion.inference import GradBasedInference, MAP
infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.X, m.Y]))
infr.run(X=mx.nd.array(X, dtype='float64'), Y=mx.nd.array(Y, dtype='float64'),
max_iter=100, learning_rate=0.05, verbose=True)
```
All the inference outcomes are in the attribute ```params``` of the inference body. The inferred value of a parameter can be accessed by passing the reference of the queried parameter to the ```params``` attribute. For example, to get the value of ```m.noise_var```, we can call ```infr.params[m.noise_var]```. The estimated parameters from the above experiment are as follows:
```
print('The estimated variance of the RBF kernel is %f.' % infr.params[m.kernel.variance].asscalar())
print('The estimated length scale of the RBF kernel is %f.' % infr.params[m.kernel.lengthscale].asscalar())
print('The estimated variance of the Gaussian likelihood is %f.' % infr.params[m.noise_var].asscalar())
```
We can compare the estimated values with the same model implemented in GPy. The estimated values from GPy are very close to the ones from MXFusion.
```
import GPy
m_gpy = GPy.models.GPRegression(X, Y, kernel=GPy.kern.RBF(1))
m_gpy.optimize()
print(m_gpy)
```
## Prediction
The above section shows how to estimate the model hyper-parameters of a GP regression model. This is often referred to as training. After training, we are often interested in using the inferred model to predict on unseen inputs. The GP module offers two types of predictions: predicting the mean and variance of the output variable, or drawing samples from the predictive posterior distribution.
### Mean and variance of the posterior distribution
To estimate the mean and variance of the predictive posterior distribution, we use the inference algorithm ```ModulePredictionAlgorithm```, which takes the model, the observed variables and the target variables of prediction as input arguments. We use ```TransferInference``` as the inference body, which allows us to take the inference outcome from the previous inference. This is done by passing the inference parameters ```infr.params``` into the ```infr_params``` argument.
```
from mxfusion.inference import TransferInference, ModulePredictionAlgorithm
infr_pred = TransferInference(ModulePredictionAlgorithm(model=m, observed=[m.X], target_variables=[m.Y]),
infr_params=infr.params)
```
To visualize the fitted model, we make predictions at 100 evenly spaced points from -5 to 5. We estimate the mean and variance of the noise-free output $F$.
```
xt = np.linspace(-5,5,100)[:, None]
res = infr_pred.run(X=mx.nd.array(xt, dtype='float64'))[0]
f_mean, f_var = res[0].asnumpy()[0], res[1].asnumpy()[0]
```
The resulting figure is shown as follows:
```
plot(xt, f_mean[:,0], 'b-', label='mean')
plot(xt, f_mean[:,0]-2*np.sqrt(f_var), 'b--', label='2 x std')
plot(xt, f_mean[:,0]+2*np.sqrt(f_var), 'b--')
plot(X, Y, 'rx', label='data points')
ylabel('F')
xlabel('X')
_=legend()
```
### Posterior samples of Gaussian process
Apart from getting the mean and variance at every location, we may need to draw samples from the posterior GP. As the output variables at different locations are correlated with each other, each sample gives us some idea of a potential function from the posterior GP distribution.
To draw samples from the posterior distribution, we need to change the prediction inference algorithm attached to the GP module. The default prediction algorithm estimates the mean and variance of the output variable, as shown above. We can attach another inference algorithm as the prediction algorithm. In the following code, we attach the ```GPRegressionSamplingPrediction``` algorithm as the prediction algorithm. The ```targets``` and ```conditionals``` arguments specify the target variables and the conditional variables of the algorithm. After specifying a name in the ```alg_name``` argument, such as ```gp_predict```, we can access this inference algorithm by that name, e.g., ```gp.gp_predict```. In the following code, we set the ```diagonal_variance``` attribute to ```False``` in order to draw samples from a full covariance matrix. To avoid numerical issues, we add a small jitter to help the matrix inversion. Then, we create the inference body in the same way as in the above example.
```
from mxfusion.inference import TransferInference, ModulePredictionAlgorithm
from mxfusion.modules.gp_modules.gp_regression import GPRegressionSamplingPrediction
gp = m.Y.factor
gp.attach_prediction_algorithms(targets=gp.output_names, conditionals=gp.input_names,
algorithm=GPRegressionSamplingPrediction(
gp._module_graph, gp._extra_graphs[0], [gp._module_graph.X]),
alg_name='gp_predict')
gp.gp_predict.diagonal_variance = False
gp.gp_predict.jitter = 1e-8
infr_pred = TransferInference(ModulePredictionAlgorithm(model=m, observed=[m.X], target_variables=[m.Y], num_samples=5),
infr_params=infr.params)
```
We draw five samples at the 100 evenly spaced input locations.
```
xt = np.linspace(-5,5,100)[:, None]
y_samples = infr_pred.run(X=mx.nd.array(xt, dtype='float64'))[0].asnumpy()
```
We visualize the individual samples each with a different color.
```
for i in range(y_samples.shape[0]):
    plot(xt, y_samples[i, :, 0])
```
## Gaussian process with a mean function
In the previous example, we created a GP regression model without a mean function (the mean of the GP is zero). It is easy to extend a GP model with a mean function. First, we create a mean function in MXNet (a neural network). For simplicity, we use a 1D linear function as the mean function.
```
mean_func = mx.gluon.nn.Dense(1, in_units=1, flatten=False)
mean_func.initialize(mx.init.Xavier(magnitude=3))
```
We create the GP regression model in a similar way as above. The differences are:
1. We create a wrapper of the mean function in the model definition, ```m.mean_func```.
2. We evaluate the mean function at the input of our GP model, which gives the mean of the GP.
3. We pass the resulting mean into the ```mean``` argument of the GP module.
```
m = Model()
m.N = Variable()
m.X = Variable(shape=(m.N, 1))
m.mean_func = MXFusionGluonFunction(mean_func, num_outputs=1, broadcastable=True)
m.mean = m.mean_func(m.X)
m.noise_var = Variable(shape=(1,), transformation=PositiveTransformation(), initial_value=0.01)
m.kernel = RBF(input_dim=1, variance=1, lengthscale=1)
m.Y = GPRegression.define_variable(X=m.X, kernel=m.kernel, noise_var=m.noise_var, mean=m.mean, shape=(m.N, 1))
import mxnet as mx
from mxfusion.inference import GradBasedInference, MAP
infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.X, m.Y]))
infr.run(X=mx.nd.array(X, dtype='float64'), Y=mx.nd.array(Y, dtype='float64'),
max_iter=100, learning_rate=0.05, verbose=True)
from mxfusion.inference import TransferInference, ModulePredictionAlgorithm
infr_pred = TransferInference(ModulePredictionAlgorithm(model=m, observed=[m.X], target_variables=[m.Y]),
infr_params=infr.params)
xt = np.linspace(-5,5,100)[:, None]
res = infr_pred.run(X=mx.nd.array(xt, dtype='float64'))[0]
f_mean, f_var = res[0].asnumpy()[0], res[1].asnumpy()[0]
plot(xt, f_mean[:,0], 'b-', label='mean')
plot(xt, f_mean[:,0]-2*np.sqrt(f_var), 'b--', label='2 x std')
plot(xt, f_mean[:,0]+2*np.sqrt(f_var), 'b--')
plot(X, Y, 'rx', label='data points')
ylabel('F')
xlabel('X')
_=legend()
```
The effect of the mean function is not noticeable, because there is no linear trend in our data. We can print the estimated parameters of the linear mean function.
```
print("The weight is %f and the bias is %f." %(infr.params[m.mean_func.parameters['dense1_weight']].asnumpy(),
infr.params[m.mean_func.parameters['dense1_bias']].asscalar()))
```
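The printed weight and bias define the linear mean function $w x + b$; a minimal pure-Python sketch of evaluating it (the parameter values below are made up for illustration; the real ones come from `infr.params`):

```python
def linear_mean(x, weight, bias):
    """Evaluate a 1D linear mean function w*x + b, as used for the GP mean."""
    return weight * x + bias

# Hypothetical parameter values, for illustration only.
w, b = 0.5, -0.1
means = [linear_mean(x, w, b) for x in (-1.0, 0.0, 1.0)]
```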
## Variational sparse Gaussian process regression
In MXFusion, variational sparse GP regression is also implemented as a module. A sparse GP model can be created in a similar way to the plain GP model.
```
from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions.gp.kernels import RBF
from mxfusion.modules.gp_modules import SparseGPRegression
m = Model()
m.N = Variable()
m.X = Variable(shape=(m.N, 1))
m.noise_var = Variable(shape=(1,), transformation=PositiveTransformation(), initial_value=0.01)
m.kernel = RBF(input_dim=1, variance=1, lengthscale=1)
m.Y = SparseGPRegression.define_variable(X=m.X, kernel=m.kernel, noise_var=m.noise_var, shape=(m.N, 1), num_inducing=50)
```
<a href="https://colab.research.google.com/github/saritmaitra/Momentum_Trading/blob/main/Monthly_Series_MomentumStrategy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install yfinance
import yfinance as yf
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.transforms as transform
import matplotlib.gridspec as gridspec
import pandas as pd
from pandas.tseries.offsets import MonthEnd
pd.options.mode.chained_assignment = None
pd.set_option('use_inf_as_na', True)
def get(tickers, startdate, enddate):
    def data(ticker):
        return yf.download(ticker, start=startdate, end=enddate)
    datas = map(data, tickers)
    return pd.concat(datas, keys=tickers, names=['Ticker', 'Date'])
tickers=["SGRY", "NTRA", "Z", "FATE", "DIS", "GM", 'BPMC', "PTC"]
start = dt.datetime(2015,12,31)
end = dt.datetime(2020,12,31)
data = get(tickers, start, end)
# data = data.resample('m').mean()
data
# Isolate the `Adj Close` values and transform the DataFrame
daily_close_px = data[['Adj Close']].reset_index().pivot('Date', 'Ticker', 'Adj Close')
# Daily percentage change
daily_pct_change = daily_close_px.pct_change()
# distributions plot
daily_pct_change.hist(bins=50, sharex=True, figsize=(12,8))
plt.show()
# Isolate the `Adj Close` values and transform the DataFrame
close_px = data[['Adj Close']].reset_index().pivot('Date', 'Ticker', 'Adj Close')
mtly_close_px = close_px.resample('m').mean()
# print(mtly_close_px)
# Monthly percentage change
mtly_return = mtly_close_px.pct_change()
# distributions plot
mtly_return.hist(bins=20, sharex=True, figsize=(12,8))
plt.show()
# scatter matrix
pd.plotting.scatter_matrix(mtly_return, diagonal='kde', figsize=(15,12))
plt.grid(True);plt.show()
bpmc = pd.DataFrame(mtly_close_px['BPMC'])
# The first component to extract is the trend.
bpmc['trend'] = bpmc.rolling(window=3, min_periods=3).mean()
# Seasonality is the cyclical pattern that remains in the series once the trend has been removed.
bpmc['detrend_a'] = bpmc['BPMC'] - bpmc['trend']
bpmc['detrend_b'] = bpmc['BPMC'] / bpmc['trend']
# To estimate the seasonality we look at the typical de-trended values over a cycle.
# Here we take the overall mean of the de-trended values as a simple estimate.
bpmc['seasonal_a'] = bpmc['detrend_a'].mean()
bpmc['seasonal_b'] = bpmc['detrend_b'].mean()
# Now that we have our two components, we can calculate the residual in both situations and see which has the better fit.
bpmc['residual_a'] = bpmc['detrend_a'] - bpmc['seasonal_a']
bpmc['residual_b'] = bpmc['detrend_b'] - bpmc['seasonal_b']
bpmc
import numpy as np
# Define the minimum number of periods to consider
min_periods = 3
# Calculate the volatility
vol = mtly_return.rolling(min_periods).std() * np.sqrt(min_periods)
# Plot the volatility
vol.plot(figsize=(15, 6)); plt.grid(True)
plt.show()
from datetime import datetime
tickers=["SGRY", "NTRA", "Z", "FATE", "DIS", "GM", 'BPMC', "PTC"]
ls_key = 'Adj Close'
start = dt.datetime(2015,12,31)
end = dt.datetime(2020,12,31)
df = yf.download(tickers,start, end)
prices = df[[("Adj Close", s) for s in tickers]]
prices.columns = prices.columns.droplevel(level=0)
prices.index = pd.to_datetime(prices.index)
monthly_ret = prices.pct_change().resample('m').agg(lambda x: (x+1).prod() - 1)
monthly_ret
import numpy as np
past_11 = (monthly_ret + 1).rolling(11).apply(np.prod) - 1
past_11.head(11)
# portfolio formation date
formation = dt.datetime(2016,12,31)
formation
# 1 month prior to formation date
end_measurement = formation - MonthEnd(1)
end_measurement
# 11-month return ending one month before formation (12-1 momentum)
ret_12 = past_11.loc[end_measurement]
ret_12 = ret_12.reset_index()
ret_12
```
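The `agg(lambda x: (x+1).prod() - 1)` call above compounds the simple returns within each month; the arithmetic it performs can be sketched in pure Python:

```python
def compound(returns):
    """Compound a sequence of simple returns into a single period return."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total - 1.0

# Two daily returns of 1% and 2% compound to roughly 3.02% for the period.
monthly = compound([0.01, 0.02])
```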
## Rank
```
ret_12['decile'] = pd.qcut(ret_12.iloc[:, 1], 10, labels=False, duplicates='drop')
ret_12
winners = ret_12[ret_12.decile >= 8]
losers = ret_12[ret_12.decile <= 1]
print('Winners:'); print(winners); print()
print('Losers:'); print(losers)
winner_ret = monthly_ret.loc[formation + MonthEnd(1),
monthly_ret.columns.isin(winners['index'])]
loser_ret = monthly_ret.loc[formation + MonthEnd(1),
monthly_ret.columns.isin(losers['index'])]
momentum_profit = winner_ret.mean() - loser_ret.mean()
print(momentum_profit)
```
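`pd.qcut(..., 10, labels=False)` assigns each stock a decile by the rank of its past return; the underlying rank-to-decile arithmetic can be sketched in pure Python (ignoring tie handling and the `duplicates='drop'` edge case):

```python
def decile(values):
    """Assign each value a decile label 0-9 based on its rank, a sketch of
    what pd.qcut(..., 10, labels=False) does when there are no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, idx in enumerate(order):
        labels[idx] = rank * 10 // len(values)
    return labels

# Higher past return maps to a higher decile label.
labels = decile([0.12, -0.05, 0.30, 0.07])
```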
## Function to define momentum:
```
def momentum(formation):
    end_measurement = formation - MonthEnd(1)
    ret_12 = past_11.loc[end_measurement]
    ret_12 = ret_12.reset_index()
    ret_12['decile'] = pd.qcut(ret_12.iloc[:, 1], 10, labels=False, duplicates='drop')
    winners = ret_12[ret_12.decile >= 8]
    losers = ret_12[ret_12.decile <= 1]
    winner_ret = monthly_ret.loc[formation + MonthEnd(1),
                                 monthly_ret.columns.isin(winners['index'])]
    loser_ret = monthly_ret.loc[formation + MonthEnd(1),
                                monthly_ret.columns.isin(losers['index'])]
    momentum_profit = winner_ret.mean() - loser_ret.mean()
    return momentum_profit

momentum(formation)
for i in range(12*10):
    print(formation + MonthEnd(i))

profits = []
dates = []
for i in range(12*4):
    profits.append(momentum(formation + MonthEnd(i)))
    dates.append(formation + MonthEnd(i))
dataframe = pd.DataFrame(profits[1:])
dataframe
dates
```
## Benchmarking against SP500
```
SP = yf.download('^GSPC', start=dates[0], end=dates[-1])
SP = SP['Adj Close']
SP_mthly_ret = SP.pct_change().resample('m').agg(lambda x: (x+1).prod() - 1)
print(SP_mthly_ret)
dataframe['SP500'] = SP_mthly_ret.values
print(dataframe)
dataframe['excess'] = dataframe.iloc[:,0] - dataframe.iloc[:,1]
print(dataframe)
dataframe['outperformed'] = ['YES' if i > 0 else "NO" for i in dataframe.excess]
print(dataframe)
pd.value_counts(dataframe.outperformed)
```
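The benchmarking above reduces to per-month excess returns and a count of outperforming months; a pure-Python sketch with made-up return values:

```python
def outperformance(strategy, benchmark):
    """Monthly excess returns and the count of months the strategy wins."""
    excess = [s - b for s, b in zip(strategy, benchmark)]
    wins = sum(1 for e in excess if e > 0)
    return excess, wins

# Hypothetical monthly returns for three months.
excess, wins = outperformance([0.03, -0.01, 0.02], [0.01, 0.00, 0.04])
```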
# Bernstein-Vazirani Algorithm
In this section, we first introduce the Bernstein-Vazirani problem, its classical solution, and the quantum algorithm to solve it. We then implement the quantum algorithm using Qiskit and run it on both a simulator and a device.
## Contents
1. [The Bernstein-Vazirani Algorithm](#algorithm)
1.1 [Bernstein-Vazirani Problem](#bvproblem)
1.2 [The Classical Solution](#classical-solution)
1.3 [The Quantum Solution](#quantum-solution)
2. [Example](#example)
3. [Qiskit Implementation](#implementation)
3.1 [Simulation](#simulation)
3.2 [Device](#device)
4. [Exercises](#problems)
5. [References](#references)
## 1. The Bernstein-Vazirani Algorithm<a id='algorithm'></a>
The Bernstein-Vazirani algorithm, first introduced in Reference [1], can be seen as an extension of the Deutsch-Jozsa algorithm we covered in the last section. It showed that there can be advantages in using a quantum computer as a computational tool for more complex problems than the Deutsch-Jozsa problem.
### 1.1 The Bernstein-Vazirani Problem <a id='bvproblem'> </a>
We are again given a black-box function $f$, which takes as input a string of bits ($x$), and returns either $0$ or $1$, that is:
$$f(\{x_0,x_1,x_2,...\}) \rightarrow 0 \textrm{ or } 1 \textrm{ where } x_n \textrm{ is }0 \textrm{ or } 1 $$
Instead of the function being balanced or constant as in the Deutsch-Jozsa problem, now the function is guaranteed to return the bitwise product of the input with some string, $s$. In other words, given an input $x$, $f(x) = s \cdot x \, \text{(mod 2)}$. We are expected to find $s$. As a classical reversible circuit, the Bernstein-Vazirani oracle looks like this:
*(Figure: the Bernstein-Vazirani oracle as a classical reversible circuit.)*
### 1.2 The Classical Solution <a id='classical-solution'> </a>
Classically, the oracle returns:
$$f_s(x) = s \cdot x \mod 2$$
given an input $x$. Thus, the hidden bit string $s$ can be revealed by querying the oracle with the sequence of inputs:
|Input(x)|
|:-----:|
|100...0|
|010...0|
|001...0|
|000...1|
where each query reveals a different bit of $s$ (the bit $s_i$). For example, with `x = 1000...0` we can obtain the least significant bit of $s$; with `x = 0100...0` we can find the next least significant bit, and so on. This means we would need to call the function $f_s(x)$ $n$ times.
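This classical query strategy is easy to sketch in pure Python (an illustration; the oracle here is an ordinary function, not a quantum circuit):

```python
def make_oracle(s):
    """Classical Bernstein-Vazirani oracle f_s(x) = s . x mod 2,
    with s and x given as bit strings like '011'."""
    def f(x):
        return sum(int(si) * int(xi) for si, xi in zip(s, x)) % 2
    return f

def recover_secret(f, n):
    """Recover s with n oracle calls, one unit bit string per call."""
    bits = []
    for i in range(n):
        x = ['0'] * n
        x[i] = '1'
        bits.append(str(f(''.join(x))))
    return ''.join(bits)

f = make_oracle('011')
secret = recover_secret(f, 3)   # '011'
```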
### 1.3 The Quantum Solution <a id='quantum-solution'> </a>
Using a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$. The quantum Bernstein-Vazirani algorithm to find the hidden bit string is very simple:
1. Initialize the input qubits to the $|0\rangle^{\otimes n}$ state, and the output qubit to $|{-}\rangle$.
2. Apply Hadamard gates to the input register
3. Query the oracle
4. Apply Hadamard gates to the input register
5. Measure
*(Figure: quantum circuit implementing the Bernstein-Vazirani algorithm.)*
To explain the algorithm, let’s look more closely at what happens when we apply a H-gate to each qubit. If we have an $n$-qubit state, $|a\rangle$, and apply the H-gates, we will see the transformation:
$$
|a\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} (-1)^{a\cdot x}|x\rangle.
$$
<details>
<summary>Explain Equation (Click to Expand)</summary>
We remember the Hadamard performs the following transformations on one qubit:
$$
H|0\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)
$$ $$
H|1\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)
$$
Using summation notation, we could rewrite it like this:
$$
H|a\rangle = \frac{1}{\sqrt{2}}\sum_{x\in \{0,1\}} (-1)^{a\cdot x}|x\rangle.
$$
For two qubits, applying a Hadamard to each performs the following transformations:
$$
H^{\otimes 2}|00\rangle = \tfrac{1}{2}(|00\rangle + |01\rangle + |10\rangle + |11\rangle)
$$ $$
H^{\otimes 2}|01\rangle = \tfrac{1}{2}(|00\rangle - |01\rangle + |10\rangle - |11\rangle)
$$ $$
H^{\otimes 2}|10\rangle = \tfrac{1}{2}(|00\rangle + |01\rangle - |10\rangle - |11\rangle)
$$ $$
H^{\otimes 2}|11\rangle = \tfrac{1}{2}(|00\rangle - |01\rangle - |10\rangle + |11\rangle)
$$
We can express this using the summation below:
$$
H^{\otimes 2}|a\rangle = \frac{1}{2}\sum_{x\in \{0,1\}^2} (-1)^{a\cdot x}|x\rangle
$$
You will hopefully now see how we arrive at the equation above.
</details>
In particular, when we start with a quantum register $|00\dots 0\rangle$ and apply $n$ Hadamard gates to it, we have the familiar quantum superposition:
$$
|00\dots 0\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} |x\rangle
$$
In this case, the phase term $(-1)^{a\cdot x}$ disappears, since $a=0$, and thus $(-1)^{a\cdot x} = 1$.
The classical oracle $f_s$ returns $1$ for any input $x$ such that $s \cdot x\mod 2 = 1$, and returns $0$ otherwise. If we use the same phase kickback trick from the Deutsch-Jozsa algorithm and act on a qubit in the state $|{-}\rangle$, we get the following transformation:
$$
|x \rangle \xrightarrow{f_s} (-1)^{s\cdot x} |x \rangle
$$
The algorithm to reveal the hidden bit string follows naturally by querying the quantum oracle $f_s$ with the quantum superposition obtained from the Hadamard transformation of $|00\dots 0\rangle$. Namely,
$$
|00\dots 0\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} |x\rangle \xrightarrow{f_s} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} (-1)^{s\cdot x}|x\rangle
$$
Because the inverse of the $n$ Hadamard gates is again the $n$ Hadamard gates, we can obtain $s$ by
$$
\frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} (-1)^{s\cdot x}|x\rangle \xrightarrow{H^{\otimes n}} |s\rangle
$$
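The derivation above can be checked numerically with a tiny pure-Python state-vector sketch (an illustration, not Qiskit): prepare the uniform superposition with the oracle phases $(-1)^{s\cdot x}$ applied, apply the final Hadamard layer via the summation formula, and all of the amplitude lands on $|s\rangle$. Bit strings are interpreted as integers, so the dot product mod 2 is the parity of a bitwise AND.

```python
import math

def bv_simulate(s):
    """Simulate the Bernstein-Vazirani algorithm on n = len(s) qubits;
    returns the bit string that the final measurement yields."""
    n = len(s)
    N = 2 ** n
    s_int = int(s, 2)
    # Uniform superposition with the oracle phase (-1)^{s.x} applied.
    amps = [(-1) ** bin(s_int & x).count('1') / math.sqrt(N) for x in range(N)]
    # Final Hadamard layer: amplitude of |a> = (1/sqrt(N)) sum_x (-1)^{a.x} amps[x].
    out = [sum((-1) ** bin(a & x).count('1') * amps[x] for x in range(N)) / math.sqrt(N)
           for a in range(N)]
    best = max(range(N), key=lambda a: abs(out[a]))
    return format(best, '0{}b'.format(n))

result = bv_simulate('11')   # '11'
```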
## 2. Example <a id='example'></a>
Let's go through a specific example for $n=2$ qubits and a secret string $s=11$. Note that we are following the formulation in Reference [2] that generates a circuit for the Bernstein-Vazirani quantum oracle using only one register.
<ol>
<li> The register of two qubits is initialized to zero:
$$\lvert \psi_0 \rangle = \lvert 0 0 \rangle$$
</li>
<li> Apply a Hadamard gate to both qubits:
$$\lvert \psi_1 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle + \lvert 0 1 \rangle + \lvert 1 0 \rangle + \lvert 1 1 \rangle \right) $$
</li>
<li> For the string $s=11$, the quantum oracle performs the operation:
$$
|x \rangle \xrightarrow{f_s} (-1)^{x\cdot 11} |x \rangle.
$$
$$\lvert \psi_2 \rangle = \frac{1}{2} \left( (-1)^{00\cdot 11}|00\rangle + (-1)^{01\cdot 11}|01\rangle + (-1)^{10\cdot 11}|10\rangle + (-1)^{11\cdot 11}|11\rangle \right)$$
$$\lvert \psi_2 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle - \lvert 0 1 \rangle - \lvert 1 0 \rangle + \lvert 1 1 \rangle \right)$$
</li>
<li> Apply a Hadamard gate to both qubits:
$$\lvert \psi_3 \rangle = \lvert 1 1 \rangle$$
</li>
<li> Measure to find the secret string $s=11$
</li>
</ol>
Use the widget `bv_widget` below. Press the buttons to apply the different steps, and try to follow the algorithm through. You can change the number of input qubits and the value of the secret string through the first two positional arguments.
```
from qiskit_textbook.widgets import bv_widget
bv_widget(2, "11")
```
## 3. Qiskit Implementation <a id='implementation'></a>
We'll now walk through the Bernstein-Vazirani algorithm implementation in Qiskit for a three bit function with $s=011$.
```
# initialization
import matplotlib.pyplot as plt
import numpy as np
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, transpile, assemble
# import basic plot tools
from qiskit.visualization import plot_histogram
```
We first set the number of qubits used in the experiment, and the hidden bit string $s$ to be found by the algorithm. The hidden bit string $s$ determines the circuit for the quantum oracle.
```
n = 3 # number of qubits used to represent s
s = '011' # the hidden binary string
```
We then use Qiskit to program the Bernstein-Vazirani algorithm.
```
# We need a circuit with n qubits, plus one auxiliary qubit
# Also need n classical bits to write the output to
bv_circuit = QuantumCircuit(n+1, n)

# put auxiliary in state |->
bv_circuit.h(n)
bv_circuit.z(n)

# Apply Hadamard gates before querying the oracle
for i in range(n):
    bv_circuit.h(i)

# Apply barrier
bv_circuit.barrier()

# Apply the inner-product oracle
s = s[::-1]  # reverse s to fit qiskit's qubit ordering
for q in range(n):
    if s[q] == '0':
        bv_circuit.i(q)
    else:
        bv_circuit.cx(q, n)

# Apply barrier
bv_circuit.barrier()

# Apply Hadamard gates after querying the oracle
for i in range(n):
    bv_circuit.h(i)

# Measurement
for i in range(n):
    bv_circuit.measure(i, i)

bv_circuit.draw()
```
### 3a. Experiment with Simulators <a id='simulation'></a>
We can run the above circuit on the simulator.
```
# use local simulator
aer_sim = Aer.get_backend('aer_simulator')
shots = 1024
qobj = assemble(bv_circuit)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
```
We can see that the result of the measurement is the hidden string `011`.
### 3b. Experiment with Real Devices <a id='device'></a>
We can run the circuit on the real device as below.
```
# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to 5 qubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
provider.backends()
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits <= 5 and
x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
shots = 1024
transpiled_bv_circuit = transpile(bv_circuit, backend)
job = backend.run(transpiled_bv_circuit, shots=shots)
job_monitor(job, interval=2)
# Get the results from the computation
results = job.result()
answer = results.get_counts()
plot_histogram(answer)
```
As we can see, most of the results are `011`. The other results are due to errors in the quantum computation.
## 4. Exercises <a id='problems'></a>
1. Use the widget below to see the Bernstein-Vazirani algorithm in action on different oracles:
```
from qiskit_textbook.widgets import bv_widget
bv_widget(3, "011", hide_oracle=False)
```
2. The above [implementation](#implementation) of Bernstein-Vazirani is for a secret bit string $s = 011$. Modify the implementation for a secret string $s = 1011$. Are the results what you expect? Explain.
3. The above [implementation](#implementation) of Bernstein-Vazirani is for a secret bit string $s = 011$. Modify the implementation for a secret string $s = 11101101$. Are the results what you expect? Explain.
## 5. References <a id='references'></a>
1. Ethan Bernstein and Umesh Vazirani (1997) "Quantum Complexity Theory" SIAM Journal on Computing, Vol. 26, No. 5: 1411-1473, [doi:10.1137/S0097539796300921](https://doi.org/10.1137/S0097539796300921).
2. Jiangfeng Du, Mingjun Shi, Jihui Wu, Xianyi Zhou, Yangmei Fan, BangJiao Ye, Rongdian Han (2001) "Implementation of a quantum algorithm to solve the Bernstein-Vazirani parity problem without entanglement on an ensemble quantum computer", Phys. Rev. A 64, 042306, [10.1103/PhysRevA.64.042306](https://doi.org/10.1103/PhysRevA.64.042306), [arXiv:quant-ph/0012114](https://arxiv.org/abs/quant-ph/0012114).
```
import qiskit.tools.jupyter
%qiskit_version_table
```
# Simple MiniBatchSparsePCA
This code template performs Mini-Batch Sparse Principal Component Analysis, a dimensionality reduction technique, in Python. It finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controlled by the coefficient of the L1 penalty, given by the parameter alpha.
### Required Packages
```
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import MiniBatchSparsePCA
from numpy.linalg import eigh
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= " "
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=' '
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and use the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selection
Feature selection is the process of reducing the number of input variables when developing a predictive model. It lowers the computational cost of modelling and, in some cases, improves the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and one-hot encode the string-class columns in the dataset.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"]):
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Choosing the number of components
A vital part of using Mini-Batch Sparse PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components.
This curve quantifies how much of the total variance is contained within the first N components.
### Explained Variance
Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be computed as the ratio of each eigenvalue to the sum of all eigenvalues.
```
def explained_variance_plot(X):
    cov_matrix = np.cov(X, rowvar=False)
    egnvalues, egnvectors = eigh(cov_matrix)
    total_egnvalues = sum(egnvalues)
    var_exp = [(i / total_egnvalues) for i in sorted(egnvalues, reverse=True)]
    plt.plot(np.cumsum(var_exp))
    plt.xlabel('number of components')
    plt.ylabel('cumulative explained variance')
    return var_exp

var_exp = explained_variance_plot(X)
```
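The ratios plotted by `explained_variance_plot` are simply each eigenvalue divided by the eigenvalue total; a small worked example with made-up eigenvalues:

```python
def explained_variance_ratios(eigenvalues):
    """Per-component variance ratios and their cumulative sum, largest first."""
    total = sum(eigenvalues)
    ratios = [v / total for v in sorted(eigenvalues, reverse=True)]
    cumulative = []
    running = 0.0
    for r in ratios:
        running += r
        cumulative.append(running)
    return ratios, cumulative

# Hypothetical eigenvalues of a 4-feature covariance matrix.
ratios, cumulative = explained_variance_ratios([4.0, 3.0, 2.0, 1.0])
```

Here the first two components would already explain 70% of the variance.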
#### Scree plot
The scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope. The components on the shallow slope contribute little to the solution.
```
plt.plot(var_exp, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
```
### Model
Mini-batch Sparse Principal Components Analysis
Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha.
#### Model Tuning Parameters:
[API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.MiniBatchSparsePCA.html)
1. n_components: int, default=None
> number of sparse atoms to extract
2. alpha: int, default=1
> Sparsity controlling parameter. Higher values lead to sparser components.
3. ridge_alpha: float, default=0.01
> Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.
4. n_iter: int, default=100
> number of iterations to perform for each mini batch
5. batch_size: int, default=3
> the number of features to take in each mini batch
6. shuffle: bool, default=True
> whether to shuffle the data before splitting it in batches
7. method: {‘lars’, ‘cd’}, default=’lars’
> lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path) cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
8. random_state: int, RandomState instance or None, default=None
> Used for random shuffling when shuffle is set to True, during online dictionary learning. Pass an int for reproducible results across multiple function calls.
```
mbspca = MiniBatchSparsePCA(n_components=3)
pcaX = pd.DataFrame(data = mbspca.fit_transform(X))
```
#### Output Dataframe
```
finalDf = pd.concat([pcaX, Y], axis = 1)
finalDf.head()
```
#### Creator: Viraj Jayant, Github: [Profile](https://github.com/Viraj-Jayant)
### Dependencies
```
import sys
sys.path.append("../")
import math
from tqdm import tqdm
import numpy as np
import tensorflow as tf
from PIL import Image
import soundfile as sf
import json
import os
import matplotlib.pyplot as plt
from IPython.display import clear_output
from lib.models.Dual_Res_UNet_v2 import Dual_Res_UNet_v2
import lib.utils as utils
import IPython.display as ipd
TESLA_K40c = '/gpu:1'
GTX_1080 = '/gpu:0'
GPU = TESLA_K40c
```
### Loading experiment data
```
#set experiment ID
with tf.device(GPU):
    EXP_ID = "Dual_Res_UNet_v2_MDCT_64"
    utils.create_experiment_folders(EXP_ID)
    utils.load_experiment_data()
```
### Model instantiation
```
with tf.device(GPU):
    model = Dual_Res_UNet_v2()
    model.build((None, 64, 64, 1))
    print(model.summary())
    model.load_weights("model_save/Res_UNet_v2_MDCT_64_10_50/model_best_valid/model")
    for i in range(16):
        model.layers[i].trainable = False
    print(model.summary())
```
### Loading Dataset
```
train_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_train_MDCT.npy", mmap_mode='c')
train_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_train_MDCT.npy", mmap_mode='c')
qtd_traning = train_x.shape
print("Loaded",qtd_traning, "samples")
valid_x_1 = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_val_MDCT.npy", mmap_mode='c')
valid_y_1 = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_val_MDCT.npy", mmap_mode='c')
qtd_traning = valid_x_1.shape
print("Loaded",qtd_traning, "samples")
```
### Dataset Normalization and Batches split
```
value = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/scale_and_shift_MDCT_64.npy", mmap_mode='c')
print(value)
SHIFT_VALUE_X, SHIFT_VALUE_Y, SCALE_VALUE_X, SCALE_VALUE_Y = value[0], value[1], value[2], value[3]
# SHIFT_VALUE_X, SHIFT_VALUE_Y, SCALE_VALUE_X, SCALE_VALUE_Y = utils.get_shift_scale_maxmin(train_x, train_y, valid_x_1, valid_y_1)
mini_batch_size = 117
num_train_minibatches = math.floor(train_x.shape[0]/mini_batch_size)
num_val_minibatches = math.floor(valid_x_1.shape[0]/mini_batch_size)
print("train_batches:", num_train_minibatches, "valid_batches:", num_val_minibatches)
```
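Each batch is normalized as `(x + shift) / scale + gamma` before entering the network, as in the validation and training steps of this notebook; a pure-Python sketch of that transform and its inverse, with made-up constants:

```python
def normalize(x, shift, scale, gamma=0.001):
    """Shift/scale normalization applied to each batch before the network."""
    return (x + shift) / scale + gamma

def denormalize(y, shift, scale, gamma=0.001):
    """Inverse transform, mapping network outputs back to spectrogram values."""
    return (y - gamma) * scale - shift

# Hypothetical shift/scale constants; the real ones are loaded from the
# scale_and_shift_MDCT_64.npy file above.
y = normalize(2.0, shift=1.0, scale=4.0)
x_back = denormalize(y, shift=1.0, scale=4.0)
```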
### Metrics
```
with tf.device(GPU):
    # default tf.keras metrics
    train_loss = tf.keras.metrics.Mean(name='train_loss')
```
### Set Loss and load model weights
```
with tf.device(GPU):
    loss_object_recons = tf.keras.losses.MeanSquaredError()
    loss_object_refine = tf.keras.losses.MeanSquaredError()
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    # get last saved epoch index and best result in validation step
    CURRENT_EPOCH, BEST_VALIDATION = utils.get_model_last_data()
    if CURRENT_EPOCH > 0:
        print("Loading last model state in epoch", CURRENT_EPOCH)
        model.load_weights(utils.get_exp_folder_last_epoch())
        print("Best validation result was PSNR=", BEST_VALIDATION)
```
### Training
```
with tf.device(GPU):
@tf.function
def train_step(patch_x, patch_y):
with tf.GradientTape() as tape:
            preds = model(patch_x)  # one forward pass; the model returns [reconstruction, refinement]
            pred_reconst, pred_refine = preds[0], preds[1]
# reconstruction_loss = loss_object_recons(patch_y, pred_reconst)
refinement_loss = loss_object_refine(patch_y, pred_refine)
# loss = reconstruction_loss + refinement_loss
loss = refinement_loss
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
def valid_step(valid_x, valid_y, num_val_minibatches, mini_batch_size):
valid_mse = tf.keras.metrics.MeanSquaredError(name='train_mse')
valid_custom_metrics = utils.CustomMetric()
for i in tqdm(range(num_val_minibatches)):
data_x = valid_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = valid_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
data_x = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
data_y = ((data_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
predictions = model(data_x)[1]
valid_mse(data_y, predictions)
predictions = predictions.numpy()
data_y = data_y.numpy()
#feed the metric evaluator
valid_custom_metrics.feed(data_y, predictions)
#get metric results
psnr, nrmse = valid_custom_metrics.result()
valid_mse_result = valid_mse.result().numpy()
valid_custom_metrics.reset_states()
valid_mse.reset_states()
return psnr, nrmse, valid_mse_result
MAX_EPOCHS = 100
EVAL_STEP = 1
CONST_GAMA = 0.001
for epoch in range(CURRENT_EPOCH, MAX_EPOCHS):
#TRAINING
print("TRAINING EPOCH", epoch)
for k in tqdm(range(0, num_train_minibatches)):
seismic_x = train_x[k * mini_batch_size : k * mini_batch_size + mini_batch_size]
seismic_y = train_y[k * mini_batch_size : k * mini_batch_size + mini_batch_size]
seismic_x = tf.convert_to_tensor(seismic_x, dtype=tf.float32)
seismic_y = tf.convert_to_tensor(seismic_y, dtype=tf.float32)
seismic_x = ((seismic_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
seismic_y = ((seismic_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
train_step(seismic_x, seismic_y)
#VALIDATION
if epoch%EVAL_STEP == 0:
clear_output()
print("VALIDATION EPOCH", epoch)
#saving last epoch model
model.save_weights(utils.get_exp_folder_last_epoch(), save_format='tf')
#valid with set 1
print("Validation set")
psnr_1, nrmse_1, mse_1 = valid_step(valid_x_1, valid_y_1, num_val_minibatches, mini_batch_size)
#valid with set 2
#print("Validation set 2")
#psnr_2, nrmse_2, mse_2 = valid_step(valid_x_2, valid_y_2, num_val_minibatches, mini_batch_size)
psnr_2, nrmse_2, mse_2 = 0, 0, 0
#valid with set 3
#print("Validation set 3")
#psnr_3, nrmse_3, mse_3 = valid_step(valid_x_3, valid_y_3, num_val_minibatches, mini_batch_size)
psnr_3, nrmse_3, mse_3 = 0, 0, 0
utils.update_chart_data(epoch=epoch, train_mse=train_loss.result().numpy(),
valid_mse=[mse_1,mse_2,mse_3], psnr=[psnr_1,psnr_2,psnr_3], nrmse=[nrmse_1,nrmse_2, nrmse_3])
utils.draw_chart()
#saving best validation model
if psnr_1 > BEST_VALIDATION:
BEST_VALIDATION = psnr_1
model.save_weights(utils.get_exp_folder_best_valid(), save_format='tf')
train_loss.reset_states()
utils.draw_chart()
# experiment results
print(utils.get_experiment_results())
with tf.device(GPU):
#load best model
model.load_weights(utils.get_exp_folder_best_valid())
CONST_GAMA = 0.001
# valid_x_1 = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_val.npy", mmap_mode='c')
# valid_y_1 = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_val.npy", mmap_mode='c')
qtd_traning = valid_x_1.shape
print("Loaded",qtd_traning, "samples")
# #normalization
# test_x = utils.shift_and_normalize(test_x, SHIFT_VALUE_X, SCALE_VALUE_X)
# test_y = utils.shift_and_normalize(test_y, SHIFT_VALUE_Y, SCALE_VALUE_Y)
#batches
num_val_minibatches = math.floor(valid_x_1.shape[0]/mini_batch_size)
# test_batches = utils.random_mini_batches(test_x, test_y, None, None, 8, seed=0)
#metrics
val_mse = tf.keras.metrics.MeanSquaredError(name='val_mse')
val_custom_metrics = utils.CustomMetric()
f = open('/home/arthursrr/Documentos/Audio_Inpainting/Datasets/idx_genders_val.json', "r")
idx_gen = json.loads(f.read())
for k in idx_gen:
for i in tqdm(idx_gen[k]):
data_x = valid_x_1[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = valid_y_1[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
data_x = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
data_y = ((data_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
predictions = model(data_x)[1]
val_mse(data_y, predictions)
predictions = predictions.numpy()
data_y = data_y.numpy()
#feed the metric evaluator
val_custom_metrics.feed(data_y, predictions)
#get metric results
psnr, nrmse = val_custom_metrics.result()
val_mse_result = val_mse.result().numpy()
val_custom_metrics.reset_states()
val_mse.reset_states()
print(k ,"\nPSNR:", psnr,"\nNRMSE:", nrmse)
# Closing file
f.close()
```
## Test
```
#load best model
model.load_weights(utils.get_exp_folder_best_valid())
CONST_GAMA = 0.001
test_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/Normalized/X_test.npy", mmap_mode='c')
test_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/Normalized/y_test.npy", mmap_mode='c')
qtd_traning = test_x.shape
print("Loaded",qtd_traning, "samples")
# #normalization
# test_x = utils.shift_and_normalize(test_x, SHIFT_VALUE_X, SCALE_VALUE_X)
# test_y = utils.shift_and_normalize(test_y, SHIFT_VALUE_Y, SCALE_VALUE_Y)
#batches
num_test_minibatches = math.floor(test_x.shape[0]/mini_batch_size)
# test_batches = utils.random_mini_batches(test_x, test_y, None, None, 8, seed=0)
#metrics
test_mse = tf.keras.metrics.MeanSquaredError(name='test_mse')
test_custom_metrics = utils.CustomMetric()
f = open('/home/arthursrr/Documentos/Audio_Inpainting/Datasets/idx_genders_test.json', "r")
idx_gen = json.loads(f.read())
flag = True
data_spec = None
data_np = None
for i in tqdm(range(num_test_minibatches)):
data_x = test_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = test_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
predictions = model(data_x)[1]
    if data_spec is None:
data_spec = predictions
else:
data_spec = tf.concat([data_spec, predictions], axis=0)
if data_spec.shape[0]%29000 == 0:
if flag == True:
data_np = data_spec.numpy()
flag = False
else:
data_np = np.concatenate((data_np, data_spec.numpy()), axis=0)
del data_spec
data_spec = None
if data_spec is not None:
data_np = np.concatenate((data_np, data_spec.numpy()), axis=0)
del data_spec
# Closing file
f.close()
np.save("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/Normalized/X_test_predicted.npy", data_np)
with tf.device(GPU):
#load best model
model.load_weights(utils.get_exp_folder_best_valid())
CONST_GAMA = 0.001
test_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_test.npy", mmap_mode='c')
test_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_test.npy", mmap_mode='c')
qtd_traning = test_x.shape
print("Loaded",qtd_traning, "samples")
# #normalization
# test_x = utils.shift_and_normalize(test_x, SHIFT_VALUE_X, SCALE_VALUE_X)
# test_y = utils.shift_and_normalize(test_y, SHIFT_VALUE_Y, SCALE_VALUE_Y)
#batches
num_test_minibatches = math.floor(test_x.shape[0]/mini_batch_size)
# test_batches = utils.random_mini_batches(test_x, test_y, None, None, 8, seed=0)
#metrics
test_mse = tf.keras.metrics.MeanSquaredError(name='test_mse')
test_custom_metrics = utils.CustomMetric()
f = open('/home/arthursrr/Documentos/Audio_Inpainting/Datasets/idx_genders_test.json', "r")
idx_gen = json.loads(f.read())
for k in idx_gen:
# if k == "Experimental" or k == "Hip-Hop" or k == "Jazz":
for i in tqdm(idx_gen[k]):
data_x = test_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = test_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
data_x = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
data_y = ((data_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
predictions = model(data_x)[1]
test_mse(data_y, predictions)
predictions = predictions.numpy()
data_y = data_y.numpy()
#feed the metric evaluator
test_custom_metrics.feed(data_y, predictions)
#get metric results
psnr, nrmse = test_custom_metrics.result()
test_mse_result = test_mse.result().numpy()
test_custom_metrics.reset_states()
test_mse.reset_states()
print(k ,"\nPSNR:", psnr,"\nNRMSE:", nrmse)
# Closing file
f.close()
def griffin_lim(S, frame_length=256, fft_length=255, stride=64):
'''
TensorFlow implementation of Griffin-Lim
Based on https://github.com/Kyubyong/tensorflow-exercises/blob/master/Audio_Processing.ipynb
'''
S = tf.expand_dims(S, 0)
S_complex = tf.identity(tf.cast(S, dtype=tf.complex64))
y = tf.signal.inverse_stft(S_complex, frame_length, stride, fft_length=fft_length)
for i in range(100):
est = tf.signal.stft(y, frame_length, stride, fft_length=fft_length)
angles = est / tf.cast(tf.maximum(1e-16, tf.abs(est)), tf.complex64)
y = tf.signal.inverse_stft(S_complex * angles, frame_length, stride, fft_length=fft_length)
return tf.squeeze(y, 0)
with tf.device(GPU):
model.load_weights(utils.get_exp_folder_best_valid())
test_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_test.npy", mmap_mode='c')
test_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_test.npy", mmap_mode='c')
qtd_traning = test_x.shape
print("Loaded",qtd_traning, "samples")
#batches
num_test_minibatches = math.floor(test_x.shape[0]/mini_batch_size)
#metrics
test_mse = tf.keras.metrics.MeanSquaredError(name='test_mse')
test_custom_metrics = utils.CustomMetric()
CONST_GAMA = 0.001
f = open('/home/arthursrr/Documentos/Audio_Inpainting/Datasets/idx_genders_test.json', "r")
idx_gen = json.loads(f.read())
wave_original = None
wave_corte = None
wave_pred = None
for k in idx_gen:
path_gen = "/mnt/backup/arthur/Free_Music_Archive/Teste_Dual/"+k
if not os.path.exists(path_gen):
os.makedirs(path_gen)
os.makedirs(path_gen+"/original")
os.makedirs(path_gen+"/cortado")
os.makedirs(path_gen+"/predito")
for i in tqdm(idx_gen[k]):
data_x = test_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = test_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_norm = ((tf.convert_to_tensor(data_x, dtype=tf.float32)+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
predictions = model(data_norm)[1]
predictions = (((predictions-CONST_GAMA)*SCALE_VALUE_X)-SHIFT_VALUE_X).numpy()
#predictions = utils.inv_shift_and_normalize(predictions, SHIFT_VALUE_Y, SCALE_VALUE_Y)
audio_original = None
audio_corte = None
audio_pred = None
for j in range(mini_batch_size):
if j==0:
audio_original = data_y[j,:,:,0]
audio_corte = data_x[j,:,:,0]
audio_pred = predictions[j,:,:,0]
else:
audio_original = np.concatenate((audio_original, data_y[j,:,:,0]), axis=0)
audio_corte = np.concatenate((audio_corte, data_x[j,:,:,0]), axis=0)
audio_pred = np.concatenate((audio_pred, predictions[j,:,:,0]), axis=0)
wave_original = griffin_lim(audio_original, frame_length=256, fft_length=255, stride=64)
wave_corte = griffin_lim(audio_corte, frame_length=256, fft_length=255, stride=64)
wave_pred = griffin_lim(audio_pred, frame_length=256, fft_length=255, stride=64)
sf.write(path_gen+"/original/"+str(i)+".wav", wave_original, 16000, subtype='PCM_16')
sf.write(path_gen+"/cortado/"+str(i)+".wav", wave_corte, 16000, subtype='PCM_16')
sf.write(path_gen+"/predito/"+str(i)+".wav", wave_pred, 16000, subtype='PCM_16')
audio_pred = None
for i in range(0, 58):
if i==0:
audio_pred = predictions[i,:,:,0]
else:
audio_pred = np.concatenate((audio_pred, predictions[i,:,:,0]), axis=0)
audio_pred.shape
audio_corte = None
for i in range(0, 58):
if i==0:
audio_corte = data_x[i,:,:,0]
else:
audio_corte = np.concatenate((audio_corte, data_x[i,:,:,0]), axis=0)
audio_corte.shape
audio_original = None
for i in range(0, 58):
if i==0:
audio_original = data_y[i,:,:,0]
else:
audio_original = np.concatenate((audio_original, data_y[i,:,:,0]), axis=0)
audio_original.shape
wave_original = griffin_lim(audio_original, frame_length=256, fft_length=255, stride=64)
ipd.Audio(wave_original, rate=16000)
wave_corte = griffin_lim(audio_corte, frame_length=256, fft_length=255, stride=64)
ipd.Audio(wave_corte, rate=16000)
wave_pred = griffin_lim(audio_pred, frame_length=256, fft_length=255, stride=64)
ipd.Audio(wave_pred, rate=16000)
# import soundfile as sf
# sf.write('x.wav', wave_corte, 16000, subtype='PCM_16')
# sf.write('pred.wav', wave_pred, 16000, subtype='PCM_16')
```
# Bite Size Bayes
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Review
In [a previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/12_binomial.ipynb) we solved the Euro problem, which involved estimating the proportion of heads we get when we spin a coin on edge.
We used the posterior distribution to test whether the coin is fair or biased, but the answer is not entirely satisfying because it depends on how we define "biased".
In general, this kind of hypothesis testing is not the best use of a posterior distribution because it does not answer the question we really care about. For practical purposes, it is less useful to know *whether* a coin is biased and more useful to know *how* biased.
In this notebook we solve the Bayesian bandit problem, which is similar in the sense that it involves estimating proportions, but different in the sense that we use the posterior distribution as part of a decision-making process.
## The Bayesian bandit problem
Suppose you have several "one-armed bandit" slot machines, and there's reason to think that they have different probabilities of paying off.
Each time you play a machine, you either win or lose, and you can use the outcome to update your belief about the probability of winning.
Then, to decide which machine to play next, you can use the "Bayesian bandit" strategy, explained below.
First, let's see how to do the update.
## The prior
If we know nothing about the probability of winning, we can start with a uniform prior.
```
def decorate_bandit(title):
"""Labels the axes.
title: string
"""
plt.xlabel('Probability of winning')
plt.ylabel('PMF')
plt.title(title)
xs = np.linspace(0, 1, 101)
prior = pd.Series(1/101, index=xs)
prior.plot()
decorate_bandit('Prior distribution')
```
## The likelihood function
The likelihood function computes the probability of an outcome (W or L) for a hypothetical value of $x$, the probability of winning (from 0 to 1).
```
def update(prior, data):
"""Likelihood function for Bayesian bandit
prior: Series that maps hypotheses to probabilities
data: string, either 'W' or 'L'
"""
xs = prior.index
if data == 'W':
prior *= xs
else:
prior *= 1-xs
prior /= prior.sum()
bandit = prior.copy()
update(bandit, 'W')
update(bandit, 'L')
bandit.plot()
decorate_bandit('Posterior distribution, 1 loss, 1 win')
```
**Exercise 1:** Suppose you play a machine 10 times and win once. What is the posterior distribution of $x$?
```
# Solution goes here
```
## Multiple bandits
Now suppose we have several bandits and we want to decide which one to play.
For this example, we have 4 machines with these probabilities:
```
actual_probs = [0.10, 0.20, 0.30, 0.40]
```
The function `play` simulates playing one machine once and returns `W` or `L`.
```
from random import random
from collections import Counter
# count how many times we've played each machine
counter = Counter()
def flip(p):
"""Return True with probability p."""
return random() < p
def play(i):
"""Play machine i.
returns: string 'W' or 'L'
"""
counter[i] += 1
p = actual_probs[i]
if flip(p):
return 'W'
else:
return 'L'
```
Here's a test, playing machine 3 twenty times:
```
for i in range(20):
result = play(3)
print(result, end=' ')
```
Now I'll make four copies of the prior to represent our beliefs about the four machines.
```
beliefs = [prior.copy() for i in range(4)]
```
This function displays four distributions in a grid.
```
options = dict(xticklabels='invisible', yticklabels='invisible')
def plot(beliefs, **options):
for i, b in enumerate(beliefs):
plt.subplot(2, 2, i+1)
b.plot(label='Machine %s' % i)
plt.gca().set_yticklabels([])
plt.legend()
plt.tight_layout()
plot(beliefs)
```
**Exercise 2:** Write a nested loop that plays each machine 10 times; then plot the posterior distributions.
Hint: call `play` and then `update`.
```
# Solution goes here
# Solution goes here
```
After playing each machine 10 times, we can summarize `beliefs` by printing the posterior mean and credible interval:
```
def pmf_mean(pmf):
"""Compute the mean of a PMF.
pmf: Series representing a PMF
return: float
"""
return np.sum(pmf.index * pmf)
from scipy.interpolate import interp1d
def credible_interval(pmf, prob):
"""Compute the mean of a PMF.
pmf: Series representing a PMF
prob: probability of the interval
return: pair of float
"""
# make the CDF
xs = pmf.index
ys = pmf.cumsum()
# compute the probabilities
p = (1-prob)/2
ps = [p, 1-p]
# interpolate the inverse CDF
options = dict(bounds_error=False,
fill_value=(xs[0], xs[-1]),
assume_sorted=True)
interp = interp1d(ys, xs, **options)
return interp(ps)
for i, b in enumerate(beliefs):
print(pmf_mean(b), credible_interval(b, 0.9))
```
## Bayesian Bandits
To get more information, we could play each machine 100 times, but while we are gathering data, we are not making good use of it. The kernel of the Bayesian Bandits algorithm is that it collects and uses data at the same time. In other words, it balances exploration and exploitation.
The following function chooses among the machines so that the probability of choosing each machine is proportional to its "probability of superiority".
```
def pmf_choice(pmf):
    """Draw one random value from a PMF.
    pmf: Series representing a PMF
    returns: quantity from PMF
    """
    return np.random.choice(pmf.index, p=pmf)
def choose(beliefs):
    """Use the Bayesian bandit strategy to choose a machine.
    Draws a sample from each distribution.
    returns: index of the machine that yielded the highest value
    """
    ps = [pmf_choice(b) for b in beliefs]
    return np.argmax(ps)
```
This function chooses one value from the posterior distribution of each machine and then uses `argmax` to find the index of the machine that chose the highest value.
Here's an example.
```
choose(beliefs)
```
**Exercise 3:** Putting it all together, fill in the following function to choose a machine, play once, and update `beliefs`:
```
def choose_play_update(beliefs, verbose=False):
"""Chose a machine, play it, and update beliefs.
beliefs: list of Pmf objects
verbose: Boolean, whether to print results
"""
# choose a machine
machine = ____
# play it
outcome = ____
# update beliefs
update(____)
if verbose:
        print(machine, outcome, pmf_mean(beliefs[machine]))
# Solution goes here
```
Here's an example
```
choose_play_update(beliefs, verbose=True)
```
## Trying it out
Let's start again with a fresh set of machines and an empty `Counter`.
```
beliefs = [prior.copy() for i in range(4)]
counter = Counter()
```
If we run the bandit algorithm 100 times, we can see how `beliefs` gets updated:
```
num_plays = 100
for i in range(num_plays):
choose_play_update(beliefs)
plot(beliefs)
```
We can summarize `beliefs` by printing the posterior mean and credible interval:
```
for i, b in enumerate(beliefs):
print(pmf_mean(b), credible_interval(b, 0.9))
```
The credible intervals usually contain the true values (0.1, 0.2, 0.3, and 0.4).
The estimates are still rough, especially for the lower-probability machines. But that's a feature, not a bug: the goal is to play the high-probability machines most often. Making the estimates more precise is a means to that end, but not an end itself.
Let's see how many times each machine got played. If things go according to plan, the machines with higher probabilities should get played more often.
```
for machine, count in sorted(counter.items()):
print(machine, count)
```
**Exercise 4:** Go back and run this section again with a different value of `num_plays` and see how it does.
## Summary
The algorithm I presented in this notebook is called [Thompson sampling](https://en.wikipedia.org/wiki/Thompson_sampling). It is an example of a general strategy called [Bayesian decision theory](https://wiki.lesswrong.com/wiki/Bayesian_decision_theory), which is the idea of using a posterior distribution as part of a decision-making process, usually by choosing an action that minimizes the costs we expect on average (or maximizes a benefit).
In my opinion, this strategy is the biggest advantage of Bayesian methods over classical statistics. When we represent knowledge in the form of probability distributions, Bayes's theorem tells us how to change our beliefs as we get more data, and Bayesian decision theory tells us how to make that knowledge actionable.
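The summary above can be made concrete with a minimal, self-contained sketch of Thompson sampling. It replaces this notebook's grid-based PMFs with conjugate Beta posteriors (a standard shortcut for win/lose outcomes); the four machines and their payoff probabilities are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
actual_probs = [0.10, 0.20, 0.30, 0.40]  # hypothetical payoff probabilities

# Beta(1, 1) is the uniform prior; with win/lose data the posterior
# stays a Beta distribution, so two counters per machine suffice.
wins = np.ones(4)
losses = np.ones(4)

for _ in range(1000):
    # draw one plausible win probability per machine from its posterior
    samples = rng.beta(wins, losses)
    i = int(np.argmax(samples))         # play the machine with the highest draw
    if rng.random() < actual_probs[i]:  # simulate one pull
        wins[i] += 1
    else:
        losses[i] += 1

plays = wins + losses - 2  # subtract the two prior pseudo-counts
print(plays)
```

With enough plays, the counts concentrate on the best machine, just as the `Counter` results above do.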
# Data Analysis with Python (Pandas)
---
<img src="https://viezeliedjes.files.wordpress.com/2014/09/kbmarskramersceptisch.jpg?w=982&h=760" align="right" width=250/>In the Data Science community, Python is well known for its excellent data manipulation capabilities (e.g. matrix handling). In this chapter, we will review a number of vital external libraries in this respect (numpy, pandas), which allow scholars to load, manipulate and analyze tabular data. As example data, we will work with data from the [Meertens Tune Collection](http://www.liederenbank.nl/mtc/). We will work with the MTC-FS dataset which consists of 4120 digitally encoded vocal folk songs both from *Onder de groene linde* (2503) and from various related written sources (1617).
In this chapter, we will show you some of the basic plotting functionality available in Python. The plots can be embedded in the notebooks by executing the following cell:
```
%matplotlib inline
```
The default colors of matplotlib are not the most pretty ones. We can change that using:
```
import matplotlib
matplotlib.style.use('ggplot')
```
This chapter will introduce you to the basics of the Python Data Analysis Library **Pandas**. The library is very well documented and contains nearly all the functionality you need to work with tabular data and time-indexed data. We begin by importing the library into the Python workspace:
```
import pandas
```
## Reading Data from a csv / excel file
Pandas provides a simple method to read CSV files. To read the metadata file of the Meertens Tune Collection, execute the following cell:
```
df = pandas.read_csv("data/MTC-FS.csv")
```
`pandas.read_csv` returns a `DataFrame` object, which is one of the most important data structures in Pandas. Let's have a look at this data structure:
```
df
```
The IPython notebooks work seamlessly with Pandas and display a nicely formatted HTML table. We can access the first, say, 5 rows of this table using a slicing operation:
```
df[:5]
```
An equivalent, yet slightly better understandable method is to use the `.head` method:
```
df.head()
```
By default `.head` returns the first 5 rows of a DataFrame. The method takes one optional argument `n` with which you can specify how many rows to show:
```
df.head(n=10)
```
The `pandas.read_csv` method takes many optional arguments that allow you to parse all kinds of CSV files (e.g. with different encodings, missing values, different separators etc.). One of the columns in our dataframe contains the date of the recordings. By default, Pandas will parse those columns as string objects. We can specify which columns should be parsed as dates using the following statement:
```
df = pandas.read_csv("data/MTC-FS.csv", parse_dates=['date_of_recording'])
```
Remember, to quickly access the documentation of a method or function, you can use a question mark, as in:
```
pandas.read_csv?
```
Since we will be using many functions such as `.read_csv` it would be convenient not having to type `pandas` all the time. Python allows you to import libraries using an alias, as in:
```
import pandas as pd
```
Now we can use `pd.read_csv` instead of `pandas.read_csv`:
```
df = pd.read_csv("data/MTC-FS.csv", parse_dates=['date_of_recording'])
```
## Accessing Rows & Columns
Pandas DataFrames work much like regular Python dictionaries. You can retrieve the contents of a column as follows:
```
df['tunefamily']
```
If the column name does not contain any spaces, Pandas will turn the column name into an attribute of the DataFrame which can be accessed as follows:
```
df.tunefamily
```
To access a particular row of a DataFrame by integer position, use the `.iloc` indexer (older tutorials use `.ix`, which has been removed from recent versions of Pandas), as in:
```
df.iloc[0]
```
---
#### Q1
Write some code that shows the contents of another column. Tip: you can use `df.columns` to see which column names are available.
```
# insert your code here
```
We have seen the method `.head`. What would be the equivalent method to show the *n* last rows of a DataFrame? Hint: think of a dog. Try printing the *n=20* last rows.
```
# insert your code here
```
---
## Selecting Multiple Columns
Pandas allows you to conveniently select multiple columns in a DataFrame. The syntax is the same as with a single column, but here we provide a list of the columns we want to retrieve:
```
df[['tunefamily', 'title']]
```
These objects are reduced DataFrames that behave exactly the same as the orginal DataFrame. We can slice them as follows:
```
df[['tunefamily', 'title']][:5]
```
Or we first slice the DataFrame and then select the columns we're interested in:
```
df[:5][['tunefamily', 'title']]
```
## Counting discrete variables
Most columns in the MTC dataset are categorical. Pandas provides the convenient method `.value_counts` which returns how often a particular value of a column occurs. Let's see that in action:
```
df.tunefamily.head()
```
As you can see, the column tunefamily contains many duplicates. We would like to obtain a frequency distribution of the tune families in the collection. This can be easily done using:
```
df.tunefamily.value_counts()
```
To print only the *n* most frequent tune families, we can use:
```
df.tunefamily.value_counts()[:10]
```
It gets even better. We can plot these frequency distributions by simply adding `.plot`!
```
df.tunefamily.value_counts()[:10].plot(kind='bar')
```
---
#### Q2
Make a bar plot showing the 10 most frequent places where recordings have been made.
```
# insert your code here
```
Make a density plot of the place name frequency distribution. Tip: search for 'density' at: http://pandas.pydata.org/pandas-docs/stable/visualization.html
```
# insert you code here
```
---
The method `.value_counts` returns an object which is called a `Series` in pandas. DataFrames consist of columns which are Series objects. `Series` have many methods available, such as the `.plot` method. Other methods are: `.sum`, `.mean`, `.median` etc:
```
tune_counts = df.tunefamily.value_counts()
print(tune_counts.sum())
print(tune_counts.mean())
print(tune_counts.median())
```
To get a quick summary of a Series object, use the method `.describe`:
```
tune_counts.describe()
```
---
#### Q3
In the next cell, place your cursor behind the dot and hit TAB to retrieve an overview of the many methods available.
```
tune_counts.
```
Write a line of code which computes the cumulative sum of `tune_counts` and plots it as a line plot. A cumulative sum is a series of partial sums over a given sequence, rather than just the final number in that series (which would be the regular sum).
```
# insert your code here
```
## Data selection under conditions
Often we are interested in particular rows of a dataframe that satisfy particular conditions. We might be interested in all recordings that were made in Groningen, for example, or all recordings before 1980. In the previous chapters you have written a lot of code that involved looping over an iterable and, depending on whether some condition held, appending each item in that iterable to a new list. In Pandas people generally avoid looping and prefer to use so-called vectorized operations that take care of the looping internally.
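The two styles can be placed side by side on a made-up mini DataFrame (the real MTC file is not assumed here): both select the same rows, but the vectorized version pushes the loop into Pandas.

```python
import pandas as pd

# Toy stand-in for the MTC table; the column name matches, the data does not
toy = pd.DataFrame({'tunefamily': ['A', 'B', 'A', 'C'],
                    'title': ['t1', 't2', 't3', 't4']})

# Loop style: iterate row by row and collect matches
rows = [row for _, row in toy.iterrows() if row.tunefamily == 'A']

# Vectorized style: one boolean mask, no explicit Python loop
subset = toy[toy.tunefamily == 'A']

print(len(rows), len(subset))  # both select the same two rows
```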
Say we are interested in all records of which the tune family is equal to `Drie schuintamboers 1`. Normally we would write a loop, such as:

    results = []
    for row in df:
        if row.tunefamily == 'Drie schuintamboers 1':
            results.append(row)
    print(results)

(Note that the above code will not actually work, because of the way DataFrames are structured and looped over, but it shows the standard looping approach.) In Pandas you can achieve the same thing (and better!) by using the following single line:
```
df[df.tunefamily == 'Drie schuintamboers 1']
```
Let's break that statement down. The inner statement `df.tunefamily == 'Drie schuintamboers 1'` tests for each item in the column `tunefamily` whether it is equal to `'Drie schuintamboers 1'`. When we execute that separately, Python returns a Series object with boolean values that are either `True` (if it was a match) or `False` (if the item didn't match):
```
df.tunefamily == 'Drie schuintamboers 1'
```
When we *index* our `df` with this Series object, we get just the rows where the condition holds.
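Both steps are easy to see on a tiny scale. Here is the same pattern on a hypothetical three-row DataFrame:

```python
import pandas as pd

songs = pd.DataFrame({'tunefamily': ['X', 'Y', 'X']}, index=[10, 11, 12])

mask = songs.tunefamily == 'X'   # a boolean Series, aligned on the index
print(mask.tolist())             # [True, False, True]

print(songs[mask].index.tolist())   # rows where the mask is True: [10, 12]
print(songs[~mask].index.tolist())  # ~ negates the mask: [11]
```

Note that `~` negates a mask; masks are combined with `&` a few cells below.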
---
#### Q4
Write a line of code that returns a new dataframe in which all recordings are from Groningen.
```
# insert your code here
```
What is the tunefamily that is most often attested in Amsterdam? (Tip: remember the `value_counts` method you can call on (filtered) DataFrames. Also, Series objects have a method called `idxmax` that returns the index label corresponding to the maximum value.)
```
# insert your code here
```
---
We can easily select data on the basis of multiple conditions. Here is an example:
```
in_Assen = df.place_of_recording == 'Assen'
is_Suze = df.tunefamily == 'Suze mien lam'
df[in_Assen & is_Suze]
```
Just to show you that the returned object is another DataFrame, we select two columns to construct a reduced DataFrame:
```
df[in_Assen & is_Suze][['filename', 'place_of_recording']]
```
---
#### Q5
When we look at all values in `df.place_of_recording` we notice that many values are missing (NaN values).
```
df.place_of_recording
```
We can use the method `.isnull` (e.g. on a column) to select all items that are NaN. Similarly, the method `.notnull` is to select all items that are not NaN. Write a line of code that returns a new DataFrame in which for all records the place of recording is known.
```
# insert your code here
```
---
## TimeSeries
Pandas is a library with tremendous functionality for tabular data. Another key feature of the library is the way it can work with time series. The MTC contains a column `date_of_recording` which provides for most but not all items the date at which the recording was made. In the following analyses, we would like to work with data for which the date of recording is known. We create the corresponding DataFrame as follows:
```
df = df[df.date_of_recording.notnull()]
```
To conveniently work with time series, we set the index of the dataframe (i.e. the labels of the rows) to hold the contents of the column `date_of_recording`. Note that after this operation, the column `date_of_recording` is no longer part of the dataframe but only as an index.
```
df = df.set_index("date_of_recording")
df.head()
```
The Index of our new `df` is now of the type `DatetimeIndex`. (If not, the CSV was not read in with the argument `parse_dates=['date_of_recording']`)
```
type(df.index)
```
This data structure provides a vast amount of methods and functions to manipulate and index the data. For example, we can retrieve the year of each recording as follows:
```
df.index.year
```
Or the day:
```
df.index.day
```
Or the week:
```
df.index.week
```
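A caveat if you are on a recent pandas version (1.1 and later): `DatetimeIndex.week` is deprecated there (and removed in pandas 2.0) in favour of `.isocalendar()`. A sketch of the modern equivalent:

```
import pandas as pd

idx = pd.DatetimeIndex(['1950-01-07', '1950-06-15'])
# .isocalendar() returns a DataFrame with 'year', 'week' and 'day' columns
weeks = idx.isocalendar().week
```

The `week` column holds the ISO week numbers that `idx.week` used to return.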
---
#### Q6
Write a single line of code which returns a new DataFrame consisting of all recordings made on January first (independent of the year). Tip: type `df.index.` followed by TAB to scan through an overview of all the available methods.
```
# insert your code here
```
---
## Add a column to a Dataframe
Pandas makes it really easy to add new columns to a DataFrame. In the following cell we add a column containing the week number at which the recording was made.
```
df['week'] = df.index.week
df.week[:10]
```
---
#### Q7
Add a column `year` to the DataFrame which contains for each record the year of recording.
```
# insert your code here
```
---
## Grouping data
Often we would like to group our data on the basis of some criterion and perform operations on those groups. We could, for example, group all recordings on the basis of where they were recorded. Pandas provides the convenient (though sometimes challenging) method `.groupby`. It works as follows:
```
grouped = df.groupby('place_of_recording')
grouped
```
A `DataFrameGroupBy` object can be thought of as a collection of smaller DataFrames, one per group. We grouped the original dataframe on the key `place_of_recording`, so the `grouped` object contains a DataFrame for each unique place of recording.
We can use the method `.size` to assess the size (i.e. number of records) of each group:
```
grouped.size()
```
When we add the method `.sort_values` and specify that we want the sorting to be done in descending order we obtain the following Series:
```
grouped.size().sort_values(ascending=False)
```
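The split-apply-combine pattern is easiest to see on a toy example (values invented, column names from the MTC):

```
import pandas as pd

toy = pd.DataFrame({
    'place_of_recording': ['Assen', 'Assen', 'Utrecht'],
    'tunefamily': ['A', 'B', 'A'],
})
# Group on a key, count the rows per group, sort in descending order
sizes = toy.groupby('place_of_recording').size().sort_values(ascending=False)
```

Here `sizes` is a Series indexed by place of recording, with 'Assen' first (2 records) and 'Utrecht' second (1 record).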
---
#### Q8
Just to recap, compute the max, min, mean, and median number of items per group.
```
# insert your code here
```
---
Grouped objects are extremely helpful when we want to compute a frequency distribution for each unique item of a particular column. Say we would like to investigate the frequency distribution of tune families per place of recording. Given that we have created a grouped object with `place_of_recording` as key, all we need to do is the following:
```
grouped.tunefamily.value_counts()
```
Let's make it slightly more complex. In the previous example we created a grouped object using a single key. We can also specify multiple keys, as in:
```
grouped = df.groupby(['place_of_recording', 'singer_id_s'])
```
This produces the following data structure:
```
grouped.tunefamily.value_counts()
```
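What multi-key grouping buys you is again clearest on invented data: the result is a Series whose index has one level per grouping key, plus a final level for the counted values.

```
import pandas as pd

toy = pd.DataFrame({
    'place_of_recording': ['Assen', 'Assen', 'Utrecht'],
    'singer_id_s': [1, 2, 1],
    'tunefamily': ['A', 'A', 'B'],
})
# Three-level MultiIndex: (place_of_recording, singer_id_s, tunefamily) -> count
counts = toy.groupby(['place_of_recording', 'singer_id_s']).tunefamily.value_counts()
```

You can then drill down with `.loc`, e.g. `counts.loc[('Assen', 1)]` gives the tune-family distribution for one singer in one place.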
---
#### Q9
Make a plot that visualizes for each year how many recordings were made. Tip: first create a grouped object.
```
# insert your code here
```
---
## String operations
One of my favourite functionalities of Pandas is the way it allows you to do vectorized string operations. These operations are useful when you want to select rows on the basis of string values or when you want to manipulate or correct the data. Let's have a look at some of the string data in the MTC:
```
firstlines = df.firstline
firstlines[:10]
```
Now, `firstlines` is a Series corresponding to the `firstline` column, which contains strings. We can access the string methods of a column with `.str`. In the following cell, place your cursor after the last dot and hit TAB:
```
firstlines.str.
```
As you can see, almost all regular string operations are available. The neat thing is that when you call a method like `.upper`, it is automatically applied to all items in the Series (these are referred to as vectorized operations). Let's see that in action:
```
firstlines.str.upper().head()
```
The method `.contains` is particularly useful if you want to extract all strings that contain a particular substring. We could, for example, search for the string `boom` to extract all records that mention a tree in the first lines:
```
firstlines.str.contains('boom').head()
```
The method `.contains` returns a boolean series which we can use to index the dataframe:
```
df[firstlines.str.contains('boom')]
```
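The same mask-then-index pattern, condensed into a toy example (the lines are invented Dutch phrases):

```
import pandas as pd

lines = pd.Series(['Daar staat een boom', 'In de zomer', 'Onder de groene boom'])
mask = lines.str.contains('boom')   # boolean Series: True where 'boom' occurs
hits = lines[mask]                  # only the matching lines remain
```

`mask` is `[True, False, True]`, so `hits` keeps the first and third lines.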
---
To give another example, let's search for all records of which the first line mentions the word `zomer` (summer):
```
is_summer = firstlines.str.contains('zomer')
is_summer.head()
```
As you can see, this returns a boolean Series object. Note that the original `DatetimeIndex` is still in place. We can use that to conveniently plot all occurrences over the years:
```
is_summer.plot()
```
When we index `firstlines` with `is_summer` we retrieve a new Series object consisting of all records that mention the word `zomer` in their first line:
```
summer_lines = firstlines[is_summer]
summer_lines
```
---
#### Q10
Write some code that computes in which quarter of the year most songs about summer are recorded. Tip: use the `.index` attribute and a `Counter` object from the `collections` library.
```
# insert your code here
```
---
## Exercises
You should now have a basic understanding of some important concepts in the Pandas library. The library contains many more interesting functions and methods, and it takes quite some time to master them all. The best way to become more confident in using the library is by working with your own data and trying to do some basic analyses.
The following exercises are to give some more practice. Some problems will be tough, and you cannot always rely on the information provided in this chapter. This is something you'll have to get used to. Don't worry, visit the documentation website (see http://pandas.pydata.org/pandas-docs/version/0.17.0/), Google for particular problems or ask questions on the Slack channel. Asking for help is all part of learning a new library.
In the following exercises we begin with working on the Dative alternation dataset of Bresnan et al. (2007). In English, the recipient can be realized either as a Noun Phrase (NP), *John gave Mary the book*, or as a Prepositional Phrase (PP), *John gave the book to Mary*. Bresnan et al. (2007) studied several factors that influence the realization of the dative. The complete dataset of Bresnan et al. (2007) is included in the `data` directory under `data/dative.csv`.
Use Pandas to read the dataset as a DataFrame:
```
dative = # insert your code here
```
Inspect the first 10 lines of the dataframe:
```
# insert your code here
```
The dataset provides for each entry the verb that was used. Create a Series object that counts how often each verb is used:
```
# insert your code here
```
Plot the ten most frequent verbs as a pie chart:
```
# insert your code here
```
The realization of the recipient (NP or PP) is stored in the column `RealizationOfRecipient`. How many NP realizations are there and how many PP realizations?
```
# insert your code here
```
The recipients can be either animate or inanimate. How many PP realizations are inanimate? Tip: use `pd.crosstab`.
```
# insert your code here
```
Make a single bar plot that visualizes for both PP and NP realizations of the recipient how often they are animate or inanimate. Tip: use the crosstab method of the previous exercise.
```
# insert your code here
```
Sometimes it is convenient to sort a dataframe on a particular column. Write some code that sorts the `dative` dataframe on the realization of the recipient.
```
# insert your code here
```
Now sort the data on the length of the recipient in **descending** order:
```
# insert your code here
```
Write some code that prints the mean length of the recipient for both animate and inanimate recipients.
```
# insert your code here
```
Make a boxplot that visualizes for each realization of the recipient (i.e. NP or PP) the length of the theme. Tip: make use of the method `dative.boxplot`.
```
# insert your code here
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed Training with Reduction Server
## Overview
This notebook demonstrates how to optimize large distributed training jobs with the Vertex Training Reduction Server. The [TensorFlow NLP Modelling Toolkit](https://github.com/tensorflow/models/tree/master/official/nlp#tensorflow-nlp-modelling-toolkit) from the [TensorFlow Model Garden](https://github.com/tensorflow/models) is used to preprocess the data and create the model and training loop. `MultiWorkerMirroredStrategy` from the `tf.distribute` module is used to distribute model training across multiple machines.
### Dataset
The dataset used for this tutorial is the [Multi-Genre Natural Language Inference Corpus (MNLI)](https://www.tensorflow.org/datasets/catalog/glue#gluemnli) from the GLUE benchmark. This dataset is loaded from [TensorFlow Datasets](https://www.tensorflow.org/datasets), and used to fine tune a BERT model for sentence prediction.
### Objective
In this notebook, you create a custom-trained model from a Python script in a Docker container. You learn how to configure, submit, and monitor a Vertex Training job that uses Reduction Server to optimize the network bandwidth and latency of the gradient reduction operation in distributed training.
The steps performed include:
- Convert the MNLI dataset to the format required by the TensorFlow NLP Modelling Toolkit.
- Create a Vertex AI custom job that uses Reduction Server.
- Submit and monitor the job.
- Cleanup resources.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
**If you are using Colab or Google Cloud Notebooks**, your environment already meets
all the requirements to run this notebook. You can skip this step.
**Otherwise**, make sure your environment meets this notebook's requirements.
You need the following:
* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to [Setting up a Python development
environment](https://cloud.google.com/python/setup) and the [Jupyter
installation guide](https://jupyter.org/install) provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
1. [Install
virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)
and create a virtual environment that uses Python 3. Activate the virtual environment.
1. To install Jupyter, run `pip3 install jupyter` on the
command-line in a terminal shell.
1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
1. Open this notebook in the Jupyter Notebook Dashboard.
### Install the required packages
Install the TensorFlow Official Models and TensorFlow Text libraries.
```
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install $USER_FLAG --upgrade tf-models-official==2.5.0 tensorflow-text==2.5.0
```
Install the latest version of the Vertex SDK for Python.
```
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
Install the latest version of the Google Cloud Storage library.
```
! pip3 install --upgrade google-cloud-storage $USER_FLAG
```
### Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
```
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).
1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
1. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
#### Set your project ID
**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
```
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
```
Otherwise, set your project ID here.
```
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
! gcloud config set project {PROJECT_ID}
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already
authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
**Otherwise**, follow these steps:
1. In the Cloud Console, go to the [**Create service account key**
page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and
click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"
into the filter box, and select
**Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click *Create*. A JSON file that contains your key downloads to your
local environment.
6. Enter the path to your service account key as the
`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
In this example, your training application uses Cloud Storage for accessing training and validation datasets and for storing checkpoints.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are
available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
```
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Import libraries and define constants
```
import json
import pprint
import time
import tensorflow as tf
from official.nlp.bert import tokenization
from official.nlp.data import classifier_data_lib
from google.cloud import aiplatform
from google.cloud.aiplatform_v1beta1 import types
from google.cloud.aiplatform_v1beta1.services.job_service import JobServiceClient
```
### Set up variables
Next, set up some variables used throughout the tutorial.
- API_ENDPOINT: The Vertex API service endpoint for job services.
- PARENT: The Vertex location root path for job resources.
```
API_ENDPOINT = '{}-aiplatform.googleapis.com'.format(REGION)
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
### Set up clients
The Vertex client library follows a client/server model. In your Python script, you create a client that sends requests to and receives responses from the Vertex server.
In this example, you use the `JobServiceClient` for submitting and monitoring custom training jobs.
```
client_options = {"api_endpoint": API_ENDPOINT}
job_client = JobServiceClient(client_options=client_options)
```
### Prepare training and validation datasets
The TensorFlow NLP Modelling Toolkit provides reusable and modularized modeling building blocks. The following function uses utility functions from the toolkit to transform the MNLI dataset into the format expected by the BERT model.
```
def generate_mnli_tfrecords(
train_data_output_path,
eval_data_output_path,
metadata_file_path,
vocab_file='gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16/vocab.txt',
mnli_type='matched',
max_seq_length=128,
do_lower_case=True):
"""Generates MNLI training/validation splits in the TFRecord format compatible with the Modelling Toolkit."""
tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
processor_text_fn = tokenization.convert_to_unicode
if mnli_type == 'matched':
tfds_params = 'dataset=glue/mnli,text_key=hypothesis,text_b_key=premise,train_split=train,dev_split=validation_matched'
else:
tfds_params = 'dataset=glue/mnli,text_key=hypothesis,text_b_key=premise,train_split=train,dev_split=validation_mismatched'
processor = classifier_data_lib.TfdsProcessor(
tfds_params=tfds_params, process_text_fn=processor_text_fn)
metadata = classifier_data_lib.generate_tf_record_from_data_file(
processor,
None,
tokenizer,
train_data_output_path=train_data_output_path,
eval_data_output_path=eval_data_output_path,
max_seq_length=max_seq_length)
with tf.io.gfile.GFile(metadata_file_path, 'w') as writer:
writer.write(json.dumps(metadata, indent=4) + '\n')
# Define data locations
OUTPUT_LOCATION = f'{BUCKET_NAME}/datasets/MNLI'
TRAIN_FILE = f'{OUTPUT_LOCATION}/mnli_train.tf_record'
EVAL_FILE = f'{OUTPUT_LOCATION}/mnli_valid.tf_record'
METADATA_FILE = f'{OUTPUT_LOCATION}/metadata.json'
generate_mnli_tfrecords(TRAIN_FILE, EVAL_FILE, METADATA_FILE)
# Verify that the files were successfully created.
! gsutil ls {OUTPUT_LOCATION}
# Examine the metadata
! gsutil cat {METADATA_FILE}
```
### Create a training container
#### Write Dockerfile
The first step in containerizing your code is to create a Dockerfile. In the Dockerfile, you'll include all the commands needed to run the image such as installing the necessary libraries and setting up the entry point for the training code.
This Dockerfile uses the Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image. The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. After downloading that image, this Dockerfile installs the TensorFlow Official Models and TensorFlow Text libraries, and the Reduction Server NCCL plugin.
```
# Create training image directory
! mkdir training_image
%%writefile training_image/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-5
WORKDIR /
# Installs Reduction Server NCCL plugin
RUN apt remove -y google-fast-socket \
&& echo "deb https://packages.cloud.google.com/apt google-fast-socket main" | tee /etc/apt/sources.list.d/google-fast-socket.list \
&& curl -s -L https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt update && apt install -y google-reduction-server
# Installs Official Models and Text libraries
RUN pip install tf-models-official==2.5.0 tensorflow-text==2.5.0
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python"]
CMD ["-c", "print('TF Model Garden')"]
```
#### Create training application code
Next, you create a `training_image/trainer` directory with a `train.py` script that contains the code for your training application.
The `train.py` training script is based on [the common training driver](https://github.com/tensorflow/models/blob/master/official/nlp/docs/train.md) from the TensorFlow NLP Modelling Toolkit. The common training driver script supports multiple NLP tasks (e.g., pre-training, GLUE and SQuAD fine-tuning) and multiple models (e.g., BERT, ALBERT).
A set of configurations for a specific NLP task is called an experiment. The toolkit includes a set of pre-defined experiments. When you invoke the script, you specify an experiment type using the `--experiment` command line argument. There are two options for overriding the default experiment configuration:
- Specify one or multiple YAML configurations with updated settings using the `--config_file` command line argument
- Provide updated settings as a list of key/value pairs through the `--params_override` command line argument
If you specify both `--config_file` and `--params_override`, the settings in `--params_override` take precedence.
Retrieving the default experiment configuration and merging user provided settings is encapsulated in the `official.core.train_utils.parse_configuration` utility function from the TensorFlow Model Garden.
In the following cell, the [common training driver script](https://github.com/tensorflow/models/blob/master/official/nlp/train.py) has been adapted to work seamlessly on a distributed compute environment provisioned when running a Vertex Training job. The TensorFlow NLP Modelling Toolkit uses [Orbit](https://github.com/tensorflow/models/tree/master/orbit) to implement a custom training loop. The custom training loop saves checkpoints, writes Tensorboard summaries, and saves a trained model to a storage location specified through the `--model_dir` command line argument.
Note that when using the base common training driver in a distributed setting, each worker uses the same base storage location. To avoid conflicts when multiple workers write to the same storage location, the driver code has been modified so that each worker uses a different storage location based on its role in a Vertex compute cluster. This logic is implemented in the `_get_model_dir` utility function.
```
# Create trainer directory
! mkdir training_image/trainer
%%writefile training_image/trainer/train.py
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""TFM common training driver."""
import json
import os
from absl import app
from absl import flags
from absl import logging
import gin
from official.common import distribute_utils
from official.common import flags as tfm_flags
from official.common import registry_imports
from official.core import task_factory
from official.core import train_lib
from official.core import train_utils
from official.modeling import performance
FLAGS = flags.FLAGS
def _get_model_dir(model_dir):
"""Defines utility functions for model saving.
In a multi-worker scenario, the chief worker will save to the
desired model directory, while the other workers will save the model to
temporary directories. It’s important that these temporary directories
are unique in order to prevent multiple workers from writing to the same
location. Saving can contain collective ops, so all workers must save and
not just the chief.
"""
def _is_chief(task_type, task_id):
return ((task_type == 'chief' and task_id == 0) or task_type is None)
tf_config = os.getenv('TF_CONFIG')
if tf_config:
tf_config = json.loads(tf_config)
if not _is_chief(tf_config['task']['type'], tf_config['task']['index']):
model_dir = os.path.join(model_dir,
'worker-{}').format(tf_config['task']['index'])
logging.info('Setting model_dir to: %s', model_dir)
return model_dir
def main(_):
model_dir = _get_model_dir(FLAGS.model_dir)
gin.parse_config_files_and_bindings(FLAGS.gin_file, FLAGS.gin_params)
params = train_utils.parse_configuration(FLAGS)
if 'train' in FLAGS.mode:
# Pure eval modes do not output yaml files. Otherwise continuous eval job
# may race against the train job for writing the same file.
train_utils.serialize_config(params, model_dir)
# Sets mixed_precision policy. Using 'mixed_float16' or 'mixed_bfloat16'
# can have significant impact on model speeds by utilizing float16 in case of
# GPUs, and bfloat16 in the case of TPUs. loss_scale takes effect only when
# dtype is float16
if params.runtime.mixed_precision_dtype:
performance.set_mixed_precision_policy(params.runtime.mixed_precision_dtype)
distribution_strategy = distribute_utils.get_distribution_strategy(
distribution_strategy=params.runtime.distribution_strategy,
all_reduce_alg=params.runtime.all_reduce_alg,
num_gpus=params.runtime.num_gpus,
tpu_address=params.runtime.tpu,
**params.runtime.model_parallelism())
with distribution_strategy.scope():
task = task_factory.get_task(params.task, logging_dir=model_dir)
train_lib.run_experiment(
distribution_strategy=distribution_strategy,
task=task,
mode=FLAGS.mode,
params=params,
model_dir=model_dir)
train_utils.save_gin_config(FLAGS.mode, model_dir)
if __name__ == '__main__':
tfm_flags.define_flags()
app.run(main)
```
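To see what `_get_model_dir` does in practice, here is a standalone sketch of the same logic; the cluster spec and the `gs://` path are illustrative placeholders:

```
import json
import os

# Illustrative TF_CONFIG for worker 1 in a two-worker cluster
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['host0:2222', 'host1:2222']},
    'task': {'type': 'worker', 'index': 1},
})

tf_config = json.loads(os.environ['TF_CONFIG'])
task_type = tf_config['task']['type']
task_index = tf_config['task']['index']
# Same chief test as in the training script above
is_chief = (task_type == 'chief' and task_index == 0) or task_type is None

model_dir = 'gs://my-bucket/model'
if not is_chief:
    # Non-chief workers get a unique subdirectory to avoid write conflicts
    model_dir = os.path.join(model_dir, 'worker-{}'.format(task_index))
```

With this `TF_CONFIG`, worker 1 ends up writing to `gs://my-bucket/model/worker-1`, while the chief would keep the base `model_dir`.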
#### Create base settings for the MNLI fine tuning experiment
The TensorFlow NLP Modelling Toolkit includes a predefined experiment for a set of text classification tasks, the `bert/sentence_prediction` experiment. The base settings in the `bert/sentence_prediction` experiment need to be updated for the MNLI fine tuning task you perform in this example.
In the next cell, you update the settings by creating a YAML configuration file that will be referenced when invoking a training script. As noted earlier, you can fine tune these settings even further for each training run by using the `--params_override` flag.
The configuration file has three sections: `task`, `trainer`, and `runtime`.
* The `task` section contains settings specific to your machine learning task, including a URI to a pre-trained BERT model, training and validation datasets settings, and evaluation metrics.
* The `trainer` section configures the settings that control the custom training loop, like checkpoint interval or a number of training steps.
* The `runtime` section includes the settings for a training runtime: a distributed training strategy, GPU configurations, etc.
```
%%writefile training_image/trainer/glue_mnli_matched.yaml
task:
hub_module_url: 'https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/4'
model:
num_classes: 3
init_checkpoint: ''
metric_type: 'accuracy'
train_data:
drop_remainder: true
global_batch_size: 32
input_path: ''
is_training: true
seq_length: 128
label_type: 'int'
validation_data:
drop_remainder: false
global_batch_size: 32
input_path: ''
is_training: false
seq_length: 128
label_type: 'int'
trainer:
checkpoint_interval: 3000
optimizer_config:
learning_rate:
polynomial:
# 100% of train_steps.
decay_steps: 36813
end_learning_rate: 0.0
initial_learning_rate: 3.0e-05
power: 1.0
type: polynomial
optimizer:
type: adamw
warmup:
polynomial:
power: 1
# ~10% of train_steps.
warmup_steps: 3681
type: polynomial
steps_per_loop: 1000
summary_interval: 1000
# Training data size 392,702 examples, 3 epochs.
train_steps: 36813
validation_interval: 6135
# Eval data size = 9815 examples.
validation_steps: 307
best_checkpoint_export_subdir: 'best_ckpt'
best_checkpoint_eval_metric: 'cls_accuracy'
best_checkpoint_metric_comp: 'higher'
runtime:
distribution_strategy: 'multi_worker_mirrored'
all_reduce_alg: 'nccl'
```
### Build the container
In the next cells, you build the container and push it to Google Container Registry.
```
TRAIN_IMAGE = f'gcr.io/{PROJECT_ID}/mnli_finetuning'
! docker build -t {TRAIN_IMAGE} training_image
! docker push {TRAIN_IMAGE}
```
### Create a custom training job
When you run a distributed training job with Vertex AI, you specify multiple machines (nodes) in a training cluster. The training service allocates the resources for the machine types you specify. Your running job on a given node is called a replica. A group of replicas with the same configuration is called a worker pool. Vertex Training provides 4 worker pools to cover the different types of machine tasks. To use the Reduction Server, you'll need to use 3 of the 4 available worker pools.
* **Worker pool 0** configures the Primary, chief, scheduler, or "master". This worker generally takes on some extra work such as saving checkpoints and writing summary files. There is only ever one chief worker in a cluster, so your worker count for worker pool 0 will always be 1.
* **Worker pool 1** is where you configure the rest of the workers for your cluster.
* **Worker pool 2** manages Reduction Server reducers.
Worker pools 0 and 1 run the custom training container you created in the previous step. Worker pool 2 uses the Reduction Server image provided by Vertex AI.
The helper function below creates a custom job specification using the described worker pool topology.
```
def prepare_custom_job_spec(
job_name,
image_uri,
args,
cmd,
replica_count=1,
machine_type='n1-standard-4',
accelerator_count=0,
accelerator_type='ACCELERATOR_TYPE_UNSPECIFIED',
reduction_server_count=0,
reduction_server_machine_type='n1-highcpu-16',
reduction_server_image_uri='us-docker.pkg.dev/vertex-ai-restricted/training/reductionserver:latest'
):
if accelerator_count > 0:
machine_spec = {
'machine_type': machine_type,
'accelerator_type': accelerator_type,
'accelerator_count': accelerator_count,
}
else:
machine_spec = {
'machine_type': machine_type
}
container_spec = {
'image_uri': image_uri,
'args': args,
'command': cmd,
}
chief_spec = {
'replica_count': 1,
'machine_spec': machine_spec,
'container_spec': container_spec
}
worker_pool_specs = [chief_spec]
if replica_count > 1:
workers_spec = {
'replica_count': replica_count - 1,
'machine_spec': machine_spec,
'container_spec': container_spec
}
worker_pool_specs.append(workers_spec)
    if reduction_server_count > 0:  # any requested reducers get their own worker pool
workers_spec = {
'replica_count': reduction_server_count,
'machine_spec': {
'machine_type': reduction_server_machine_type,
},
'container_spec': {
'image_uri': reduction_server_image_uri
}
}
worker_pool_specs.append(workers_spec)
custom_job_spec = {
'display_name': job_name,
'job_spec': {
'worker_pool_specs': worker_pool_specs
}
}
return custom_job_spec
```
#### Configure worker pools
When choosing the number and type of reducers, consider the network bandwidth supported by a reducer replica’s machine type. In GCP, a VM’s machine type determines its maximum possible egress bandwidth; for example, the egress bandwidth of the `n1-highcpu-16` machine type is capped at 32 Gbps.
Because reducers perform a very limited function (aggregating blocks of gradients), they can run on relatively low-powered, cost-effective machines. Even with a large number of gradients, this computation does not require accelerated hardware or high CPU or memory resources. However, to avoid network bottlenecks, the total aggregate bandwidth of all replicas in the reducer worker pool must be greater than or equal to the total aggregate bandwidth of all replicas in worker pools 0 and 1, which host the GPU workers.
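As a quick sanity check on this sizing rule, you can compute the minimum reducer count from the bandwidth figures. The sketch below is illustrative only: the GPU worker egress value used in the example is an assumption, not a published limit, so substitute the numbers for your actual machine types.

```python
import math

def min_reducer_count(gpu_worker_count, gpu_worker_egress_gbps, reducer_egress_gbps=32):
    """Smallest reducer count whose aggregate egress matches the GPU workers' aggregate egress."""
    total_gpu_bandwidth_gbps = gpu_worker_count * gpu_worker_egress_gbps
    return math.ceil(total_gpu_bandwidth_gbps / reducer_egress_gbps)

# Assumed example: 2 GPU workers with 50 Gbps egress each, n1-highcpu-16 reducers at 32 Gbps
print(min_reducer_count(2, 50))  # → 4
```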
```
REPLICA_COUNT = 2
WORKER_MACHINE_TYPE = 'a2-highgpu-4g'
ACCELERATOR_TYPE = 'NVIDIA_TESLA_A100'
PER_MACHINE_ACCELERATOR_COUNT = 4
PER_REPLICA_BATCH_SIZE = 32
REDUCTION_SERVER_COUNT = 4
REDUCTION_SERVER_MACHINE_TYPE = 'n1-highcpu-16'
```
#### Fine-tune the MNLI experiment settings
The default settings for the MNLI fine-tuning experiment are configured in the YAML configuration file created in the previous steps. To override the defaults for a specific training run, use the `--params_override` flag.
`params_override` accepts a string with comma separated key/value pairs for each parameter to be overridden.
In the next cell you update the following settings:
- `trainer.train_steps` - The number of training steps.
- `trainer.steps_per_loop` - The training script prints out updates about training progress every `steps_per_loop` steps.
- `trainer.summary_interval` - The training script logs TensorBoard summaries every `summary_interval` steps.
- `trainer.validation_interval` - The training script runs validation every `validation_interval` steps.
- `trainer.checkpoint_interval` - The training script creates a checkpoint every `checkpoint_interval` steps.
- `task.train_data.global_batch_size` - Global batch size for training data.
- `task.validation_data.global_batch_size` - Global batch size for validation data.
- `task.train_data.input_path` - Location of the training dataset.
- `task.validation_data.input_path` - Location of the validation dataset.
- `runtime.num_gpus` - The number of GPUs to use on each worker.
```
PARAMS_OVERRIDE = ','.join([
'trainer.train_steps=2000',
'trainer.steps_per_loop=100',
'trainer.summary_interval=100',
'trainer.validation_interval=2000',
'trainer.checkpoint_interval=2000',
'task.train_data.global_batch_size=' + str(REPLICA_COUNT*PER_REPLICA_BATCH_SIZE*PER_MACHINE_ACCELERATOR_COUNT),
'task.validation_data.global_batch_size=' + str(REPLICA_COUNT*PER_REPLICA_BATCH_SIZE*PER_MACHINE_ACCELERATOR_COUNT),
'task.train_data.input_path=' + TRAIN_FILE,
'task.validation_data.input_path=' + EVAL_FILE,
'runtime.num_gpus=' + str(PER_MACHINE_ACCELERATOR_COUNT),
])
```
#### Create custom job spec
After the experimentation and configuration parameters have been defined, you create the custom job spec.
```
JOB_NAME = 'MNLI_{}'.format(time.strftime('%Y%m%d_%H%M%S'))
MODEL_DIR = f'{BUCKET_NAME}/{JOB_NAME}/model'
WORKER_CMD = ['python', 'trainer/train.py']
WORKER_ARGS = [
'--experiment=bert/sentence_prediction',
'--mode=train',
'--model_dir=' + MODEL_DIR,
'--config_file=trainer/glue_mnli_matched.yaml',
'--params_override=' + PARAMS_OVERRIDE,
]
custom_job_spec = prepare_custom_job_spec(
job_name=JOB_NAME,
image_uri=TRAIN_IMAGE,
args=WORKER_ARGS,
cmd=WORKER_CMD,
replica_count=REPLICA_COUNT,
machine_type=WORKER_MACHINE_TYPE,
accelerator_count=PER_MACHINE_ACCELERATOR_COUNT,
accelerator_type=ACCELERATOR_TYPE,
reduction_server_count=REDUCTION_SERVER_COUNT,
reduction_server_machine_type=REDUCTION_SERVER_MACHINE_TYPE,
)
# Examine the spec
pp = pprint.PrettyPrinter()
print(pp.pformat(custom_job_spec))
```
### Submit and monitor the job
Use the Vertex AI job client to submit and monitor a training job.
To submit the job, use the job client service's `create_custom_job` method.
```
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_custom_job(
parent=parent, custom_job=custom_job_spec
)
response
```
Use the job client service's `get_custom_job` method to retrieve information about a running job.
```
client.get_custom_job(name=response.name)
```
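For long-running jobs it is convenient to poll `get_custom_job` until the job reaches a terminal state. The helper below is a hypothetical sketch, not part of the Vertex AI SDK: it takes any zero-argument callable that returns the current state name, which keeps the polling logic itself testable.

```python
import time

# Assumed terminal state names from the Vertex AI JobState enum; verify against your SDK version.
TERMINAL_STATES = ('JOB_STATE_SUCCEEDED', 'JOB_STATE_FAILED', 'JOB_STATE_CANCELLED')

def wait_for_job(get_state, poll_seconds=60, terminal_states=TERMINAL_STATES):
    """Poll get_state() until it returns a terminal state, then return that state."""
    while True:
        state = get_state()
        print(f'Job state: {state}')
        if state in terminal_states:
            return state
        time.sleep(poll_seconds)
```

With the real client this would be called as `wait_for_job(lambda: client.get_custom_job(name=response.name).state.name)`.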
## Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
```
# Delete Cloud Storage objects that were created
! gsutil -m rm -r {BUCKET_NAME}
```
```
import numpy as np
import json
import scipy.interpolate
import matplotlib.pyplot as plt
import xml.etree.ElementTree as ET
from collections import OrderedDict
from pprint import pprint
camera="Xsens"
if camera=="Kinect":
form=".txt"
elif camera=="Xsens":
form=".txt"
file_name="./Données/%s/chris1/chris1_1_transformed%s"%(camera,form)
print(file_name)
with open(file_name) as f:
data = json.load(f, object_pairs_hook=OrderedDict)
Times=list(data['positions'].keys())
positions=data['positions']
fps=30
frames_count=len(list(positions.keys()))
def body_positions(body_Part,Times,positions):
x_bPart_values={}
y_bPart_values={}
z_bPart_values={}
tronq_times=[]
for time in Times:
bParts=list(positions[time].keys())
if body_Part in bParts:
x_bPart_values[time]=positions[time][body_Part][1]
y_bPart_values[time]=positions[time][body_Part][2]
z_bPart_values[time]=positions[time][body_Part][0]
tronq_times.append(time)
tronq_times=np.array(tronq_times)
x_bPart_values_list=list(x_bPart_values.values())
x_bPart_values_array=np.array(x_bPart_values_list)
y_bPart_values_list=list(y_bPart_values.values())
y_bPart_values_array=np.array(y_bPart_values_list)
    z_bPart_values_list=list(z_bPart_values.values())
z_bPart_values_array=np.array(z_bPart_values_list)
return(x_bPart_values_array,y_bPart_values_array,z_bPart_values_array,tronq_times)
def interpolation(x_bPart_values_array,y_bPart_values_array,z_bPart_values_array,Times_float,new_times_array):
tau = Times_float[-1] - Times_float[0]
#new_times_array = np.arange(0, tau, tau/len(y_bPart_values_array))
#new_times_array = np.arange(0, 1628/60, 1/30)
#new_times_array = np.arange(0, 2*1628/60, 1/30)
new_xbPart_values = np.zeros(new_times_array.shape)
new_ybPart_values = np.zeros(new_times_array.shape)
new_zbPart_values = np.zeros(new_times_array.shape)
y_gen = scipy.interpolate.interp1d(([t-Times_float[0] for t in Times_float]), y_bPart_values_array)
y_gen(new_times_array)
print(len(y_gen(new_times_array)))
for i in range(len(new_times_array)):
new_ybPart_values[i]=y_gen(new_times_array[i])
x_gen = scipy.interpolate.interp1d(([t-Times_float[0] for t in Times_float]), x_bPart_values_array)
x_gen(new_times_array)
for i in range(len(new_times_array)):
new_xbPart_values[i]=x_gen(new_times_array[i])
z_gen = scipy.interpolate.interp1d(([t-Times_float[0] for t in Times_float]), z_bPart_values_array)
z_gen(new_times_array)
for i in range(len(new_times_array)):
new_zbPart_values[i]=z_gen(new_times_array[i])
return(new_xbPart_values,new_ybPart_values,new_zbPart_values,list(new_times_array))
def new_body_positions(body_part,Times,positions,new_times_array):
x_bPart_values_array,y_bPart_values_array,z_bPart_values_array,tronq_times=body_positions(body_part,Times,positions)
Times_float=[]
for time in tronq_times:
Times_float.append(float(time))
new_xbPart_values,new_ybPart_values,new_zbPart_values,new_Times_float=interpolation(x_bPart_values_array,y_bPart_values_array,z_bPart_values_array,Times_float,new_times_array)
print("t ",len(new_Times_float),"y ",len(new_ybPart_values))
plt.plot(new_Times_float,new_ybPart_values,'blue')
plt.plot(sorted(Times_float),y_bPart_values_array,'red')
plt.title("y values after interpolation %s"%body_part)
plt.show()
plt.plot(new_Times_float,new_xbPart_values,'blue')
plt.title("x values after interpolation %s"%body_part)
plt.show()
new_bPart_Positions=np.stack((new_xbPart_values,new_ybPart_values,new_zbPart_values),axis=-1)
return(new_bPart_Positions,new_Times_float)
def stackPositions(body_Part,Times,positions):
    x_bPart_values_array,y_bPart_values_array,z_bPart_values_array,tronq_times=body_positions(body_Part,Times,positions)
All_positions=np.stack((x_bPart_values_array,y_bPart_values_array,z_bPart_values_array),axis=-1)
return(All_positions)
T=float(list(positions.keys())[-1])
bParts=list(list(positions.values())[0].keys())
T=27
#new_body_pos,new_Times_float_mSpine=new_body_positions('mSpine',Times,positions)
new_positions={}
#fps=frames_count/T-1
fps=30
new_times_array = np.arange(0, T, 1/fps)
for time in new_times_array:
new_positions[str(time)]={}
for bpart in bParts:
#if bpart=='mSpine':
# for i in range(len(new_body_pos)):
# new_positions[str(new_Times_float_mSpine[i])][bpart]=list(new_mSpine_positions[i])
#else:
new_body_pos=new_body_positions(bpart,Times,positions,new_times_array)[0]
for i in range(len(new_body_pos)):
new_positions[str(new_times_array[i])][bpart]=list(new_body_pos[i])
interpolated_data={}
interpolated_data['positions']=new_positions
with open("./Données/Xsens/chris1/chris1_1_interpolated.txt", 'w') as outfile:
json.dump(interpolated_data, outfile, sort_keys = True, indent = 4,
ensure_ascii = False)
Times
```
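The per-axis `interp1d` calls above all perform plain linear interpolation onto a uniform time grid. The core operation can be sketched in a few lines of pure Python, as a simplified stand-in for `scipy.interpolate.interp1d`, assuming strictly increasing sample times and query times that stay within their range:

```python
def lerp_resample(times, values, new_times):
    """Linearly interpolate (times, values) samples at each t in new_times.
    Assumes `times` is strictly increasing, `new_times` is non-decreasing,
    and every t lies within [times[0], times[-1]]."""
    out = []
    j = 0
    for t in new_times:
        while times[j + 1] < t:  # advance to the interval bracketing t
            j += 1
        t0, t1 = times[j], times[j + 1]
        v0, v1 = values[j], values[j + 1]
        out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

print(lerp_resample([0.0, 1.0, 2.0], [0.0, 10.0, 20.0], [0.5, 1.5]))  # → [5.0, 15.0]
```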
# Classification
```
PATH = "../../../Model_Selection/Classification/Data.csv"
random_state = 42
```
## Decision Tree Classification
### Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### Data Loading
```
dataset = pd.read_csv(PATH)
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
### Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=random_state)
```
### Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
### Model
```
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion='entropy', random_state=random_state)
classifier.fit(X_train, y_train)
```
### Predictions
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(-1,1)[:5], y_pred.reshape(-1,1)[:5]), axis=1))
```
### Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
### Accuracy Score
```
from sklearn.metrics import accuracy_score
dt_score = accuracy_score(y_test, y_pred)
print("Decision Tree Accuracy Score: ", dt_score)
```
## Kernel SVM
### Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### Data Loading
```
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
### Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=random_state)
```
### Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
### Model
```
from sklearn.svm import SVC
classifier = SVC(kernel="rbf", random_state=random_state)
classifier.fit(X_train, y_train)
```
### Predictions
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(-1,1)[:5], y_pred.reshape(-1,1)[:5]), axis=1))
```
### Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
### Accuracy Score
```
from sklearn.metrics import accuracy_score
ksvm_score = accuracy_score(y_test, y_pred)
print("Kernel SVM Accuracy Score: ", ksvm_score)
```
## K-Nearest Neighbor Classification
### Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### Data Loading
```
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
### Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=random_state)
```
### Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
### Model
```
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
classifier.fit(X_train, y_train)
```
### Predictions
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(-1,1)[:5], y_pred.reshape(-1,1)[:5]), axis=1))
```
### Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
### Accuracy Score
```
from sklearn.metrics import accuracy_score
knn_score = accuracy_score(y_test, y_pred)
print("KNN Accuracy Score: ", knn_score)
```
## Logistic Regression
### Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### Data Loading
```
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
### Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=random_state)
```
### Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
### Model
```
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=random_state)
classifier.fit(X_train, y_train)
```
### Predictions
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(-1,1)[:5], y_pred.reshape(-1,1)[:5]), axis=1))
```
### Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
### Accuracy Score
```
from sklearn.metrics import accuracy_score
lreg_score = accuracy_score(y_test, y_pred)
print("Logistic Regression Accuracy Score: ", lreg_score)
```
## Naïve Bayes Classification
### Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### Data Loading
```
dataset = pd.read_csv(PATH)
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
### Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=random_state)
```
### Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
### Model
```
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
```
### Predictions
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(-1,1)[:5], y_pred.reshape(-1,1)[:5]), axis=1))
```
### Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
### Accuracy Score
```
from sklearn.metrics import accuracy_score
nb_score = accuracy_score(y_test, y_pred)
print("Naïve Bayes Accuracy Score: ", nb_score)
```
## Random Forest
### Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### Data Loading
```
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
### Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=random_state)
```
### Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
### Model
```
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=100, criterion='entropy', random_state=random_state)
classifier.fit(X_train, y_train)
```
### Predictions
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(-1,1)[:5], y_pred.reshape(-1,1)[:5]), axis=1))
```
### Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
### Accuracy Score
```
from sklearn.metrics import accuracy_score
rf_score = accuracy_score(y_test, y_pred)
print("Random Forest Accuracy Score: ", rf_score)
```
## Support Vector Machine (Linear)
### Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
### Data Loading
```
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
### Train Test Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=random_state)
```
### Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
### Model
```
from sklearn.svm import SVC
classifier = SVC(kernel="linear", random_state=random_state)
classifier.fit(X_train, y_train)
```
### Predictions
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_test.reshape(-1,1)[:5], y_pred.reshape(-1,1)[:5]), axis=1))
```
### Confusion Matrix
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, range(len(cm[0])), range(len(cm[0])))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, annot_kws={"size": 16})
plt.show()
```
### Accuracy Score
```
from sklearn.metrics import accuracy_score
svm_score = accuracy_score(y_test, y_pred)
print("Linear SVM Accuracy Score: ", svm_score)
```
## Summary
```
print("Decision Tree Accuracy Score: \t\t\t %.5f" % dt_score)
print("Kernel SVM Accuracy Score: \t\t\t %.5f" % ksvm_score)
print("KNN Accuracy Score: \t\t\t\t %.5f" % knn_score)
print("Logistic Regression Accuracy Score: \t\t %.5f" % lreg_score)
print("Naive Bayes Classification Score: \t\t %.5f" % nb_score)
print("Random Forest Classification Score: \t\t %.5f" % rf_score)
print("Linear SVM Classification Score: \t\t %.5f" % svm_score)
X = ['Decision Tree', 'Kernel SVM', 'KNN', 'Logistic Regression', 'Naive Bayes', 'Random Forest', 'Linear SVM']
y = [dt_score, ksvm_score, knn_score, lreg_score, nb_score, rf_score, svm_score]
plt.figure(figsize=(24, 16))
plt.bar(X, y)  # a bar chart compares categorical model scores more clearly than a line plot
```
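The tab-aligned `print` statements above break as soon as a model name changes length. A small helper that pads names instead is more robust; this is a sketch, independent of the notebook's variables:

```python
def format_scores(scores, width=None):
    """Return one aligned 'name  score' line per (model, accuracy) pair."""
    width = width or max(len(name) for name in scores)
    return [f"{name.ljust(width)}  {score:.5f}" for name, score in scores.items()]

# Hypothetical scores, for illustration only
for line in format_scores({'Decision Tree': 0.9575, 'Kernel SVM': 0.93}):
    print(line)
```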
```
import matplotlib.pyplot as plt
import nltk
import numpy as np
import os
import pandas as pd
import seaborn as sns
from collections import defaultdict
from keras import backend as K
from keras.layers import Dense, LSTM, InputLayer, Bidirectional, TimeDistributed
from keras.layers import Embedding, Activation
from keras.models import Sequential
from keras.optimizers import Adam
from keras.preprocessing.sequence import pad_sequences
from nltk.corpus import brown
from sklearn.model_selection import train_test_split, KFold
#Downloading the dataset
nltk.download('brown')
nltk.download('universal_tagset')
data = nltk.corpus.brown.tagged_sents(tagset='universal')
#Downloading Glove Word Embeddings
!wget http://nlp.stanford.edu/data/glove.6B.zip
!unzip glove.6B.zip
#separating sentences and tags as separate sequences
sentences, sentence_tags =[], []
for tagged_sentence in data:
sentence, tags = zip(*tagged_sentence)
sentences.append(np.array(sentence))
sentence_tags.append(np.array(tags))
#Function to ignore the 0 padding while calculating accuracy
def ignore_class_accuracy(to_ignore=0):
def ignore_accuracy(y_true, y_pred):
y_true_class = K.argmax(y_true, axis=-1)
y_pred_class = K.argmax(y_pred, axis=-1)
ignore_mask = K.cast(K.not_equal(y_pred_class, to_ignore), 'int32')
matches = K.cast(K.equal(y_true_class, y_pred_class), 'int32') * ignore_mask
accuracy = K.sum(matches) / K.maximum(K.sum(ignore_mask), 1)
return accuracy
return ignore_accuracy
#Function to return one-hot encodings of tags
def one_hot_encoding(tag_sents, n_tags):
tag_one_hot_sent = []
for tag_sent in tag_sents:
tags_one_hot = []
for tag in tag_sent:
tags_one_hot.append(np.zeros(n_tags))
tags_one_hot[-1][tag] = 1.0
tag_one_hot_sent.append(tags_one_hot)
return np.array(tag_one_hot_sent)
#Function to convert output into tags
def logits_to_tags(tag_sentences, index):
tag_sequences = []
for tag_sentence in tag_sentences:
tag_sequence = []
for tag in tag_sentence:
# if index[np.argmax(tag)] == "-PAD-":
# break
# else:
tag_sequence.append(index[np.argmax(tag)])
tag_sequences.append(np.array(tag_sequence))
return tag_sequences
#Use the 300-dimensional GloVe word embeddings
glove_dir = './'
embeddings_index = {} #initialize dictionary
f = open(os.path.join(glove_dir, 'glove.6B.300d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
acc = []
conf_matrix = []
precision_fold = []
accuracy_fold = []
recall_fold = []
f1score_fold = []
tag_list=['-PAD-','CONJ','PRT','PRON','DET','VERB','ADP','ADJ','ADV','.','X','NUM','NOUN']
# The integers for each tag are the same as above
MAX_LENGTH = len(max(sentences, key=len)) # maximum words in a sentence
conf_mat_df = pd.DataFrame(columns=tag_list, index=tag_list)
conf_mat_df = conf_mat_df.fillna(0)
num_folds = 5
iteration = 1
kfold = KFold(num_folds)
for train_index, test_index in kfold.split(range(len(data))):
print("Iteration " + str(iteration) + " started.")
train_sentences = np.take(sentences,train_index).tolist()
test_sentences = np.take(sentences,test_index).tolist()
train_tags = np.take(sentence_tags,train_index).tolist()
test_tags = np.take(sentence_tags,test_index).tolist()
true_pos_tag = defaultdict(int)
false_pos_tag = defaultdict(int)
false_neg_tag = defaultdict(int)
precision_tags = defaultdict(float)
accuracy_tags = defaultdict(float)
recall_tags = defaultdict(float)
f1score_tags = defaultdict(float)
words, tags = set([]), set([])
#creating sets of words and tags
for sentence in train_sentences:
for word in sentence:
words.add(word.lower())
for tag_sent in train_tags:
for tag in tag_sent:
tags.add(tag)
#bulding vocabulary of words and tags
word2index = {word: i + 2 for i, word in enumerate(list(words))}
word2index['-PAD-'] = 0 # 0 is assigned for padding
word2index['-OOV-'] = 1 # 1 is assigned for unknown words
tag2index = {tag: i + 1 for i, tag in enumerate(list(tags))}
tag2index['-PAD-'] = 0 # 0 is assigned for padding
#Tokenising words and by their indexes in vocabulary
train_sentences_X, test_sentences_X, train_tags_y, test_tags_y = [], [], [], []
for sentence in train_sentences:
sent_int = []
for word in sentence:
try:
sent_int.append(word2index[word.lower()])
except KeyError:
sent_int.append(word2index['-OOV-'])
train_sentences_X.append(sent_int)
for sentence in test_sentences:
sent_int = []
for word in sentence:
try:
sent_int.append(word2index[word.lower()])
except KeyError:
sent_int.append(word2index['-OOV-'])
test_sentences_X.append(sent_int)
for sent_tags in train_tags:
train_tags_y.append([tag2index[tag] for tag in sent_tags])
for sent_tags in test_tags:
test_tags_y.append([tag2index[tag] for tag in sent_tags])
#Add padding to sentences
train_sentences_X = pad_sequences(train_sentences_X, maxlen=MAX_LENGTH, padding='post')
test_sentences_X = pad_sequences(test_sentences_X, maxlen=MAX_LENGTH, padding='post')
train_tags_y = pad_sequences(train_tags_y, maxlen=MAX_LENGTH, padding='post')
test_tags_y = pad_sequences(test_tags_y, maxlen=MAX_LENGTH, padding='post')
#Building the Embedding Layer
embedding_dim = 300
embedding_matrix = np.zeros((len(word2index), embedding_dim))
for word, i in word2index.items():
embedding_vector = embeddings_index.get(word)
if i < len(word2index):
if embedding_vector is not None:
# Words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
#Building the BiLSTM model
model = Sequential()
model.add(InputLayer(input_shape=(MAX_LENGTH, )))
model.add(Embedding(len(word2index), 300, weights=[embedding_matrix],trainable=False))
model.add(Bidirectional(LSTM(256, return_sequences=True)))
model.add(TimeDistributed(Dense(len(tag2index))))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer=Adam(0.001),
metrics=['accuracy', ignore_class_accuracy(0)])
model.summary()
one_hot_train_tags_y = one_hot_encoding(train_tags_y, len(tag2index))
#Training the model
model.fit(train_sentences_X, one_hot_encoding(train_tags_y, len(tag2index)),
batch_size=128, epochs= 9, validation_split=0.2)
scores = model.evaluate(test_sentences_X, one_hot_encoding(test_tags_y, len(tag2index)))
acc.append(scores[2]*100)
predictions = model.predict(test_sentences_X)
pred_sequence = logits_to_tags(predictions, {i: t for t, i in tag2index.items()})
#y_prob_class = model.predict_classes(test_sentences_X, verbose = 1)
for sen_num in range(len(test_tags)):
for i,tag in enumerate(test_tags[sen_num]):
conf_mat_df[tag][pred_sequence[sen_num][i]] +=1
if test_tags[sen_num][i] == pred_sequence[sen_num][i]:
true_pos_tag[tag] += 1
else:
false_neg_tag[tag] += 1
false_pos_tag[pred_sequence[sen_num][i]] += 1
for tag in tag_list[1:]:
precision_tags[tag] = true_pos_tag[tag] / (true_pos_tag[tag] + false_pos_tag[tag])
recall_tags[tag] = true_pos_tag[tag] / (true_pos_tag[tag] + false_neg_tag[tag])
f1score_tags[tag] = 2 * precision_tags[tag] * recall_tags[tag] / (precision_tags[tag] + recall_tags[tag])
accuracy_tags[tag] = true_pos_tag[tag] / (true_pos_tag[tag] + false_neg_tag[tag] + false_pos_tag[tag])
#conf_matrix.append(conf_mat_df)
accuracy_fold.append(accuracy_tags)
precision_fold.append(precision_tags)
recall_fold.append(recall_tags)
f1score_fold.append(f1score_tags)
if iteration == 1:
break
iteration += 1
len(predictions) #11468 X 180 test
train_tags[0]
predictions[0][2]
tot_acc = np.mean(acc)
tot_acc
conf_mat_df
import seaborn as sns
sns.heatmap(conf_mat_df)
sns.heatmap(conf_mat_df,
cmap='Blues')
avg_pre_tag = defaultdict(float)
avg_rec_tag = defaultdict(float)
avg_fsc_tag = defaultdict(float)
avg_acc_tag = defaultdict(float)
num_folds_completed = len(precision_fold)  # the k-fold loop above may break early, so average over completed folds only
for tag in tag_list[1:]:
    for i in range(num_folds_completed):
        avg_pre_tag[tag] += precision_fold[i][tag]
        avg_acc_tag[tag] += accuracy_fold[i][tag]
        avg_rec_tag[tag] += recall_fold[i][tag]
        avg_fsc_tag[tag] += f1score_fold[i][tag]
    print(tag + "\tprecision: " + str(avg_pre_tag[tag]/num_folds_completed) + "\tf1_score: " + str(avg_fsc_tag[tag]/num_folds_completed) + "\taccuracy: " + str(avg_acc_tag[tag]/num_folds_completed) + "\trecall: " + str(avg_rec_tag[tag]/num_folds_completed))
# #"i am a boy and I can run.".split()
# test_samples = test_sentences
# test_samples_X = []
# for s in test_samples:
# s_int = []
# for w in s:
# try:
# s_int.append(word2index[w.lower()])
# except KeyError:
# s_int.append(word2index['-OOV-'])
# test_samples_X.append(s_int)
# test_samples_X = pad_sequences(test_samples_X, maxlen=MAX_LENGTH, padding='post')
# predictions = model.predict(test_samples_X)
# pred_sequence = logits_to_tags(predictions, {i: t for t, i in tag2index.items()})
# # print(pred_sequence)
# # print(test_tags[0])
# #pred_sequence
```
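The per-tag precision/recall/F1 arithmetic inside the fold loop can be factored into a small pure function, which also makes the zero-division edge cases explicit. A sketch using the same true-positive/false-positive/false-negative counts as above:

```python
def tag_metrics(tp, fp, fn):
    """Precision, recall, and F1 from per-tag true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(tag_metrics(8, 2, 2))
```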
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'r1.0.0rc1'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
```
# **SPEAKER RECOGNITION**
Speaker Recognition (SR) is a broad research area that solves two major tasks: speaker identification (who is speaking?) and
speaker verification (is the speaker who they claim to be?). In this work, we focus on text-independent speaker recognition, where the identity of the speaker is determined by how the speech is spoken,
not by what is being said. Typically such SR systems operate on unconstrained speech utterances,
which are converted into vectors of fixed length, called speaker embeddings. Speaker embeddings are also used in
automatic speech recognition (ASR) and speech synthesis.
In this tutorial, we shall first train these embeddings on speaker-related datasets, and then get speaker embeddings from a pretrained network for a new dataset. Since Google Colab has very slow read-write speeds, I'll be demonstrating this tutorial using [an4](http://www.speech.cs.cmu.edu/databases/an4/).
Instead, if you'd like to try a bigger dataset like [hi-mia](https://arxiv.org/abs/1912.01231), use the [get_hi-mia-data.py](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/scripts/dataset_processing/get_hi-mia_data.py) script to download and extract the necessary files, and re-sample any audio that is not already at 16 kHz.
```
import os
NEMO_ROOT = os.getcwd()
print(NEMO_ROOT)
import glob
import subprocess
import tarfile
import wget
data_dir = os.path.join(NEMO_ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
```
Since an4 was not designed for speaker recognition, it gives us an opportunity to demonstrate how to generate the manifest files required for training. These methods can be applied to any dataset to produce similar training manifests.
First get an scp file(s) which has all the wav files with absolute paths for each of the train, dev, and test set. This can be easily done by the `find` bash command
```
!find {data_dir}/an4/wav/an4_clstk -iname "*.wav" > {data_dir}/an4/wav/an4_clstk/train_all.scp
```
Let's look at the first three lines of the train SCP file.
```
!head -n 3 {data_dir}/an4/wav/an4_clstk/train_all.scp
```
Now that we have the SCP file for the train set, we use `scp_to_manifest.py` to convert it to a manifest file, optionally splitting it into train \& dev sets (via the `--split` flag) for evaluating the model during training. The `--split` option is not needed for the test folder.
You also need to pass the `--id` argument: the index of the `/`-separated path field to use as the speaker label.
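For example, with `--id -2` the speaker label is the second-to-last path component. A quick sanity check in plain Python (the wav path below is a hypothetical an4-style path):

```python
# How `--id -2` selects the speaker label: it is the path field at
# index -2 when the path is split on '/'.
wav_path = "/data/an4/wav/an4_clstk/fash/an251-fash-b.wav"  # hypothetical path
speaker_label = wav_path.split("/")[-2]
print(speaker_label)  # fash
```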
After the download and conversion, your `data` folder should contain directories with manifest files as:
* `data/<path>/train.json`
* `data/<path>/dev.json`
* `data/<path>/train_all.json`
Each line in the manifest file describes a training sample - `audio_filepath` contains the path to the wav file, `duration` its duration in seconds, and `label` the speaker class label:
`{"audio_filepath": "<absolute path to dataset>data/an4/wav/an4test_clstk/menk/cen4-menk-b.wav", "duration": 3.9, "label": "menk"}`
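These manifest lines are plain JSON objects, one per line, so they can be inspected with the standard library (shown here on an in-memory copy of the example line, with a hypothetical absolute path filled in):

```python
import json

# Each manifest line is a standalone JSON object, so a manifest file can
# simply be read and parsed line by line.
line = '{"audio_filepath": "/data/an4/wav/an4test_clstk/menk/cen4-menk-b.wav", "duration": 3.9, "label": "menk"}'
sample = json.loads(line)
print(sample["label"], sample["duration"])  # menk 3.9
```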
```
if not os.path.exists('scripts'):
print("Downloading necessary scripts")
!mkdir -p scripts/speaker_recognition
!wget -P scripts/speaker_recognition/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/speaker_recognition/scp_to_manifest.py
!python {NEMO_ROOT}/scripts/speaker_recognition/scp_to_manifest.py --scp {data_dir}/an4/wav/an4_clstk/train_all.scp --id -2 --out {data_dir}/an4/wav/an4_clstk/all_manifest.json --split
```
Generate the scp for the test folder and then convert it to a manifest.
```
!find {data_dir}/an4/wav/an4test_clstk -iname "*.wav" > {data_dir}/an4/wav/an4test_clstk/test_all.scp
!python {NEMO_ROOT}/scripts/speaker_recognition/scp_to_manifest.py --scp {data_dir}/an4/wav/an4test_clstk/test_all.scp --id -2 --out {data_dir}/an4/wav/an4test_clstk/test.json
```
## Path to manifest files
```
train_manifest = os.path.join(data_dir,'an4/wav/an4_clstk/train.json')
validation_manifest = os.path.join(data_dir,'an4/wav/an4_clstk/dev.json')
test_manifest = os.path.join(data_dir,'an4/wav/an4_clstk/dev.json')  # an4 lacks a matching test split, so reuse dev
```
As the goal of most speaker-related systems is to produce speaker-level embeddings that distinguish one speaker from another, we first train these embeddings in an end-to-end
manner, optimizing a [QuartzNet](https://arxiv.org/abs/1910.10261)-based encoder model with cross-entropy loss.
We modify the decoder to produce fixed-size embeddings irrespective of the length of the input audio, using a mean- and variance-based statistics pooling method to obtain them.
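The statistics-pooling idea, collapsing a variable-length sequence of frame-level features into one fixed-size vector from the per-dimension mean and standard deviation, can be sketched in NumPy (an illustration of the concept, not NeMo's actual implementation):

```python
import numpy as np

def stats_pool(frames):
    """Collapse (num_frames, feat_dim) frame-level features into one
    fixed-size vector: per-dimension mean concatenated with std-dev."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

short_utt = np.random.randn(50, 512)   # a short utterance: 50 frames
long_utt = np.random.randn(900, 512)   # a long utterance: 900 frames
# Both utterances map to the same embedding size, regardless of length.
print(stats_pool(short_utt).shape, stats_pool(long_utt).shape)  # (1024,) (1024,)
```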
# Training
Import necessary packages
```
import nemo
# NeMo's ASR collection - this collection contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
from omegaconf import OmegaConf
```
## Model Configuration
The SpeakerNet Model is defined in a config file which declares multiple important sections.
They are:
1) model: All arguments that will relate to the Model - preprocessors, encoder, decoder, optimizer and schedulers, datasets, and any other related information
2) trainer: Any argument to be passed to PyTorch Lightning
```
# This line will print the entire config of sample SpeakerNet model
!mkdir conf
!wget -P conf https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/speaker_recognition/conf/SpeakerNet_recognition_3x2x512.yaml
MODEL_CONFIG = os.path.join(NEMO_ROOT,'conf/SpeakerNet_recognition_3x2x512.yaml')
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
```
## Setting up the datasets within the config
Notice the config dictionaries called train_ds, validation_ds, and test_ds. These configure the Dataset and DataLoader for each corresponding split.
```
print(OmegaConf.to_yaml(config.model.train_ds))
print(OmegaConf.to_yaml(config.model.validation_ds))
```
You will often notice that some configs have ??? in place of paths. This is used as a placeholder so that the user can change the value at a later time.
Let's add the paths to the manifests to the config above
Also, since an4 dataset doesn't have a test set of the same speakers used in training, we will use validation manifest as test manifest for demonstration purposes
```
config.model.train_ds.manifest_filepath = train_manifest
config.model.validation_ds.manifest_filepath = validation_manifest
config.model.test_ds.manifest_filepath = validation_manifest
```
Since we are training on the an4 dataset, which has 74 speaker labels in the training set, we need to set this in the decoder config
```
config.model.decoder.num_classes = 74
```
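Rather than hard-coding the class count, it can also be derived from the training manifest format shown earlier. A small sketch (the final commented line assumes the `train_manifest` path defined above):

```python
import json
import os
import tempfile

def count_speakers(manifest_path):
    """Count distinct 'label' values across the lines of a manifest file."""
    labels = set()
    with open(manifest_path) as f:
        for line in f:
            labels.add(json.loads(line)["label"])
    return len(labels)

# Demo on a tiny throwaway manifest with two distinct speakers.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    for label in ("fash", "menk", "fash"):
        f.write(json.dumps({"audio_filepath": "x.wav", "duration": 1.0, "label": label}) + "\n")
n = count_speakers(path)
os.remove(path)
print(n)  # 2
# For the real dataset: config.model.decoder.num_classes = count_speakers(train_manifest)
```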
## Building the PyTorch Lightning Trainer
NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem!
Let us first instantiate a Trainer object!
```
import torch
import pytorch_lightning as pl
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# Let us modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# Reduces maximum number of epochs to 5 for quick demonstration
config.trainer.max_epochs = 5
# Remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
```
## Setting up a NeMo Experiment
NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it!
```
from nemo.utils.exp_manager import exp_manager
log_dir = exp_manager(trainer, config.get("exp_manager", None))
# The log_dir provides a path to the current logging directory for easy access
print(log_dir)
```
## Building the SpeakerNet Model
SpeakerNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the EncDecSpeakerLabelModel as follows.
```
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel(cfg=config.model, trainer=trainer)
```
Before we begin training, let us first create a Tensorboard visualization to monitor progress
```
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {log_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
```
As any NeMo model is inherently a PyTorch Lightning Model, it can easily be trained in a single line - trainer.fit(model)!
We see below that the model begins to get modest scores on the validation set after just 5 epochs of training
```
trainer.fit(speaker_model)
```
This config was not designed or tuned for an4, so you may observe an unstable val_loss.
If you have a test manifest file, we can easily compute test accuracy by running
<pre><code>trainer.test(speaker_model, ckpt_path=None)
</code></pre>
## For Faster Training
We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.
For multi-GPU training, take a look at the [PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html)
For mixed-precision training, take a look at the [PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/stable/advanced/amp.html)
### Mixed precision:
<pre><code>trainer = Trainer(amp_level='O1', precision=16)
</code></pre>
### Trainer with a distributed backend:
<pre><code>trainer = Trainer(gpus=2, num_nodes=2, accelerator='ddp')
</code></pre>
Of course, you can combine these flags as well.
## Saving/Restoring a checkpoint
There are multiple ways to save and load models in NeMo. Since all NeMo models are inherently Lightning Modules, we can use the standard way that PyTorch Lightning saves and restores models.
NeMo also provides a more advanced model save/restore format, which encapsulates all the parts of the model that are required to restore that model for immediate use.
In this example, we will explore both ways of saving and restoring models, but we will focus on the PyTorch Lightning method.
## Saving and Restoring via PyTorch Lightning Checkpoints
When using NeMo for training, it is advisable to utilize the exp_manager framework. It is tasked with handling checkpointing and logging (Tensorboard as well as WandB optionally!), as well as dealing with multi-node and multi-GPU logging.
Since we utilized the exp_manager framework above, we have access to the directory where the checkpoints exist.
exp_manager with the default settings will save multiple checkpoints for us -
1) A few checkpoints from certain steps of training. They will have --val_loss= tags
2) A checkpoint at the last epoch of training, denoted by --last.
3) If the model finishes training, it will also have a --last checkpoint.
```
# Let us list all the checkpoints we have
checkpoint_dir = os.path.join(log_dir, 'checkpoints')
checkpoint_paths = list(glob.glob(os.path.join(checkpoint_dir, "*.ckpt")))
checkpoint_paths
final_checkpoint = list(filter(lambda x: "-last.ckpt" in x, checkpoint_paths))[0]
print(final_checkpoint)
```
## Restoring from a PyTorch Lightning checkpoint
To restore a model, use the `LightningModule.load_from_checkpoint()` class method.
```
restored_model = nemo_asr.models.EncDecSpeakerLabelModel.load_from_checkpoint(final_checkpoint)
```
# Finetuning
Since we don't have a new manifest file to fine-tune on, I will demonstrate using the test manifest file we created earlier.
The an4 test set contains a different set of speakers than the train set (10 in total), and since we didn't split it for validation, I will use the same file for validation as well.
To fine-tune, all we need to do is update the model config with these manifest paths and change the number of decoder classes, which creates a new decoder with the updated class count
```
test_manifest = os.path.join(data_dir,'an4/wav/an4test_clstk/test.json')
config.model.train_ds.manifest_filepath = test_manifest
config.model.validation_ds.manifest_filepath = test_manifest
config.model.decoder.num_classes = 10
```
Once you have set up the necessary model config parameters, all we need to do is call the `setup_finetune_model` method
```
restored_model.setup_finetune_model(config.model)
```
Now that the data is set up and the decoder replaced for fine-tuning, we just need to create a trainer and start training with a smaller learning rate for fewer epochs
```
# Setup the new trainer object
# Let us modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
trainer_config = OmegaConf.create(dict(
gpus=cuda,
max_epochs=5,
max_steps=None, # computed at runtime if not set
num_nodes=1,
accumulate_grad_batches=1,
checkpoint_callback=False, # Provided by exp_manager
logger=False, # Provided by exp_manager
log_every_n_steps=1, # Interval of logging.
val_check_interval=1.0, # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
))
print(OmegaConf.to_yaml(trainer_config))
trainer_finetune = pl.Trainer(**trainer_config)
```
## Setting the trainer to the restored model
```
restored_model.set_trainer(trainer_finetune)
log_dir_finetune = exp_manager(trainer_finetune, config.get("exp_manager", None))
print(log_dir_finetune)
```
## Setup optimizer + scheduler
For a fine-tuning experiment, let us set up the optimizer and scheduler!
We will use a much lower learning rate than before
```
import copy
optim_sched_cfg = copy.deepcopy(restored_model._cfg.optim)
# Struct mode prevents us from popping off elements from the config, so let us disable it
OmegaConf.set_struct(optim_sched_cfg, False)
# Let us change the maximum learning rate to previous minimum learning rate
optim_sched_cfg.lr = 0.001
# Set "min_lr" to lower value
optim_sched_cfg.sched.min_lr = 1e-4
print(OmegaConf.to_yaml(optim_sched_cfg))
# Now let us update the optimizer settings
restored_model.setup_optimization(optim_sched_cfg)
# We can also just directly replace the config inplace if we choose to
restored_model._cfg.optim = optim_sched_cfg
```
## Fine-tune training step
We fine-tune on the new set of an4 test speakers. Since the encoder has already been trained on acoustically similar an4 audio, convergence here is fast.
When fine-tuning on a truly different dataset, you will not see such a dramatic improvement in performance. However, it should still converge a little faster than training from scratch.
```
## Fine-tuning for 5 epochs
trainer_finetune.fit(restored_model)
```
# Saving .nemo file
Now we can save the whole config and the model parameters in a single .nemo file, and restore from it at any time
```
restored_model.save_to(os.path.join(log_dir_finetune, '..',"SpeakerNet.nemo"))
!ls {log_dir_finetune}/..
# restore from a saved model
restored_model_2 = nemo_asr.models.EncDecSpeakerLabelModel.restore_from(os.path.join(log_dir_finetune, '..', "SpeakerNet.nemo"))
```
# Speaker Verification
Training a speaker verification model is almost the same as training a speaker recognition model, with a change in the loss function. Angular loss is better suited to verification: the model is trained end to end with a loss that pushes the embedding clusters of different speakers apart by maximizing the angles between them.
To train for verification, we just need to toggle the `angular` flag: `config.model.decoder.params.angular = True`
Once set, the loss changes to angular loss and we can follow the same steps as above to train the model.
Note that the scale and margin values for the loss function are set at `config.model.loss.scale` and `config.model.loss.margin`
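The effect of an angular-margin loss can be illustrated with a tiny NumPy sketch: the target class's cosine similarity is penalized by a margin and all logits are scaled before the softmax, which forces training to separate speaker clusters by a wider angle. (Illustrative values only; NeMo reads the real scale and margin from the config fields named above, and its exact margin formulation may differ.)

```python
import numpy as np

def angular_margin_logits(cosines, target, scale=30.0, margin=0.2):
    """Penalise the target class's cosine score by `margin`, then scale.
    Scale/margin values here are hypothetical, for illustration only."""
    logits = cosines.copy()
    logits[target] -= margin
    return scale * logits

cos_scores = np.array([0.70, 0.65, 0.10])  # cosine similarity to each class
logits = angular_margin_logits(cos_scores, target=0)
# The true class is handicapped, so the model must enlarge the angular
# gap between speaker clusters to keep classifying correctly.
print(logits)
```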
## Extract Speaker Embeddings
Once you have a trained model (or one of our pretrained NeMo checkpoints), you can extract speaker embeddings for any speaker.
To demonstrate this, we shall use `nemo_asr.models.ExtractSpeakerEmbeddingsModel` with, say, 5 audio samples from our dev manifest set. This model exists purely for inference, to extract embeddings from a trained `.nemo` model.
```
verification_model = nemo_asr.models.ExtractSpeakerEmbeddingsModel.restore_from(os.path.join(log_dir_finetune, '..', 'SpeakerNet.nemo'))
```
Now, we need to pass the necessary manifest_filepath and params to set up the data loader for extracting embeddings
```
!head -5 {validation_manifest} > embeddings_manifest.json
config.model.train_ds
test_config = OmegaConf.create(dict(
manifest_filepath = os.path.join(NEMO_ROOT,'embeddings_manifest.json'),
sample_rate = 16000,
labels = None,
batch_size = 1,
shuffle = False,
time_length = 8,
embedding_dir='./'
))
print(OmegaConf.to_yaml(test_config))
verification_model.setup_test_data(test_config)
```
Once we set up the test data, we need to create a trainer and just call `trainer.test` to save the embeddings in `embedding_dir`
```
trainer = pl.Trainer(gpus=cuda,accelerator=None)
trainer.test(verification_model)
```
Embeddings are stored in a dict as key-value pairs, the key being a unique name generated from the `audio_filepath` of each sample in the manifest file.
```
!ls embeddings/
```
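With the embeddings on disk, verifying whether two utterances come from the same speaker reduces to a cosine-similarity comparison against a threshold. The sketch below uses made-up vectors, since the stored file name and dict layout depend on the NeMo version:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_a = np.array([0.2, 0.9, -0.1])    # hypothetical speaker embeddings
emb_b = np.array([0.25, 0.85, -0.05])
score = cosine_similarity(emb_a, emb_b)
accept = score > 0.7  # threshold would be tuned on a held-out trial list
print(round(score, 3), accept)
```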
# Neural Machine Translation
In this notebook we will implement a small transformer model for a machine translation task. Our model will be able to translate human-readable dates in any format to the YYYY-MM-DD format.
We will be using the `faker` module to generate our dataset.
```
%%capture
!pip install -q faker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm.auto import tqdm
import re, random
from faker import Faker
from babel.dates import format_date
pd.options.display.max_colwidth = None
sns.set_style('darkgrid')
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras import losses, callbacks, utils, models, Input
from tensorflow.keras import layers as L
```
# Data Generation
```
# Constants
class config():
    SAMPLE_SIZE = 1_000_000
DATE_FORMATS = [
'short', 'medium', 'long', 'full',
'd MMM YYY', 'd MMMM YYY', 'dd/MM/YYY',
'EE d, MMM YYY', 'EEEE d, MMMM YYY', 'd of MMMM YYY',
]
VALIDATION_SIZE = 0.1
BATCH_SIZE = 32
MAX_EPOCHS = 25
EMBED_DIM = 16
DENSE_DIM = 16
NUM_HEADS = 2
X_LEN = 30
Y_LEN = 14
NUM_ENCODER_TOKENS = 35
NUM_DECODER_TOKENS = 14
faker = Faker()
print('Sample dates for each format\n')
for fmt in set(config.DATE_FORMATS):
print(f'{fmt:20} => {format_date(faker.date_object(), format=fmt, locale="en")}')
# a utility data cleaning function
def clean_date(raw_date):
return raw_date.lower().replace(',', '')
# this function will generate our data in a data frame
def create_dataset(num_rows):
dataset = []
for i in tqdm(range(num_rows)):
dt = faker.date_object()
for fmt in config.DATE_FORMATS:
try:
date = format_date(dt, format=fmt, locale='en')
human_readable = clean_date(date)
machine_readable = f"@{dt.isoformat()}#" # adding a start token '@' and end token '#'
except AttributeError as e:
date = None
human_readable = None
machine_readable = None
if human_readable is not None and machine_readable is not None:
dataset.append((human_readable, machine_readable))
return pd.DataFrame(dataset, columns=['human_readable', 'machine_readable'])
# generate the dataset using the function defined above
dataset = create_dataset(config.SAMPLE_SIZE)
dataset = dataset.drop_duplicates(subset=['human_readable']).sample(frac=1.0).reset_index(drop=True)
print(dataset.shape)
dataset.head()
```
**Define tokenizers for both the languages (human readable and machine readable dates)**
```
human_tokenizer = Tokenizer(char_level=True)
machine_tokenizer = Tokenizer(char_level=True)
human_tokenizer.fit_on_texts(dataset['human_readable'].values)
machine_tokenizer.fit_on_texts(dataset['machine_readable'].values)
print(human_tokenizer.word_index)
print(machine_tokenizer.word_index)
# A utility function to clean and tokenize the text and then pad the sequence
def preprocess_input(date, tokenizer, max_len):
seq = [i[0] for i in tokenizer.texts_to_sequences(date.lower().replace(',', ''))]
seq = pad_sequences([seq], padding='post', maxlen=max_len)[0]
return seq
```
**Preprocessing the data**
```
%%time
X = np.array(list(map(lambda x: preprocess_input(x, human_tokenizer, config.X_LEN), dataset['human_readable'])))
y = np.array(list(map(lambda x: preprocess_input(x, machine_tokenizer, config.Y_LEN), dataset['machine_readable'])))
X.shape, y.shape
# Cross Validation
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=config.VALIDATION_SIZE, random_state=19)
```
A utility function to generate batches of data
* encoder input data would be source language text
* decoder input data would be target language text
* decoder output data would be target language text shifted one timestep forward (one-hot encoded)
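The one-timestep shift described above can be checked on a toy target sequence:

```python
import numpy as np

# Toy target batch: ids 1 and 2 stand in for the start '@' and end '#' tokens.
y = np.array([[1, 5, 6, 7, 2]])
decoder_input = y                      # fed to the decoder as-is
decoder_output = np.zeros_like(y)
decoder_output[:, :-1] = y[:, 1:]      # shifted one timestep forward
print(decoder_input[0])   # [1 5 6 7 2]
print(decoder_output[0])  # [5 6 7 2 0]
```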
```
def generate_batch(X, y, batch_size=config.BATCH_SIZE):
''' Generate a batch of data '''
while True:
for j in range(0, len(X), batch_size):
encoder_input_data = X[j:j+batch_size]
decoder_input_data = y[j:j+batch_size]
output = y[j:j+batch_size]
decoder_output_data = np.zeros_like(output)
decoder_output_data[:,:-1] = output[:, 1:]
decoder_target_data = utils.to_categorical(decoder_output_data, num_classes=config.NUM_DECODER_TOKENS)
yield([encoder_input_data, decoder_input_data], decoder_target_data)
```
# Model
Let's define the key components for our model
* Positional Embedding Layer
* Encoder Block
* Decoder Block
```
class PositionalEmbedding(L.Layer):
def __init__(self, sequence_length, input_dim, output_dim, **kwargs):
super().__init__(**kwargs)
self.token_embeddings = L.Embedding(input_dim, output_dim)
self.position_embeddings = L.Embedding(sequence_length, output_dim)
self.sequence_length = sequence_length
self.input_dim = input_dim
self.output_dim = output_dim
def call(self, inputs):
length = tf.shape(inputs)[-1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_tokens = self.token_embeddings(inputs)
embedded_positions = self.position_embeddings(positions)
return embedded_tokens + embedded_positions
def get_config(self):
config = super().get_config()
config.update({
"output_dim": self.output_dim,
"sequence_length": self.sequence_length,
"input_dim": self.input_dim,
})
return config
class TransformerEncoder(L.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = L.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
self.dense_proj = keras.Sequential([L.Dense(dense_dim, activation='relu'), L.Dense(embed_dim)])
self.layernorm1 = L.LayerNormalization()
self.layernorm2 = L.LayerNormalization()
def call(self, inputs, mask=None):
if mask is not None:
            mask = mask[:, tf.newaxis, :]
attention_output = self.attention(inputs, inputs, attention_mask=mask)
proj_input = self.layernorm1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm2(proj_input + proj_output)
def get_config(self):
        config = super().get_config()
config.update({
"embed_dim": self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim
})
return config
class TransformerDecoder(L.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention_1 = L.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
self.attention_2 = L.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
self.dense_proj = keras.Sequential([
L.Dense(dense_dim, activation='relu'), L.Dense(embed_dim)
])
self.layernorm_1 = L.LayerNormalization()
self.layernorm_2 = L.LayerNormalization()
self.layernorm_3 = L.LayerNormalization()
        self.supports_masking = False
def get_config(self):
config = super().get_config()
config.update({
'embed_dim': self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim,
})
return config
    def call(self, inputs, encoder_outputs, mask=None):
        # Masks are intentionally not applied in this simplified decoder
        # (attention_mask=None below), so any incoming mask is ignored.
        attention_output_1 = self.attention_1(
            query=inputs, value=inputs, key=inputs, attention_mask=None
        )
attention_output_1 = self.layernorm_1(inputs + attention_output_1)
attention_output_2 = self.attention_2(
query=attention_output_1, value=encoder_outputs, key=encoder_outputs, attention_mask=None
)
attention_output_2 = self.layernorm_2(
attention_output_1 + attention_output_2
)
proj_output = self.dense_proj(attention_output_2)
return self.layernorm_3(attention_output_2 + proj_output)
```
Let's define our model:
* Positional embedding is applied to the encoder inputs
* The embedded encoder inputs are fed to the Transformer encoder
* Positional embedding is applied to the decoder inputs
* The encoder outputs, along with the embedded decoder inputs, are fed to the Transformer decoder
* Dropout is applied to the decoder block's outputs before the final output layer
```
encoder_inputs = keras.Input(shape=(None, ), dtype="int64", name="human")
x = PositionalEmbedding(config.X_LEN, config.NUM_ENCODER_TOKENS, config.EMBED_DIM)(encoder_inputs)
encoder_outputs = TransformerEncoder(config.EMBED_DIM, config.DENSE_DIM, config.NUM_HEADS)(x)
decoder_inputs = keras.Input(shape=(None, ), dtype="int64", name="machine" )
x = PositionalEmbedding(config.Y_LEN, config.NUM_DECODER_TOKENS, config.EMBED_DIM)(decoder_inputs)
x = TransformerDecoder(config.EMBED_DIM, config.DENSE_DIM, config.NUM_HEADS)(x, encoder_outputs)
x = L.Dropout(0.5)(x)
decoder_outputs = L.Dense(config.NUM_DECODER_TOKENS, activation="softmax")(x)
transformer = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
transformer.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
transformer.summary()
utils.plot_model(transformer, show_shapes=True, expand_nested=True)
es = callbacks.EarlyStopping(monitor="val_loss", patience=3, verbose=1, restore_best_weights=True, min_delta=1e-4)
rlp = callbacks.ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1)
%%capture training_log
history = transformer.fit(
generate_batch(X_train, y_train), steps_per_epoch = np.ceil(len(X_train)/config.BATCH_SIZE),
validation_data=generate_batch(X_valid, y_valid), validation_steps=np.ceil(len(X_valid)/config.BATCH_SIZE),
epochs=config.MAX_EPOCHS, callbacks=[es, rlp],
)
with open('training.log', 'w') as f: f.write(training_log.stdout)
fig, ax = plt.subplots(figsize=(20, 6))
pd.DataFrame(history.history).plot(ax=ax)
del history
```
# Evaluation
**Let's see our model in action**
```
# this function converts the token ids to token text
def word_for_id(integer, tokenizer):
for word, index in tokenizer.word_index.items():
if index == integer:
return word
return ""
# this function is used to generate predictions:
# the target sequence is initialised with the start token,
# then predictions are taken from the model iteratively
def predict_sequence(source):
target_seq = "@"
for i in range(14):
tok_target_seq = preprocess_input(target_seq, machine_tokenizer, config.Y_LEN).reshape(1, -1)
output_tokens = transformer.predict([source, tok_target_seq])
token_index = np.argmax(output_tokens[0, i, :])
token = word_for_id(token_index, machine_tokenizer)
target_seq += token
return target_seq
# this function decodes the complete sequence of token ids into text
def decode_sequence(tokenizer, source):
target = list()
for i in source:
word = word_for_id(i, tokenizer)
        if word == "":
break
target.append(word)
return ''.join(target)
query_text = 'saturday 19 september 1998'
query = np.array(list(map(lambda x: preprocess_input(x, human_tokenizer, config.X_LEN), [query_text])))
print('SOURCE :', query_text)
print('PREDICTION :', predict_sequence(query))
```
```
import matplotlib.pyplot as plt
import numpy as np
import nsta.tcspcdata, nsta.tadata, nsta.analysis
import importlib
nsta.analysis = importlib.reload(nsta.analysis)
ta_data = nsta.tadata.TAData()
ta_data.delta_od = "../../H2TPPF20/2019-02-14/H2TPPF20_CHX_312nm_12ns_2001ms_10ps/TA/0001/H2TPPF20_CHX_312_NS_2D_DeltaOD_uncor.dat"
ta_data.missed_shots = "../../H2TPPF20/2019-02-14/H2TPPF20_CHX_312nm_12ns_2001ms_10ps/TA/0001/H2TPPF20_CHX_312_NS_missed_shots.dat"
ta_data.status_numbers = "../../H2TPPF20/2019-02-14/H2TPPF20_CHX_312nm_12ns_2001ms_10ps/TA/0001/H2TPPF20_CHX_312_NS_cmbstatusnumber.dat"
tcspc_data = nsta.tcspcdata.TCSPCData()
tcspc_data.delays_directory = "../../H2TPPF20/2019-02-14/H2TPPF20_CHX_312nm_12ns_2001ms_10ps/TCSPC"
ta_data.missed_shots
print("num spectra before processing:", ta_data.num_spectra)
ana = nsta.analysis.TATCSCPAnalysis(ta_data, tcspc_data)
ana._delay_offsets = np.zeros(ana.ta_data.num_steps, dtype=int) - 1  # plain int: np.int was removed from NumPy
ana.process_data()
plt.figure()
plt.plot(ana.delay_statistics[0], ana.delay_statistics[1])
plt.show()
t, od = ana.plot_transient(260, 340)
plt.figure()
plt.plot(t, od)
plt.show()
ana._delay_offsets = np.zeros(ana.ta_data.num_steps, dtype=int) - 1
ana.process_data()
_, od0 = ana.plot_transient(260, 340)
t = np.arange(*od0.shape) # x-axis as index
ana._delay_offsets = np.zeros(ana.ta_data.num_steps, dtype=int) - 2
ana.process_data()
_, od1 = ana.plot_transient(260, 340)
ana._delay_offsets = np.zeros(ana.ta_data.num_steps, dtype=int) - 3
ana.process_data()
_, od2 = ana.plot_transient(260, 340)
plt.figure()
plt.plot(t, od0, t, od1, t, od2)
plt.show()
```
# Attempt 5: Iterative brute-force line regression
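Stripped of the measurement specifics, the idea of this attempt is a coordinate-wise brute-force search: for each step, try a small set of candidate offsets and keep the one that minimizes a squared-error cost. A toy sketch with a made-up cost function:

```python
import numpy as np

def brute_force_offsets(cost, num_steps, candidates=(0, -1, -2)):
    """Coordinate-wise brute force: for each position, try every candidate
    value and keep the one that minimises the global cost function."""
    offsets = np.full(num_steps, candidates[1], dtype=int)
    for i in range(num_steps):
        errs = []
        for val in candidates:
            trial = offsets.copy()
            trial[i] = val
            errs.append(cost(trial))
        offsets[i] = candidates[int(np.argmin(errs))]
    return offsets

# Toy cost: squared distance to a known target vector.
target = np.array([-1, 0, -2, -1])
best = brute_force_offsets(lambda o: np.sum((o - target) ** 2), 4)
print(best)  # recovers the target exactly: [-1  0 -2 -1]
```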
```
# Fit with linregress
ana._delay_offsets = np.zeros(ana.ta_data.num_steps, dtype=int) - 1
ana.process_data()
_, od = ana.plot_transient(260, 340)
t = np.arange(*od.shape) # x-axis as index
from scipy import stats
s = slice(15, 75)
slope, intercept, _, _, _ = stats.linregress(t[s], od[s])
fit = intercept + slope * t
plt.figure()
plt.plot(t, od, t, fit)
plt.show()
from scipy.optimize import curve_fit
def err(offsets, ana, slc):
    # Relies on the globals t, s and slope defined by the linregress fit above.
    ana._delay_offsets = offsets
ana.process_data()
_, od = ana.plot_transient(260, 340)
popt, _ = curve_fit(lambda x, b: slope*x + b, t[s], od[s])
target = popt[0] + slope * t
err = target - od
err = err[slc]
print('err =', np.sum(err**2))
return np.sum(err**2)
offsets = np.zeros(ana.ta_data.num_steps, dtype=int) - 1
slc = slice(90, 140)
test_values = [0, -1, -2]
for i in range(ana.ta_data.num_steps):
new_offsets = []
for val in test_values:
new = offsets.copy()
new[i] = val
new_offsets.append(new)
new_offsets = np.array(new_offsets)
new_errs = [err(off, ana, slc) for off in new_offsets]
min_err_idx = new_errs.index(min(new_errs))
offsets[i] = test_values[min_err_idx]
print(offsets)
print(repr(offsets))
offsets_from_v5 = np.array([-1, 0, -2, 0, 0, -2, 0, -2, -1, 0, -1, -1, -1, -1, -1, -2, 0,
-1, 0, -1, -1, -1, -1, -1, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, 0, 0, -1, -1, -1, -2, -1, -1, -1, 0, -1, -1, -1, -1, -1, 0,
-1, -1, 0, -1, -1, -1, -1, -1, -1, 0, -1, -1, -1, -1, -1, 0, -1,
-1, -1, -1, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
0, -1, -1, -1, -1, -1, -1, 0, -1, -1, -1, -1, 0, -1, -1, -1, -1,
-2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
0, -1, -1, -1, -1, 0, -1, -1, 0, 0, -1, 0, -1, -1, -1, 0, -1,
-1, 0, -1, -1, -1, -1, -1, 0, -1, -1])
ana._delay_offsets = offsets_from_v5
ana.process_data()
_, od = ana.plot_transient(260, 340)
t = np.arange(*od.shape) # x-axis as index
plt.figure()
plt.plot(t, od)
plt.show()
np.savetxt('H2TPPF20_CHX_312nm_12ns_2001ms_10psH2TPPF20_CHX_312nm_12ns_2001ms_10ps_delays.dat', ana.tcspc_data_processed, fmt='%.3e')
np.savetxt('H2TPPF20_CHX_312nm_12ns_2001ms_10psH2TPPF20_CHX_312nm_12ns_2001ms_10ps_2d_data.dat', ana.ta_data_processed, fmt='%.12e')
ana.delay_statistics
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex SDK: AutoML training text entity extraction model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex SDK to create text entity extraction models and do online prediction using a Google Cloud [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users) model.
### Dataset
The dataset used for this tutorial is the [NCBI Disease Research Abstracts dataset](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) from [National Center for Biotechnology Information](https://www.ncbi.nlm.nih.gov/). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
### Objective
In this tutorial, you create an AutoML text entity extraction model and deploy for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Cloud Console.
The steps performed include:
- Create a Vertex `Dataset` resource.
- Train the model.
- View the model evaluation.
- Deploy the `Model` resource to a serving `Endpoint` resource.
- Make a prediction.
- Undeploy the `Model`.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
- The Cloud Storage SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.
4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.
5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
6. Open this notebook in the Jupyter Notebook Dashboard.
## Installation
Install the latest version of Vertex SDK for Python.
```
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of the *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
```
### Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
This tutorial does not require a GPU runtime.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)
4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
Click **Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click **Create**. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
```
import google.cloud.aiplatform as aip
```
## Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
```
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
```
# Tutorial
Now you are ready to start creating your own AutoML text entity extraction model.
#### Location of Cloud Storage training data.
Now set the variable `IMPORT_FILE` to the location of the JSONL index file in Cloud Storage.
```
IMPORT_FILE = "gs://cloud-samples-data/language/ucaip_ten_dataset.jsonl"
```
#### Quick peek at your data
This tutorial uses a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
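The shell peek above can also be done in pure Python. Here is a minimal sketch using an in-memory JSONL string; the field name `a` is just a placeholder, not the real AutoML text-extraction schema:

```python
import io
import json

# Simulate a tiny JSONL index file in memory; real entries follow the
# AutoML text-extraction schema, which is richer than this placeholder.
jsonl_file = io.StringIO('{"a": 1}\n{"a": 2}\n{"a": 3}\n')

lines = jsonl_file.read().splitlines()
num_examples = len(lines)          # the same count that `wc -l` reports
first_row = json.loads(lines[0])   # each line is one standalone JSON object
```

Counting lines is a reliable example count precisely because JSONL stores exactly one JSON object per line.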
### Create the Dataset
Next, create the `Dataset` resource using the `create` method for the `TextDataset` class, which takes the following parameters:
- `display_name`: The human readable name for the `Dataset` resource.
- `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource.
- `import_schema_uri`: The data labeling schema for the data items.
This operation may take several minutes.
```
dataset = aip.TextDataset.create(
display_name="NCBI Biomedical" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.text.extraction,
)
print(dataset.resource_name)
```
### Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
#### Create training pipeline
An AutoML training pipeline is created with the `AutoMLTextTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the `TrainingJob` resource.
- `prediction_type`: The type task to train the model for.
- `classification`: A text classification model.
- `sentiment`: A text sentiment analysis model.
- `extraction`: A text entity extraction model.
- `multi_label`: If a classification task, whether single (False) or multi-labeled (True).
- `sentiment_max`: If a sentiment analysis task, the maximum sentiment value.
The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
```
dag = aip.AutoMLTextTrainingJob(
display_name="biomedical_" + TIMESTAMP, prediction_type="extraction"
)
print(dag)
```
#### Run the training pipeline
Next, you run the DAG to start the training job by invoking the method `run`, with the following parameters:
- `dataset`: The `Dataset` resource to train the model.
- `model_display_name`: The human readable name for the trained model.
- `training_fraction_split`: The percentage of the dataset to use for training.
- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).
- `validation_fraction_split`: The percentage of the dataset to use for validation.
The `run` method when completed returns the `Model` resource.
The execution of the training pipeline will take up to 20 minutes.
```
model = dag.run(
dataset=dataset,
model_display_name="biomedical_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
)
```
## Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
```
# Get model resource ID
models = aip.Model.list(filter="display_name=biomedical_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
```
## Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method.
```
endpoint = model.deploy()
```
## Send an online prediction request
Send an online prediction request to your deployed model.
### Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
```
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
```
### Make the prediction
Now that your `Model` resource is deployed to an `Endpoint` resource, you can do online predictions by sending prediction requests to the `Endpoint` resource.
#### Request
The format of each instance is:
{ 'content': text_string }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
#### Response
The response from the predict() call is a Python dictionary with the following entries:
- `ids`: The internal assigned unique identifiers for each prediction request.
- `displayNames`: The class names for each entity.
- `confidences`: The predicted confidence, between 0 and 1, per entity.
- `textSegmentStartOffsets`: The character offset in the text to the start of the entity.
- `textSegmentEndOffsets`: The character offset in the text to the end of the entity.
- `deployed_model_id`: The Vertex AI identifier for the deployed `Model` resource which did the predictions.
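As a rough illustration of how those parallel arrays line up, here is a hedged sketch that walks a hand-built dictionary shaped like the response fields above; the values are invented for illustration, not real model output:

```python
# Hand-built stand-in for a prediction result; field names follow the
# response description above, values are made up for illustration only.
prediction = {
    "ids": ["1", "2"],
    "displayNames": ["Disease", "Disease"],
    "confidences": [0.97, 0.84],
    "textSegmentStartOffsets": [18, 52],
    "textSegmentEndOffsets": [42, 65],
}

# Zip the parallel arrays into one (label, confidence, start, end) tuple
# per detected entity.
entities = list(
    zip(
        prediction["displayNames"],
        prediction["confidences"],
        prediction["textSegmentStartOffsets"],
        prediction["textSegmentEndOffsets"],
    )
)
```

Each tuple then carries everything needed to slice the matching entity span out of the original input text.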
```
instances_list = [{"content": test_item}]
prediction = endpoint.predict(instances_list)
print(prediction)
```
## Undeploy the model
When you are done doing predictions, you undeploy the model from the `Endpoint` resource. This deprovisions all compute resources and ends billing for the deployed model.
```
endpoint.undeploy_all()
```
# Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- AutoML Training Job
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
<img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" width="500 px" align="center">
# _*Qiskit Aqua: Experiment with classification problem with quantum-enhanced support vector machines*_
This notebook is based on an official notebook by Qiskit team, available at https://github.com/qiskit/qiskit-tutorial under the [Apache License 2.0](https://github.com/Qiskit/qiskit-tutorial/blob/master/LICENSE) license.
The original notebook was developed by Vojtech Havlicek<sup>[1]</sup>, Kristan Temme<sup>[1]</sup>, Antonio Córcoles<sup>[1]</sup>, Peng Liu<sup>[1]</sup>, Richard Chen<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup> and Jay Gambetta<sup>[1]</sup> (<sup>[1]</sup>IBMQ)
Your **TASK** is to execute every step of this notebook, learning to use qiskit-aqua and understanding how this SVM implementation can be used for breast cancer classification.
### Introduction
Classification algorithms and methods for machine learning are essential for pattern recognition and data mining applications. Well-known techniques such as support vector machines and neural networks have blossomed over the last two decades as a result of spectacular advances in classical hardware computational capabilities and speed. This progress in computing power made it possible to apply techniques that were theoretically developed towards the middle of the 20th century to classification problems that were becoming increasingly challenging.
A key concept in classification methods is that of a kernel. Data cannot typically be separated by a hyperplane in its original space. A common technique used to find such a hyperplane consists of applying a non-linear transformation function to the data. This function is called a feature map, as it transforms the raw features, or measurable properties, of the phenomenon or subject under study. Classifying in this new feature space (and, in fact, in any other space, including the raw original one) is nothing more than seeing how close data points are to each other. This is the same as computing the inner product for each pair of data points in the set. So, in fact, we do not need to compute the non-linear feature map for each datum, but only the inner product of each pair of data points in the new feature space. This collection of inner products is called the kernel, and it is perfectly possible to have feature maps that are hard to compute but whose kernels are not.
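To make the kernel idea concrete, here is a small classical sketch: a Gaussian (RBF) kernel computes inner products in an implicit feature space without ever constructing that space explicitly. This is a classical stand-in for the idea, not the quantum feature map used later in this notebook:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # Inner product in an implicit (infinite-dimensional) feature space,
    # computed directly from the raw data points.
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

data = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

# The kernel (Gram) matrix: pairwise similarities of all data points.
K = [[rbf_kernel(a, b) for b in data] for a in data]
```

The quantum approach below follows the same recipe, but estimates the entries of this matrix on a quantum processor for a feature map that is believed to be hard to compute classically.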
In this notebook we provide an example of a classification problem that requires a feature map for which computing the kernel is not efficient classically -this means that the required computational resources are expected to scale exponentially with the size of the problem. We show how this can be solved in a quantum processor by a direct estimation of the kernel in the feature space. The method we used falls in the category of what is called supervised learning, consisting of a training phase (where the kernel is calculated and the support vectors obtained) and a test or classification phase (where new unlabelled data is classified according to the solution found in the training phase).
References and additional details:
[1] Vojtech Havlicek, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M. Chow, and Jay M. Gambetta, "Supervised learning with quantum enhanced feature spaces," [arXiv: 1804.11326](https://arxiv.org/pdf/1804.11326.pdf)
```
from qsvm_datasets import *
from qiskit.aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
from qiskit.aqua.input import get_input_instance
from qiskit.aqua import run_algorithm
# setup aqua logging
import logging
from qiskit.aqua._logging import set_logging_config, build_logging_config
# set_logging_config(build_logging_config(logging.DEBUG)) # choose INFO, DEBUG to see the log
```
### [Optional] Set up a token to run the experiment on a real device
If you would like to run the experiment on a real device, you need to set up your account first.
Note: If you have not stored your token yet, use `IBMQ.save_accounts()` to store it first.
```
from qiskit import IBMQ
IBMQ.load_account()
```
First we prepare the dataset, which is used for training, testing, and finally prediction.
*Note: You can easily switch to a different dataset, such as the Breast Cancer dataset, by replacing 'ad_hoc_data' with 'Breast_cancer' below.*
```
feature_dim=2 # we support feature_dim 2 or 3
sample_Total, training_input, test_input, class_labels = ad_hoc_data(
training_size=20, test_size=10, n=feature_dim, gap=0.3, PLOT_DATA=True
)
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
print(class_to_label)
```
With the dataset ready we initialize the necessary inputs for the algorithm:
- the input dictionary (params)
- the input object containing the dataset info (algo_input).
```
params = {
'problem': {'name': 'classification', 'random_seed': 10598},
'algorithm': {
'name': 'QSVM'
},
'backend': {'name': 'qasm_simulator', 'shots': 1024},
'feature_map': {'name': 'SecondOrderExpansion', 'depth': 2, 'entanglement': 'linear'}
}
algo_input = get_input_instance('ClassificationInput')
algo_input.training_dataset = training_input
algo_input.test_dataset = test_input
algo_input.datapoints = datapoints[0] # 0 is data, 1 is labels
```
With everything setup, we can now run the algorithm.
For the testing, the result includes the details and the success ratio.
For the prediction, the result includes the predicted labels.
```
result = run_algorithm(params, algo_input)
print("testing success ratio: ", result['testing_accuracy'])
print("predicted classes:", result['predicted_classes'])
print("kernel matrix during the training:")
kernel_matrix = result['kernel_matrix_training']
img = plt.imshow(np.asmatrix(kernel_matrix),interpolation='nearest',origin='upper',cmap='bone_r')
plt.show()
```
### The breast cancer dataset
Now we run our algorithm on a real-world dataset: the breast cancer dataset, using the first two principal components as features.
```
sample_Total, training_input, test_input, class_labels = Breast_cancer(
training_size=20, test_size=10, n=2, PLOT_DATA=True
)
# n =2 is the dimension of each data point
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
label_to_class = {label:class_name for class_name, label in class_to_label.items()}
print(class_to_label, label_to_class)
algo_input = get_input_instance('ClassificationInput')
algo_input.training_dataset = training_input
algo_input.test_dataset = test_input
algo_input.datapoints = datapoints[0]
result = run_algorithm(params, algo_input)
print("testing success ratio: ", result['testing_accuracy'])
print("ground truth: {}".format(map_label_to_class_name(datapoints[1], label_to_class)))
print("predicted: {}".format(result['predicted_classes']))
print("kernel matrix during the training:")
kernel_matrix = result['kernel_matrix_training']
img = plt.imshow(np.asmatrix(kernel_matrix),interpolation='nearest',origin='upper',cmap='bone_r')
plt.show()
```
```
# this file creates a csv of all lat long (to the 1000th place) points in manhattan.
# load necessary packages
import geopandas as gpd
from shapely.geometry import Point, Polygon
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from itertools import product
%matplotlib inline
# read in NYC boroughs shapes
boroughs = gpd.read_file('/Users/allisonhonold/ds0805/walk_proj/walk_risk_engine/data/shapes/boroughboundaries.shp')
print(boroughs.loc[4, 'geometry'])
# understand how bounds works, see borough bounds
# for index in boroughs.index:
# polygons = boroughs.loc[index, 'geometry']
# print(index, polygons.bounds)
# get manhattan geometry
man_poly = boroughs.loc[4, 'geometry']
man_poly.bounds
# set dimensions of grid to test for presence in manhattan
minx = round(man_poly.bounds[0], 3) - .001
miny = round(man_poly.bounds[1], 3) - .001
maxx = round(man_poly.bounds[2], 3) + .001
maxy = round(man_poly.bounds[3], 3) + .001
def create_pt_grid(minx, miny, maxx, maxy):
"""creates a grid of points (lat/longs) in the range specified. lat longs
are rounded to hundredth place
Args:
minx: minimum longitude
miny: minimum latitude
maxx: maximum longitude
maxy: maximum latitude
Returns: DataFrame of all lat/long combinations in region
"""
lats = range(int(miny*1000), int(maxy*1000 +1))
longs = range(int(minx*1000), int(maxx*1000 +1))
ll_df = pd.DataFrame(product(lats, longs),
columns=['lat1000', 'long1000'])
ll_df['longitude'] = ll_df['long1000']/1000
ll_df['latitude'] = ll_df['lat1000']/1000
# ll_df['geometry'] = [Point(x, y) for x, y in zip(ll_df['longitude'],
# ll_df['latitude'])]
return ll_df
pts_df = create_pt_grid(minx, miny, maxx, maxy)
pts_gdf = gpd.GeoDataFrame(pts_df, geometry=gpd.points_from_xy(pts_df['longitude'], pts_df['latitude']))
pts_gdf.head()
pts_gdf['in_man'] = np.nan
man = boroughs.loc[4,'geometry']
for index in range(pts_gdf.shape[0]):
if index%5000==0:
print(index)
# if np.isnan(geo_df.loc[df_index, 'in_man']): # updated col from 'in_nyc'
# geo_df.loc[df_index, 'in_man'] = False
pt = pts_gdf.loc[index, 'geometry']
if pt.intersects(man):
pts_gdf.loc[index, 'in_man'] = True
else:
pts_gdf.loc[index, 'in_man'] = False
man_gdf = pts_gdf.loc[pts_gdf['in_man']==True]
man_gdf.head()
test_df = pd.read_csv('/Users/allisonhonold/ds0805/walk_proj/walk_risk_engine/data/csv/man_lat_longs.csv')
test_df['latitude'].nunique()
man_df.head()
man_df.loc[(man_df['lat1000'] == 40753) & (man_df['long1000'] == -73989)]
man_gdf['longitude'].nunique()
test_df['longitude'].nunique()
man_gdf.info()
test_df.info()
man_df.loc[((man_df['latitude'] == 40.753) & (man_df['longitude'] == -73.989)), :]
test_df['latitude'] = test_df['latitude']*1000/1000
test_df['longitude'] = test_df['longitude'] *1000/1000
test_df['latitude'] = test_df['lat1000']/1000
test_df['longitude'] = test_df['long1000']/1000
test_df.loc[((test_df['latitude'] == 40.753) & (test_df['longitude'] == -73.989))]
test_df.loc[((test_df['lat1000'] == 40753) & (test_df['long1000'] == -73989)), :]
test_df.head()
test_df.loc[test_df['Unnamed: 0'] == 10284, :]
test_df['latitude'] = test_df['latitude'].astype(float)
test_df.loc[test_df['Unnamed: 0'] == 10284, 'latitude']
type(40.753)
# creating a polygon to reduce the load of checking points, esp if doing all of ny
# poly2 =
"""p5 = Point(-74.053, 40.688) #40.688310, -74.053245\n",
"p6 = Point(-74.06, 40.688)\n",
"p7 = Point(-74.06, 40.919)\n",
"p8 = Point(-73.913, 40.919) #40.919607, -73.953894\n",
"pointlist2 = [p5, p6, p7, p8]\n",
"poly2 = gpd.GeoSeries(Polygon([[p.x, p.y] for p in pointlist2]))"
"p1 = Point(-74.06, 40.659)\n",
"p2 = Point(-74.270, 40.655)\n",
"p3 = Point(-74.27, 40.914)\n",
"p4 = Point(-74.831, 40.914)\n",
"pointlist = [p1, p2, p3, p4]\n",
"poly = gpd.GeoSeries(geometry.Polygon([[p.x, p.y] for p in pointList]))"""
# reducing the load of checking points for areas north of staten island
# geo_df.loc[((geo_df['latitude'] > 40.659) & (geo_df['longitude'] < -74.06)), 'in_nyc'] = False
```
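The grid logic above can be sketched without pandas. This minimal version mirrors `create_pt_grid`, but uses `round()` instead of `int()` to avoid float-truncation surprises at the thousandth-of-a-degree spacing (a deliberate deviation from the original, noted here so the two versions can differ at the edges):

```python
from itertools import product

def grid_points(minx, miny, maxx, maxy):
    # Enumerate every (lat, long) pair at 0.001-degree spacing, inclusive
    # of both bounds, mirroring create_pt_grid above (but with round()).
    lats = range(round(miny * 1000), round(maxy * 1000) + 1)
    longs = range(round(minx * 1000), round(maxx * 1000) + 1)
    return [(lat / 1000, lng / 1000) for lat, lng in product(lats, longs)]

# A tiny 3x3 test window near lower Manhattan's bounding box.
pts = grid_points(-74.002, 40.0, -74.0, 40.002)
```

Each point in this grid would then be tested for membership in the borough polygon, exactly as the `intersects` loop above does.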
Code for all boroughs:
```
for df_index in range(40000, 60000):
    if df_index % 500 == 0:
        print(df_index)
    if np.isnan(geo_df.loc[df_index, 'in_nyc']):
        geo_df.loc[df_index, 'in_nyc'] = False
        pt = gpd.GeoSeries(geo_df.loc[df_index, 'geometry'])
        if not pt.intersects(poly2).any():
            for b_index in boroughs.index:
                polygons = boroughs.loc[b_index, 'geometry']
                if pt.intersects(polygons).any():
                    geo_df.loc[df_index, 'in_nyc'] = True
                    break
geo_df[40000:60000].to_csv('geo_df_60k.csv')
```
```
pt = Point(-73.941368, 40.815709)
pt.intersects(man)
geo_df.shape
man_gdf = gpd.GeoDataFrame(geo_df.loc[geo_df['in_man']==True])
man_gdf.head()
man_df = pd.DataFrame(man_gdf.loc[:, 'latitude':'longitude'].reset_index(drop=True))
type(man_df)
man_df.loc[((man_df['latitude'] == 40.753) & (man_df['longitude'] == -73.989)), :]
man_df['lat1000'] = [int(x*1000) for x in man_df['latitude']]
man_df['long1000'] = [int(x*1000) for x in man_df['longitude']]
man_df.head()
man_gdf.to_csv('/Users/allisonhonold/ds0805/walk_proj/walk_risk_engine/data/csv/man_lat_longs.csv',
index=False)
fig, ax = plt.subplots(figsize=(20,20))
# boroughs.plot(ax=ax, color='white', edgecolor='black')
ax.scatter(man_gdf['latitude'], man_gdf['longitude'], alpha=1)
man_gdf.shape
import pandas as pd
man_pts = pd.read_csv('man_lat_longs.csv')
all_pts = man_pts.head(10)
all_pts.loc[:, 'on_path'] = True
all_pts = all_pts.drop(columns=['Unnamed: 0', 'in_man'])
all_pts
manhattan_pts = man_pts.loc[:, ['latitude', 'longitude', 'in_man']]
all_pts = pd.merge(all_pts, manhattan_pts,
on=['latitude', 'longitude'],
how='left')
all_pts
pd.DataFrame(all_pts.loc[(all_pts['on_path']==True)
& (all_pts['in_man']==True)])
nyc_pts = pd.read_csv('ny_pts.csv')
nyc_pts.shape
```
```
%matplotlib inline
%load_ext fortranmagic
import sys; sys.path.append('..')
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
mpl.rc('figure', figsize=(12, 7))
ran_the_first_cell = True
jan2017 = pd.to_datetime(['2017-01-03 00:00:00+00:00',
'2017-01-04 00:00:00+00:00',
'2017-01-05 00:00:00+00:00',
'2017-01-06 00:00:00+00:00',
'2017-01-09 00:00:00+00:00',
'2017-01-10 00:00:00+00:00',
'2017-01-11 00:00:00+00:00',
'2017-01-12 00:00:00+00:00',
'2017-01-13 00:00:00+00:00',
'2017-01-17 00:00:00+00:00',
'2017-01-18 00:00:00+00:00',
'2017-01-19 00:00:00+00:00',
'2017-01-20 00:00:00+00:00',
'2017-01-23 00:00:00+00:00',
'2017-01-24 00:00:00+00:00',
'2017-01-25 00:00:00+00:00',
'2017-01-26 00:00:00+00:00',
'2017-01-27 00:00:00+00:00',
'2017-01-30 00:00:00+00:00',
'2017-01-31 00:00:00+00:00',
'2017-02-01 00:00:00+00:00'])
calendar = jan2017.values.astype('datetime64[D]')
event_dates = pd.to_datetime(['2017-01-06 00:00:00+00:00',
'2017-01-07 00:00:00+00:00',
'2017-01-08 00:00:00+00:00']).values.astype('datetime64[D]')
event_values = np.array([10, 15, 20])
```
<center>
<h1>The PyData Toolbox</h1>
<h3>Scott Sanderson (Twitter: @scottbsanderson, GitHub: ssanderson)</h3>
<h3><a href="https://github.com/ssanderson/pydata-toolbox">https://github.com/ssanderson/pydata-toolbox</a></h3>
</center>
# About Me:
<img src="images/me.jpg" alt="Drawing" style="width: 300px;"/>
- Senior Engineer at [Quantopian](www.quantopian.com)
- Background in Mathematics and Philosophy
- **Twitter:** [@scottbsanderson](https://twitter.com/scottbsanderson)
- **GitHub:** [ssanderson](github.com/ssanderson)
## Outline
- Built-in Data Structures
- Numpy `array`
- Pandas `Series`/`DataFrame`
- Plotting and "Real-World" Analyses
# Data Structures
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms
> will almost always be self-evident. Data structures, not algorithms, are central to programming.

- *Notes on Programming in C*, by Rob Pike.
# Lists
```
assert ran_the_first_cell, "Oh noes!"
l = [1, 'two', 3.0, 4, 5.0, "six"]
l
# Lists can be indexed like C-style arrays.
first = l[0]
second = l[1]
print("first:", first)
print("second:", second)
# Negative indexing gives elements relative to the end of the list.
last = l[-1]
penultimate = l[-2]
print("last:", last)
print("second to last:", penultimate)
# Lists can also be sliced, which makes a copy of elements between
# start (inclusive) and stop (exclusive)
sublist = l[1:3]
sublist
# l[:N] is equivalent to l[0:N].
first_three = l[:3]
first_three
# l[3:] is equivalent to l[3:len(l)].
after_three = l[3:]
after_three
# There's also a third parameter, "step", which gets every Nth element.
l = ['a', 'b', 'c', 'd', 'e', 'f', 'g','h']
l[1:7:2]
# This is a cute way to reverse a list.
l[::-1]
# Lists can be grown efficiently (in O(1) amortized time).
l = [1, 2, 3, 4, 5]
print("Before:", l)
l.append('six')
print("After:", l)
# Comprehensions let us perform elementwise computations.
l = [1, 2, 3, 4, 5]
[x * 2 for x in l]
```
## Review: Python Lists
- Zero-indexed sequence of arbitrary Python values.
- Slicing syntax: `l[start:stop:step]` copies elements at regular intervals from `start` to `stop`.
- Efficient (`O(1)`) appends and removes from end.
- Comprehension syntax: `[f(x) for x in l if cond(x)]`.
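The filtered form of the comprehension syntax can be sketched with a small example (the data here is made up):

```
# The trailing "if" keeps only elements that satisfy the condition.
l = [1, 2, 3, 4, 5, 6]
evens_doubled = [x * 2 for x in l if x % 2 == 0]
evens_doubled
```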
# Dictionaries
```
# Dictionaries are key-value mappings.
philosophers = {'David': 'Hume', 'Immanuel': 'Kant', 'Bertrand': 'Russell'}
philosophers
# Like lists, dictionaries are size-mutable.
philosophers['Ludwig'] = 'Wittgenstein'
philosophers
del philosophers['David']
philosophers
# No slicing.
philosophers['Bertrand':'Immanuel']
```
## Review: Python Dictionaries
- Unordered key-value mapping from (almost) arbitrary keys to arbitrary values.
- Efficient (`O(1)`) lookup, insertion, and deletion.
- No slicing (would require a notion of order).
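Two dictionary idioms worth knowing, sketched here as a small aside (not from the slides): `.get()` for lookups with a default, and the dict-comprehension analogue of list comprehensions.

```
philosophers = {'David': 'Hume', 'Immanuel': 'Kant', 'Bertrand': 'Russell'}
# .get() returns a default instead of raising KeyError for missing keys.
print(philosophers.get('Ludwig', '<unknown>'))
# Dict comprehensions mirror list comprehensions.
lengths = {first: len(last) for first, last in philosophers.items()}
lengths
```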
<center><img src="images/pacino.gif" alt="Drawing" style="width: 100%;"/></center>
```
# Suppose we have some matrices...
a = [[1, 2, 3],
[2, 3, 4],
[5, 6, 7],
[1, 1, 1]]
b = [[1, 2, 3, 4],
[2, 3, 4, 5]]
def matmul(A, B):
"""Multiply matrix A by matrix B."""
rows_out = len(A)
cols_out = len(B[0])
out = [[0 for col in range(cols_out)] for row in range(rows_out)]
for i in range(rows_out):
for j in range(cols_out):
for k in range(len(B)):
out[i][j] += A[i][k] * B[k][j]
return out
```
<center><img src="images/gross.gif" alt="Drawing" style="width: 50%;"/></center>
```
%%time
matmul(a, b)
import random
def random_matrix(m, n):
out = []
for row in range(m):
out.append([random.random() for _ in range(n)])
return out
randm = random_matrix(2, 3)
randm
%%time
randa = random_matrix(600, 100)
randb = random_matrix(100, 600)
x = matmul(randa, randb)
# Maybe that's not that bad? Let's try a simpler case.
def python_dot_product(xs, ys):
return sum(x * y for x, y in zip(xs, ys))
%%fortran
subroutine fortran_dot_product(xs, ys, result)
double precision, intent(in) :: xs(:)
double precision, intent(in) :: ys(:)
double precision, intent(out) :: result
result = sum(xs * ys)
end
list_data = [float(i) for i in range(100000)]
array_data = np.array(list_data)
%%time
python_dot_product(list_data, list_data)
%%time
fortran_dot_product(array_data, array_data)
```
<center><img src="images/sloth.gif" alt="Drawing" style="width: 1080px;"/></center>
## Why is the Python Version so Much Slower?
```
# Dynamic typing.
def mul_elemwise(xs, ys):
return [x * y for x, y in zip(xs, ys)]
mul_elemwise([1, 2, 3, 4], [1, 2 + 0j, 3.0, 'four'])
#[type(x) for x in _]
# Interpretation overhead.
source_code = 'a + b * c'
bytecode = compile(source_code, '', 'eval')
import dis; dis.dis(bytecode)
```
## Why is the Python Version so Slow?
- Dynamic typing means that every single operation requires dispatching on the input type.
- Having an interpreter means that every instruction is fetched and dispatched at runtime.
- Other overheads:
- Arbitrary-size integers.
- Reference-counted garbage collection.
> This is the paradox that we have to work with when we're doing scientific or numerically-intensive Python. What makes Python fast for development -- this high-level, interpreted, and dynamically-typed aspect of the language -- is exactly what makes it slow for code execution.
- Jake VanderPlas, [*Losing Your Loops: Fast Numerical Computing with NumPy*](https://www.youtube.com/watch?v=EEUXKG97YRw)
# What Do We Do?
<center><img src="images/runaway.gif" alt="Drawing" style="width: 50%;"/></center>
<center><img src="images/thisisfine.gif" alt="Drawing" style="width: 1080px;"/></center>
- Python is slow for numerical computation because it performs dynamic dispatch on every operation we perform...
- ...but often, we just want to do the same thing over and over in a loop!
- If we don't need Python's dynamicism, we don't want to pay (much) for it.
- **Idea:** Dispatch **once per operation** instead of **once per element**.
```
import numpy as np
data = np.array([1, 2, 3, 4])
data
data + data
%%time
# Naive dot product
(array_data * array_data).sum()
%%time
# Built-in dot product.
array_data.dot(array_data)
%%time
fortran_dot_product(array_data, array_data)
# Numpy won't allow us to write a string into an int array.
data[0] = "foo"
# We also can't grow an array once it's created.
data.append(3)
# We **can** reshape an array though.
two_by_two = data.reshape(2, 2)
two_by_two
```
Numpy arrays are:
- Fixed-type
- Size-immutable
- Multi-dimensional
- Fast\*
\* If you use them correctly.
# What's in an Array?
```
arr = np.array([1, 2, 3, 4, 5, 6], dtype='int16').reshape(2, 3)
print("Array:\n", arr, sep='')
print("===========")
print("DType:", arr.dtype)
print("Shape:", arr.shape)
print("Strides:", arr.strides)
print("Data:", arr.data.tobytes())
```
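One practical consequence of the strides-based layout, sketched below: transposing an array only swaps the shape and strides, so it is a constant-time view onto the same buffer rather than a copy.

```
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6], dtype='int16').reshape(2, 3)
# int16 is 2 bytes: stepping a row skips 3 * 2 = 6 bytes, a column 2 bytes.
print(arr.strides, arr.T.strides)
# The transpose is a view: writing through it changes the original array.
arr.T[0, 1] = 99
arr
```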
# Core Operations
- Vectorized **ufuncs** for elementwise operations.
- Fancy indexing and masking for selection and filtering.
- Aggregations across axes.
- Broadcasting
# UFuncs
UFuncs (universal functions) are functions that operate elementwise on one or more arrays.
```
data = np.arange(15).reshape(3, 5)
data
# Binary operators.
data * data
# Unary functions.
np.sqrt(data)
# Comparison operations
(data % 3) == 0
# Boolean combinators.
((data % 2) == 0) & ((data % 3) == 0)
# as of python 3.5, @ is matrix-multiply
data @ data.T
```
# UFuncs Review
- UFuncs provide efficient elementwise operations applied across one or more arrays.
- Arithmetic Operators (`+`, `*`, `/`)
- Comparisons (`==`, `>`, `!=`)
- Boolean Operators (`&`, `|`, `^`)
- Trigonometric Functions (`sin`, `cos`)
- Transcendental Functions (`exp`, `log`)
# Selections
We often want to perform an operation on just a subset of our data.
```
sines = np.sin(np.linspace(0, 3.14, 10))
cosines = np.cos(np.linspace(0, 3.14, 10))
sines
# Slicing works with the same semantics as Python lists.
sines[0]
sines[:3] # First three elements
sines[5:] # Elements from 5 on.
sines[::2] # Every other element.
# More interesting: we can index with boolean arrays to filter by a predicate.
print("sines:\n", sines)
print("sines > 0.5:\n", sines > 0.5)
print("sines[sines > 0.5]:\n", sines[sines > 0.5])
# We index with lists/arrays of integers to select values at those indices.
print(sines)
sines[[0, 4, 7]]
# Index arrays are often used for sorting one or more arrays.
unsorted_data = np.array([1, 3, 2, 12, -1, 5, 2])
sort_indices = np.argsort(unsorted_data)
sort_indices
unsorted_data[sort_indices]
market_caps = np.array([12, 6, 10, 5, 6]) # Presumably in dollars?
assets = np.array(['A', 'B', 'C', 'D', 'E'])
# Sort assets by market cap by using the permutation that would sort market caps on ``assets``.
sort_by_mcap = np.argsort(market_caps)
assets[sort_by_mcap]
# Indexers are also useful for aligning data.
print("Dates:\n", repr(event_dates))
print("Values:\n", repr(event_values))
print("Calendar:\n", repr(calendar))
print("Raw Dates:", event_dates)
print("Indices:", calendar.searchsorted(event_dates))
print("Forward-Filled Dates:", calendar[calendar.searchsorted(event_dates)])
```
On multi-dimensional arrays, we can slice along each axis independently.
```
data = np.arange(25).reshape(5, 5)
data
data[:2, :2] # First two rows and first two columns.
data[:2, [0, -1]] # First two rows, first and last columns.
data[(data[:, 0] % 2) == 0] # Rows where the first column is divisible by two.
```
# Selections Review
- Indexing with an integer removes a dimension.
- Slicing operations work on Numpy arrays the same way they do on lists.
- Indexing with a boolean array filters to True locations.
- Indexing with an integer array selects indices along an axis.
- Multidimensional arrays can apply selections independently along different axes.
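The first bullet deserves a quick sketch, recreating the 5x5 `data` array from the cell above: an integer index removes a dimension, while an equivalent length-1 slice keeps it.

```
import numpy as np

data = np.arange(25).reshape(5, 5)
# An integer index drops the indexed dimension...
print(data[0].shape)   # a 1-D row
# ...while a length-1 slice keeps it.
print(data[:1].shape)  # a 2-D array with one row
# Mixing the two: one integer plus one slice gives a 1-D result.
data[0, 1:3]
```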
## Reductions
Functions that reduce an array to a scalar.
$Var(X) = \frac{1}{N}\sum_{i=1}^N (x_i - \bar{x})^2$
```
def variance(x):
return ((x - x.mean()) ** 2).sum() / len(x)
variance(np.random.standard_normal(1000))
```
- `sum()` and `mean()` are both **reductions**.
- In the simplest case, we use these to reduce an entire array into a single value...
```
data = np.arange(30)
data.mean()
```
- ...but we can do more interesting things with multi-dimensional arrays.
```
data = np.arange(30).reshape(3, 10)
data
data.mean()
data.mean(axis=0)
data.mean(axis=1)
```
## Reductions Review
- Reductions allow us to perform efficient aggregations over arrays.
- We can do aggregations over a single axis to collapse a single dimension.
- Many built-in reductions (`mean`, `sum`, `min`, `max`, `median`, ...).
# Broadcasting
```
row = np.array([1, 2, 3, 4])
column = np.array([[1], [2], [3]])
print("Row:\n", row, sep='')
print("Column:\n", column, sep='')
row + column
```
<center><img src="images/broadcasting.png" alt="Drawing" style="width: 60%;"/></center>
<h5>Source: http://www.scipy-lectures.org/_images/numpy_broadcasting.png</h5>
```
# Broadcasting is particularly useful in conjunction with reductions.
print("Data:\n", data, sep='')
print("Mean:\n", data.mean(axis=0), sep='')
print("Data - Mean:\n", data - data.mean(axis=0), sep='')
```
# Broadcasting Review
- Numpy operations can work on arrays of different dimensions as long as the arrays' shapes are still "compatible".
- Broadcasting works by "tiling" the smaller array along the missing dimension.
- The result of a broadcasted operation is always at least as large in each dimension as the largest array in that dimension.
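As a small sketch of the compatibility rule: a `(3, 10)` array and the `(10,)` row of its column means broadcast by tiling the row down the larger array, and `np.broadcast_shapes` checks compatibility without computing anything.

```
import numpy as np

data = np.arange(30).reshape(3, 10).astype(float)
# (3, 10) minus (10,): the row of column means is tiled down the 3 rows.
demeaned = data - data.mean(axis=0)
print(demeaned.shape)
# Shape compatibility can be checked without allocating any result.
np.broadcast_shapes((3, 1), (1, 10))
```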
# Numpy Review
- Numerical algorithms are slow in pure Python because the overhead of dynamic dispatch dominates the runtime.
- Numpy solves this problem by:
1. Imposing additional restrictions on the contents of arrays.
2. Moving the inner loops of our algorithms into compiled C code.
- Using Numpy effectively often requires reworking an algorithm to use vectorized operations instead of for-loops, but the resulting operations are usually simpler, clearer, and faster than the pure Python equivalent.
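The loop-to-vectorized rework can be sketched on a toy problem (summing the multiples of 3 below 10,000):

```
import numpy as np

data = np.arange(10_000)

# Pure-Python loop: type dispatch happens on every element.
total = 0
for x in data:
    if x % 3 == 0:
        total += x

# Vectorized rework: one mask, one fancy-index, one reduction.
vec_total = data[data % 3 == 0].sum()
vec_total == total
```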
<center><img src="images/unicorn.jpg" alt="Drawing" style="width: 75%;"/></center>
Numpy is great for many things, but...
- Sometimes our data is equipped with a natural set of **labels**:
- Dates/Times
- Stock Tickers
- Field Names (e.g. Open/High/Low/Close)
- Sometimes we have **more than one type of data** that we want to keep grouped together.
- Tables with a mix of real-valued and categorical data.
- Sometimes we have **missing** data, which we need to ignore, fill, or otherwise work around.
<center><img src="images/panda-wrangling.gif" alt="Drawing" style="width: 75%;"/></center>
<center><img src="images/pandas_logo.png" alt="Drawing" style="width: 75%;"/></center>
Pandas extends Numpy with more complex data structures:
- `Series`: 1-dimensional, homogeneously-typed, labelled array.
- `DataFrame`: 2-dimensional, semi-homogeneous, labelled table.
Pandas also provides many utilities for:
- Input/Output
- Data Cleaning
- Rolling Algorithms
- Plotting
# Selection in Pandas
```
s = pd.Series(index=['a', 'b', 'c', 'd', 'e'], data=[1, 2, 3, 4, 5])
s
# There are two pieces to a Series: the index and the values.
print("The index is:", s.index)
print("The values are:", s.values)
# We can look up values out of a Series by position...
s.iloc[0]
# ... or by label.
s.loc['a']
# Slicing works as expected...
s.iloc[:2]
# ...but it works with labels too!
s.loc[:'c']
# Fancy indexing works the same as in numpy.
s.iloc[[0, -1]]
# As does boolean masking.
s.loc[s > 2]
# Element-wise operations are aligned by index.
other_s = pd.Series({'a': 10.0, 'c': 20.0, 'd': 30.0, 'z': 40.0})
other_s
s + other_s
# We can fill in missing values with fillna().
(s + other_s).fillna(0.0)
# Most real datasets are read in from an external file format.
aapl = pd.read_csv('AAPL.csv', parse_dates=['Date'], index_col='Date')
aapl.head()
# Slicing generalizes to two dimensions as you'd expect:
aapl.iloc[:2, :2]
aapl.loc[pd.Timestamp('2010-02-01'):pd.Timestamp('2010-02-04'), ['Close', 'Volume']]
```
# Rolling Operations
<center><img src="images/rolling.gif" alt="Drawing" style="width: 75%;"/></center>
```
aapl.rolling(5)[['Close', 'Adj Close']].mean().plot();
# Drop `Volume`, since it's way bigger than everything else.
aapl.drop('Volume', axis=1).resample('2W').max().plot();
# 30-day rolling exponentially-weighted stddev of returns.
aapl['Close'].pct_change().ewm(span=30).std().plot();
```
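What `.rolling(n)` computes can be sketched on a toy series: each output aggregates the current element and the `n - 1` preceding ones, and incomplete windows yield `NaN`.

```
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
# The first two windows are incomplete, so they produce NaN;
# the rest are means over three consecutive values.
s.rolling(3).mean()
```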
# "Real World" Data
```
from demos.avocados import read_avocadata
avocados = read_avocadata('2014', '2016')
avocados.head()
# Unlike numpy arrays, pandas DataFrames can have a different dtype for each column.
avocados.dtypes
# What's the regional average price of a HASS avocado every day?
hass = avocados[avocados.Variety == 'HASS']
hass.groupby(['Date', 'Region'])['Weighted Avg Price'].mean().unstack().ffill().plot();
def _organic_spread(group):
if len(group.columns) != 2:
return pd.Series(index=group.index, data=0.0)
is_organic = group.columns.get_level_values('Organic').values.astype(bool)
organics = group.loc[:, is_organic].squeeze()
non_organics = group.loc[:, ~is_organic].squeeze()
diff = organics - non_organics
return diff
def organic_spread_by_region(df):
"""What's the difference between the price of an organic
and non-organic avocado within each region?
"""
return (
df
.set_index(['Date', 'Region', 'Organic'])
['Weighted Avg Price']
.unstack(level=['Region', 'Organic'])
.ffill()
.groupby(level='Region', axis=1)
.apply(_organic_spread)
)
organic_spread_by_region(hass).plot();
plt.gca().set_title("Daily Regional Organic Spread");
plt.legend(bbox_to_anchor=(1, 1));
spread_correlation = organic_spread_by_region(hass).corr()
spread_correlation
import seaborn as sns
grid = sns.clustermap(spread_correlation, annot=True)
fig = grid.fig
axes = fig.axes
ax = axes[2]
ax.set_xticklabels(ax.get_xticklabels(), rotation=45);
```
# Pandas Review
- Pandas extends numpy with more complex data structures and algorithms.
- If you understand numpy, you understand 90% of pandas.
- `groupby`, `set_index`, and `unstack` are powerful tools for working with categorical data.
- Avocado prices are surprisingly interesting :)
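The `groupby`/`set_index`/`unstack` pattern can be sketched on made-up data (the column names here are illustrative, loosely mirroring the avocado example):

```
import pandas as pd

df = pd.DataFrame({
    'Region': ['East', 'East', 'West', 'West'],
    'Organic': [True, False, True, False],
    'Price': [2.0, 1.5, 2.5, 1.8],
})
# set_index + unstack pivots a long table into a wide one...
wide = df.set_index(['Region', 'Organic'])['Price'].unstack()
print(wide)
# ...while groupby aggregates within each label.
df.groupby('Region')['Price'].mean()
```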
# Thanks!
# Will there be Enough Snow in the 2018 and 2022 Winter Olympics?
In this notebook we will analyze temperature and snow depth in the upcoming Winter Olympics locations - South Korea, __[PyeongChang 2018](https://en.wikipedia.org/wiki/2018_Winter_Olympics)__ and China, __[Beijing 2022](https://en.wikipedia.org/wiki/2022_Winter_Olympics)__.
PyeongChang is located in a temperate continental climate area with rainfall throughout the year. However, the Olympics are held in February, which is one of the driest months in PyeongChang. __[Beijing](https://en.wikipedia.org/wiki/Beijing#Climate)__ at the same time has high humidity mostly during summertime due to the East Asian monsoon, and colder, drier and windier winters influenced by the Siberian anticyclone.
For making these analyses we added __[NCEP Climate Forecast System Reanalysis (CFSR)](http://data.planetos.com/datasets/ncep_cfsr_global_03)__ dataset to the __[Planet OS Datahub](http://data.planetos.com/)__. CFSR has 69 variables including temperature, soil data, pressure, wind components and so on, and it covers several decades — from 1979 to 2010. It’s a global, high resolution, coupled atmosphere-ocean-land surface-sea ice system designed to provide the best estimate of the state of these coupled domains.
In this demo we will be using two variables, _surface temperature_ and _temperature at 2m height_ to analyze the snow conditions in PyeongChang and Beijing.
So, we will do the following:
1) use the Planet OS package API to fetch data;
2) look at average temperatures in January;
3) plot average snow depth in February;
4) find out on how many days in January temperatures were below -3.5 C.
```
%matplotlib inline
import numpy as np
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from po_data_process import comparison_bar_chart, make_comparison_plot
import warnings
warnings.filterwarnings("ignore")
```
<font color='red'>Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!</font>
```
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
```
At first, we need to define the dataset name and variables we want to use.
```
dh=datahub.datahub(server,version,API_key)
dataset='ncep_cfsr_global_03'
variable_names = 'Temperature_height_above_ground,Snow_depth_surface'
time_start = '1979-01-01T00:00:00'
time_end = '2010-12-31T23:00:00'
```
For the start, let's see where exactly PyeongChang (green) and Beijing (orange) are located.
```
plt.figure(figsize=(10,8))
m = Basemap(projection='merc',llcrnrlat=7,urcrnrlat=58,\
llcrnrlon=62,urcrnrlon=149,lat_ts=20,resolution='l')
x,y = m(128.47,37.55)
x2,y2 = m(116.39,39.99)
m.drawcoastlines()
m.drawcountries()
m.bluemarble()
m.scatter(x,y,50,marker='o',color='#00FF00',zorder=4)
m.scatter(x2,y2,50,marker='o',color='orange',zorder=4)
plt.show()
```
We are selecting the Gangwon province area in South Korea, where the PyeongChang 2018 Winter Olympics is taking place.
```
area_name1 = 'PyeongChang'
latitude_north1 = 37.79; longitude_west1 = 128.76
latitude_south1 = 37.20; longitude_east1 = 127.54
```
At the same time, we are also selecting Beijing, Zhangjiakou and Yanqing in China where the Beijing 2022 Winter Olympics will be taking place.
```
area_name2 = 'Beijing'
latitude_north2 = 41.12; longitude_west2 = 114.46
latitude_south2 = 39.56; longitude_east2 = 116.93
```
### Download the data with package API
1. Create package objects
2. Send commands for the package creation
3. Download the package files
```
package1 = package_api.package_api(dh,dataset,variable_names,longitude_west1,longitude_east1,latitude_south1,latitude_north1,time_start,time_end,area_name=area_name1)
package2 = package_api.package_api(dh,dataset,variable_names,longitude_west2,longitude_east2,latitude_south2,latitude_north2,time_start,time_end,area_name=area_name2)
package1.make_package()
package2.make_package()
```
```
package1.download_package()
package2.download_package()
```
### Work with downloaded files
To evaluate the snow conditions, we start by looking at snow depth. We open the files with xarray and add a data column `Temp_celsius` that holds the temperature in degrees Celsius, since the raw values are in Kelvin.
```
dd1 = xr.open_dataset(package1.local_file_name,decode_cf=False)
del (dd1['Temperature_height_above_ground'].attrs['missing_value'])
del (dd1['Snow_depth_surface'].attrs['missing_value'])
dd1 = xr.conventions.decode_cf(dd1)
dd1['Temp_celsius'] = dd1.Temperature_height_above_ground
dd1['Temp_celsius'].values = dd1['Temp_celsius'].values - 273.15
dd1['Temp_celsius'].attrs['units'] = 'Celsius'
dd2 = xr.open_dataset(package2.local_file_name,decode_cf=False)
del (dd2['Temperature_height_above_ground'].attrs['missing_value'])
del (dd2['Snow_depth_surface'].attrs['missing_value'])
dd2 = xr.conventions.decode_cf(dd2)
dd2['Temp_celsius'] = dd2.Temperature_height_above_ground
dd2['Temp_celsius'].values = dd2['Temp_celsius'].values - 273.15
dd2['Temp_celsius'].attrs['units'] = 'Celsius'
```
So, we are going to look at the snow depth in February, when the Olympic Games take place. First we will select the values from February and calculate the mean values.
```
i_snow = np.where(dd1.Snow_depth_surface['time.month'].values == 2)
feb_mean_snow1 = dd1.Snow_depth_surface[i_snow].resample(time="1AS").mean('time').mean(axis=(1,2))
feb_mean_snow2 = dd2.Snow_depth_surface[i_snow].resample(time="1AS").mean('time').mean(axis=(1,2))
```
From the plot below we can see that both places have snow during February. However, the snow cover tends to be quite low, usually less than 10 cm. It is even lower in Beijing, where the average snow depth in February is 0.016 meters, while in PyeongChang it is around 0.034 meters.
This means that both of the cities need to produce massive amounts of artificial snow in advance to meet the necessary conditions of the Winter Olympic Games. They will need to start making snow in early January. Producing artificial snow requires quite specific weather conditions, of which the most critical one is the temperature: it has to be -3.5 degrees Celsius or colder to successfully make snow.
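The day-counting logic used further below can be sketched on a toy array of daily mean temperatures (the values here are made up):

```
import numpy as np

# Hypothetical daily mean temperatures (deg C) for part of a January.
daily_temps = np.array([-5.1, -2.0, -7.3, -3.6, 0.4, -4.0])
# A boolean mask summed over counts the days cold enough for snowmaking.
snowmaking_days = (daily_temps < -3.5).sum()
snowmaking_days
```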
```
comparison_bar_chart(feb_mean_snow1,area_name1, feb_mean_snow2,area_name2,'Year', np.arange(1979,2011,1),'Snow depth [m]','Mean snow depth in February')
print ('Overall average snow cover in PyeongChang in February is ' + str("%.3f" % np.mean(feb_mean_snow1.values)) + ' m')
print ('Overall average snow cover in Beijing in February is ' + str("%.3f" % np.mean(feb_mean_snow2.values)) + ' m')
```
Fortunately, after we plotted the mean temperature data for January, we discovered that even though both of the locations fail to meet the natural snow conditions, they at least have quite constantly cold air to enable artificial snowmaking. Beijing tends to be a bit colder than PyeongChang, on average -9.72 C, compared to -6.20 C in PyeongChang.
```
i = np.where(dd1.Temp_celsius['time1.month'].values == 1)
#make_comparison_plot(data1,area_name1,data2,area_name2,title,**kwargs)
make_comparison_plot(dd1.Temp_celsius[i].resample(time1="1AS").mean('time1').mean(axis=(1,2,3)),area_name1, dd2.Temp_celsius[i].resample(time1="1AS").mean('time1').mean(axis=(1,2,3)),area_name2,'Mean temperature at 2 m in January',xaxis_label = 'Year',yaxis_label = 'Temperature [$^oC$]')
print ('Overall mean temperature in PyeongChang in January ' + str("%.2f" % dd1.Temp_celsius[i].resample(time1="1AS").mean('time1').mean(axis=(0,1,2,3)).values))
print ('Overall mean temperature in Beijing in January ' + str("%.2f" % dd2.Temp_celsius[i].resample(time1="1AS").mean('time1').mean(axis=(0,1,2,3)).values))
```
As for making artificial snow, it is important that the temperature drops below -3.5 C, so we will find out on how many days in January the conditions are right.
We currently have data every 6 hours, so we will first compute daily mean temperatures and then plot on how many days temperatures were below -3.5 C.
On the plot below, we can see that Beijing has a few more days where the temperature drops below -3.5 C. However, we can also see that there have been some warmer years for both locations - 1989, 1990 and 2002 had very warm Januaries. During most years at least half of the month has had temperatures below -3.5 C. So, this means that making artificial snow usually wouldn't be a problem.
```
i_jan = np.where(dd1.Temp_celsius['time1.month'].values == 1)
temp_jan1 = dd1.Temp_celsius[i_jan].resample(time1="1D").mean('time1').mean(axis=(1,2,3))
temp_jan2 = dd2.Temp_celsius[i_jan].resample(time1="1D").mean('time1').mean(axis=(1,2,3))
make_comparison_plot(temp_jan1[np.where(temp_jan1.values < -3.5)].groupby('time1.year').count(),area_name1,temp_jan2[np.where(temp_jan2.values < -3.5)].groupby('time1.year').count(),area_name2,'Days in January with temperature below -3.5')
```
In conclusion, after analyzing the snow depth in February and average temperature in January in PyeongChang and Beijing during the recent decades, we discovered that even though both of the locations fail to meet the natural snow conditions, they at least have quite constantly cold air to enable artificial snowmaking. Hence we believe that neither of the Winter Olympic Games locations will fail to meet the necessary snow requirements.
# Welcome
The **Py**thon package **f**or **A**coustics **R**esearch (pyfar) contains classes and functions for the acquisition, inspection, and processing of audio signals. This is the pyfar demo notebook and a good place for getting started. In this notebook, you will see examples of the most important pyfar functionality.
**Note:** This is not a substitute for the [pyfar documentation](https://pyfar.readthedocs.io/en/latest/) that provides a complete description of the pyfar functionality.
## Contents
[Getting started](#getting_started)
[Handling Audio Data](#handling_audio_data)
- [FFT normalization](#fft_normalization)
- [Energy and power signals](#energy_and_power_signals)
- [Accessing Signal data](#accessing_signal_data)
- [Iterating Signals](#iterating_signals)
- [Signal meta data](#signal_meta_data)
- [Arithmetic operations](#arithmetic_operations)
- [Plotting](#plotting)
- [Interactive plots](#interactive_plots)
- [Manipulating plots](#manipulating_plots)
[Coordinates](#coordinates)
- [Entering coordinate points](#coordinates_enter)
- [Retrieving coordinate points](#coordinates_retrieve)
- [Rotating coordinate points](#coordinates_rotate)
[Orientations](#orientations)
- [Entering orientations](#entering_orientations)
- [Retrieving orientations](#retrieving_orientations)
- [Rotating orientations](#rotating_orientations)
[Signals](#signals)
[DSP](#dsp)
- [Filtering](#filtering)
[In'n'out](#in_and_out)
- [Read and write workspace](#io_workspaces)
- [Read and write wav files](#io_wav_files)
- [Read SOFA files](#io_sofa)
Let's start by importing pyfar and numpy:
# Getting started<a class="anchor" id="getting_started"></a>
Please note that this is not a Python tutorial. We assume that you are aware of basic Python coding and concepts including the use of `conda` and `pip`. If you did not install pyfar already please do so by running the command
`pip install pyfar`
After this go to your Python editor of choice and import pyfar
```
# import packages
import pyfar
from pyfar import Signal # managing audio signals
from pyfar import Coordinates # managing spatial sampling points
from pyfar import Orientations # managing orientation vectors
import numpy as np
```
# Handling audio data<a class="anchor" id="handling_audio_data"></a>
Audio data are the basis of pyfar and there are three classes for storing and handling it. Most data will be stored in objects of the `Signal` class, which is intended for time and frequency data that was sampled at equi-distant times and frequencies. Examples for this are audio recordings or single sided spectra between 0 Hz and half the sampling rate.
The other two classes `TimeData` and `FrequencyData` are intended to store incomplete audio data, for example time signals that were not sampled at equi-distant times, or frequency data that are not available for all frequencies between 0 Hz and half the sampling rate. We will only look at `Signals`; however, `TimeData` and `FrequencyData` are very similar.
`Signals` are stored along with information about the sampling rate, the domain (`time`, or `freq`), the FFT type and an optional comment. Let's go ahead and create a single channel signal:
```
# create a dirac signal with a sampling rate of 44.1 kHz
fs = 44100
x = np.zeros(44100)
x[0] = 1
x_energy = Signal(x, fs)
# show information
x_energy
```
## FFT Normalization<a class="anchor" id="fft_normalization"></a>
Different FFT normalizations are available that scale the spectrum of a Signal. Pyfar knows six normalizations: `'amplitude'`, `'rms'`, `'power'`, and `'psd'` from [Ahrens, et al. 2020](http://www.aes.org/e-lib/browse.cfm?elib=20838), `'unitary'` (only applies weights for single sided spectra as in Eq. 8 in [Ahrens, et al. 2020](http://www.aes.org/e-lib/browse.cfm?elib=20838)), and `'none'` (applies no normalization). The default normalization is `'none'`, which is useful for **energy signals**, i.e., signals with finite energy such as impulse responses. The other FFT normalizations are intended for **power signals**, i.e., samples of signals with infinite energy, such as noise or sine signals. Let's create a signal with a different normalization
```
x = np.sin(2 * np.pi * 1000 * np.arange(441) / fs)
x_power = Signal(x, fs, fft_norm='rms')
x_power
```
The normalization can be changed. In this case the spectral data of the signal is converted internally using `pyfar.fft.normalization()`
```
x_power.fft_norm = 'amplitude'
x_power
```
## Energy and power signals<a class="anchor" id="energy_and_power_signals"></a>
You might have realized that pyfar distinguishes between energy and power signals, which is required for some operations. Signals with the FFT normalization `'none'` are considered energy signals, while all other FFT normalizations result in power signals.
## Accessing Signal data<a class="anchor" id="accessing_signal_data"></a>
You can access the data, i.e., the audio signal, inside a Signal object in the time and frequency domain by simply using
```
time_data = x_power.time
freq_data = x_power.freq
```
Two things are important here:
1. `time_data` and `freq_data` are mutable! That means `x_power.time` changes if you change `time_data`. If this is not what you want use `time_data = x_power.time.copy()` instead.
2. The frequency data of signals depends on the Signal's `fft_norm`.
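The mutability caveat in point 1 can be sketched with plain numpy arrays, which behave analogously (this is a generic numpy illustration, not pyfar-specific code):

```
import numpy as np

backing = np.array([1.0, 2.0, 3.0])
view = backing          # plain assignment: both names share one buffer
view[0] = 99.0
print(backing[0])       # the "original" changed too
safe = backing.copy()   # an independent copy
safe[0] = 0.0
backing[0]              # unaffected by changes to the copy
```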
`Signals` and some other pyfar objects support slicing. Let's illustrate that for a two channel signal
```
# generate two channel time data
time = np.zeros((2, 4))
time[0,0] = 1 # first sample of first channel
time[1,0] = 2 # first sample of second channel
x_two_channels = Signal(time, 44100)
x_first_channel = x_two_channels[0]
```
`x_first_channel` is a `Signal` object itself, which contains the first channel of `x_two_channels`:
```
x_first_channel.time
```
A third option to access `Signals` is to copy it
```
x_copy = x_two_channels.copy()
```
It is important to note that this returns an independent copy of `x_two_channels`. In contrast, the plain assignment `x_copy = x_two_channels` might not be what you want: in that case changes to `x_copy` will also change `x_two_channels`. The `copy()` operation is available for all pyfar objects.
## Iterating Signals<a class="anchor" id="iterating_signals"></a>
It is the aim of pyfar that all operations work on N-dimensional `signals`. Nevertheless, you can also iterate `signals` if you need to apply operations depending on the channel. Let's look at a simple example
```
signal = Signal([[0, 0, 0], [1, 1, 1]], 44100) # 2-channel signal
# iterate the signal
for n, channel in enumerate(signal):
print(f"Channel: {n}, time data: {channel.time}")
# do something channel dependent
channel.time = channel.time + n
# write changes to the signal
signal[n] = channel
# q.e.d.
print(f"\nNew signal time data:\n{signal.time}")
```
`Signal` uses the standard `numpy` iterator which always iterates the first dimension. In case of a 2-D array as in the example above these are the channels.
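This iteration behaviour can be sketched with a plain numpy array: iterating a 2-D array yields its rows, i.e., the first dimension.

```
import numpy as np

arr = np.array([[0, 0, 0], [1, 1, 1]])
# The standard numpy iterator walks the first axis.
rows = [row.tolist() for row in arr]
rows
```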
## Signal meta data<a class="anchor" id="signal_meta_data"></a>
The `Signal` object also holds useful metadata. The most important might be:
- `Signal.n_samples`: The number of samples in each channel (`Signal.time.shape[-1]`)
- `Signal.n_bins`: The number of frequencies in each channel (`Signal.freq.shape[-1]`)
- `Signal.times`: The sampling times of `Signal.time` in seconds
- `Signal.freqs`: The frequencies of `Signal.freq` in Hz
- `Signal.comment`: A comment for documenting the signal content
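As a sketch of the relations these properties encode (pure Python; this assumes the single-sided spectrum of a real-valued signal, and is not pyfar's implementation):

```python
sampling_rate = 44100
n_samples = 8

# number of frequency bins of a single-sided spectrum of a real signal
n_bins = n_samples // 2 + 1

# sampling times in seconds and frequencies in Hz
times = [n / sampling_rate for n in range(n_samples)]
freqs = [k * sampling_rate / n_samples for k in range(n_bins)]

print(n_bins)     # 5
print(times[1])   # one sample period in seconds
print(freqs[-1])  # Nyquist frequency: 22050.0 Hz
```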
## Arithmetic operations<a class="arithmetic_operations" id="signals"></a>
The arithmetic operations `add`, `subtract`, `multiply`, `divide`, and `power` are available for `Signal` (time or frequency domain operations) as well as for `TimeData` and `FrequencyData`. The operations work on arbitrary numbers of Signals and array likes. Let's check out a simple example
```
# add two energy signals
x_sum = pyfar.classes.audio.add((x_energy, x_energy), 'time')
x_sum.time
```
In this case, `x_sum` is also an energy Signal. However, if any power Signal is involved in an arithmetic operation, the result will be a power Signal. The FFT normalization of the result is obtained from the first (power) Signal in the input data. You can also apply arithmetic operations on a `Signal` and a vector. Under the hood, the operations use numpy's [array broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.html?highlight=broadcast#module-numpy.doc.broadcasting). This means you can add scalars, vectors, and matrices to a signal. Let's look at a frequency domain example
```
x_sum = pyfar.classes.audio.add((x_energy, 1), 'freq')
x_sum.time
```
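The broadcasting these operations rely on is plain numpy behavior; for example, adding a per-channel offset to 2-channel data (a generic numpy sketch, independent of pyfar):

```python
import numpy as np

data = np.zeros((2, 4))             # 2 channels, 4 bins
offsets = np.array([[1.0], [2.0]])  # one offset per channel, shape (2, 1)

# the (2, 1) offsets are broadcast across the last axis of (2, 4)
result = data + offsets
print(result)
# [[1. 1. 1. 1.]
#  [2. 2. 2. 2.]]
```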
The Python operators `+`, `-`, `*`, `/`, and `**` are overloaded with the **frequency domain** arithmetic functions for `Signal` and `FrequencyData`. For `TimeData` they correspond to time domain operations. Thus, the example above can also be shortened to
```
x_sum = x_energy + 1
x_sum.time
```
## Plotting<a class="anchor" id="plotting"></a>
Inspecting signals can be done with the `pyfar.plot` module, which uses common plot functions based on `Matplotlib`. For example a plot of the magnitude spectrum
```
pyfar.plot.freq(x_power)
```
We set the FFT normalization to 'amplitude' before. The plot thus shows the amplitude (1, or 0 dB) of our sine wave contained in `x_power`. We can also plot the RMS ($1/\sqrt{2}$, or $\approx-3$ dB)
```
x_power.fft_norm = 'rms'
pyfar.plot.line.freq(x_power)
```
### Interactive plots<a class="anchor" id="#interactive_plots"></a>
pyfar provides keyboard shortcuts for switching plots, zooming in and out, moving along the x and y axis, and for zooming and moving the range of colormaps. To do this, you need to use an interactive Matplotlib backend. This can for example be done by including
`%matplotlib qt`
or
`matplotlib.use('Qt5Agg')`
in your code. These are the available keyboard shortcuts
```
shortcuts = pyfar.plot.shortcuts()
```
Note that additional controls are available through Matplotlib's [interactive navigation](https://matplotlib.org/3.1.1/users/navigation_toolbar.html).
### Manipulating plots<a class="anchor" id="#manipulating_plots"></a>
In many cases, the layout of the plot should be adjusted, which can be done using Matplotlib and the axes handle that is returned by all plot functions. For example, the range of the x-axis can be changed.
```
ax = pyfar.plot.time(x_power)
ax.set_xlim(0, 2)
```
Note: For an easy use of the pyfar plotstyle (available as 'light' and 'dark' theme) wrappers for Matplotlib's `use` and `context` are available as `pyfar.plot.use` and `pyfar.plot.context`.
# Coordinates<a class="anchor" id="coordinates"></a>
The `Coordinates()` class is designed for storing, manipulating, and accessing coordinate points. It supports a large variety of different coordinate conventions and the conversion between them. Examples of data that can be stored are microphone positions of a spherical microphone array and loudspeaker positions of a sound field synthesis system. Let's create an empty `Coordinates` object and look at the implemented conventions first:
```
c = Coordinates()
c.systems()
```
## Entering coordinate points<a class="anchor" id="coordinates_enter"></a>
Coordinate points can be entered manually or by using one of the available sampling schemes contained in `pyfar.samplings`. We will do the latter using an equal angle sampling and look at the information provided by the coordinates object:
```
c = pyfar.samplings.sph_equal_angle((20, 10))
# show general information
print(c)
# plot the sampling points
c.show()
```
Inside the `Coordinates` object, the points are stored in an N-dimensional array of size `[..., 3]` where - in this case - the last dimension holds the azimuth, colatitude, and radius. Information about the coordinate array can be obtained via `c.cshape`, `c.csize`, and `c.cdim`. These properties are similar to numpy's `shape`, `size`, and `ndim` but ignore the last dimension, which is always 3.
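The relation between the stored array and these properties can be sketched with a plain numpy array (hypothetical values; this mirrors the description above, not pyfar's implementation):

```python
import numpy as np

# a 20 x 10 grid of points, each holding (azimuth, colatitude, radius)
points = np.zeros((20, 10, 3))

cshape = points.shape[:-1]   # shape ignoring the trailing coordinate axis
csize = points[..., 0].size  # total number of points
cdim = points.ndim - 1       # number of point dimensions

print(cshape, csize, cdim)   # (20, 10) 200 2
```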
## Retrieving coordinate points<a class="anchor" id="coordinates_retrieve"></a>
There are different ways to retrieve points from a `Coordinates` object. All points can be obtained in cartesian, spherical, and cylindrical coordinates using the getter functions `c.get_cart()`, `c.get_sph()` and `c.get_cyl()`, e.g.:
```
cartesian_coordinates = c.get_cart()
```
Different methods are available for obtaining a specific subset of coordinates. For example the nearest point(s) can be obtained by
```
c_out = c.get_nearest_k(
270, 90, 1, k=1, domain='sph', convention='top_colat', unit='deg', show=True)
```
To obtain all points within a specified euclidean distance or arc distance, you can use `c.get_nearest_cart()` and `c.get_nearest_sph()`. To obtain more complicated subsets of any coordinate, e.g., the horizontal plane with `colatitude=90` degree, you can use
```
mask_hor = c.get_slice('colatitude', 'deg', 90, show=True)
```
## Rotating coordinates<a class="anchor" id="coordinates_rotate"></a>
You can apply rotations using quaternions, rotation vectors/matrixes and euler angles with `c.rotate()`, which is a wrapper for `scipy.spatial.transform.Rotation`. For example rotating around the y-axis by 45 degrees can be done with
```
c.rotate('y', 45)
c.show()
```
Note that this changes the points inside the `Coordinates` object, which means that you have to be careful not to apply the rotation multiple times, e.g., when re-evaluating cells during debugging.
# Orientations<a class="anchor" id="orientations"></a>
The `Orientations()` class is designed for storing, manipulating, and accessing orientation vectors. Examples for this are orientations of directional loudspeakers during measurements or head orientations. It is good to know that `Orientations` is inherited from `scipy.spatial.transform.Rotation` and that all methods of this class can also be used with `Orientations`.
## Entering orientations<a class="anchor" id="entering_orientations"></a>
Let's go ahead, create an object, and show the result
```
views = [[0, 1, 0],
[1, 0, 0],
[0, -1, 0]]
up = [0, 0, 1]
orientations = Orientations.from_view_up(views, up)
orientations.show(show_rights=False)
```
It is also possible to create `Orientations` from a `Coordinates` object, or from mixtures of `Coordinates` objects and array likes. The following is equivalent to the example above
```
views_c = Coordinates([90, 0, 270], 0, 1,
domain='sph', convention='top_elev', unit='deg')
orientations = Orientations.from_view_up(views_c, up)
```
## Retrieving orientations<a class="anchor" id="retrieving_orientations"></a>
Orientations can be retrieved as view, up, and right vectors and in any format supported by `scipy.spatial.transform.Rotation`. They can also be converted into any coordinate convention supported by pyfar by putting them into a `Coordinates` object. Let's check out just one way for now
```
views, ups, rights = orientations.as_view_up_right()
views
```
## Rotating orientations<a class="anchor" id="rotating_orientations"></a>
Rotations can be done using the methods inherited from `scipy.spatial.transform.Rotation`. You can for example rotate around the y-axis this way
```
rotation = Orientations.from_euler('y', 30, degrees=True)
orientations_rot = orientations * rotation
orientations_rot.show(show_rights=False)
```
# Signals<a class="signals" id="dsp"></a>
The `pyfar.signals` module contains a variety of common audio signals including sine signals, sweeps, noise, and pulsed noise. For brevity, let's look at just one example
```
sweep = pyfar.signals.exponential_sweep_time(2**12, [100, 22050])
pyfar.plot.time_freq(sweep)
```
# DSP<a class="dsp" id="dsp"></a>
`pyfar.dsp` offers lots of useful functions for manipulating audio data. Let's take a tour
## Filtering<a class="in_and_out" id="filtering"></a>
`pyfar.dsp.filter` contains wrappers for the most common filters of `scipy.signal`
- Butterworth,
- Chebyshev type I and II,
- Elliptic (Cauer), and
- Bessel/Thomson
and other useful filter functions
- Linkwitz-Riley Crossover networks
- Fractional octave filters
- Parametric equalizers
- Shelve filters
They can all be accessed in a similar manner, for example
```
x_filter = pyfar.dsp.filter.peq(x_energy, center_frequency=1e3, gain=10, quality=2)
pyfar.plot.line.freq(x_filter)
```
# In'n'out<a class="in_and_out" id="signals"></a>
Now that you know what pyfar is about, let's see how you can save your work and read common data types.
## Read and write pyfar data<a class="in_and_out" id="#io_workspaces"></a>
Pyfar contains functions for saving all pyfar objects and common data types such as numpy arrays using `pyfar.io.write()` and `pyfar.io.read()`. This creates .far files that also support compression.
## Read and write wav-files<a class="in_and_out" id="wav_files"></a>
Wav-files are commonly used in the audio community to store and exchange data. You can read them with
`signal = pyfar.io.read_wav(filename)`
and write them with
`pyfar.io.write_wav(signal, filename, overwrite=True)`.
You can write any `signal` to a wav-file, even if it contains values > 1. Multidimensional `signals` are reshaped to 2D arrays before writing.
## Read SOFA files<a class="in_and_out" id="#io_sofa"></a>
[SOFA files](https://www.sofaconventions.org) can be used to store spatially distributed acoustical data sets. Examples for this are room acoustic measurements at different positions in a room or a set of head-related transfer functions for different source positions. SOFA files can quickly be read by
`signal, source, receiver = pyfar.io.read_sofa(filename)`
which returns the audio data as a `Signal` and the source and receiver coordinates as a `Coordinates` object.
`read_sofa` is a wrapper for `python_sofa`, which can be used to write SOFA files or access more meta data contained in SOFA files.
### Model Exploration
```
import numpy as np
import pandas as pd
import os
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.pipeline import make_pipeline
from matplotlib import pyplot as plt
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import TimeSeriesSplit
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
```
Load in dataset and create training and test subsets. Create a function to run models and output their accuracy.
```
def load_data(bin=False):
    project_dir = os.path.dirname(os.path.abspath(''))
    df = pd.read_json(os.path.join(project_dir, 'model_prepped_dataset.json'))
    X = df.loc[:, ~df.columns.isin(['Outcome', 'Outcome_Bin_H'])]
    if bin:
        y = df['Outcome_Bin_H']
    else:
        y = df['Outcome']
    X = X.drop([
        'Day',
        'Season',
        'Home_Team_Streak',
        'Away_Team_Streak',
        'Home_Team_Home_Streak',
        'Away_Team_Away_Streak',
        'Match_Relevance',
        'Home_Team_Home_Form',
        'Away_Team_Away_Form',
        'Home_Team_Home_Goals',
        'Away_Team_Away_Goals'
    ], axis=1)
    return X, y

def prep_datasets(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    return X_train, X_test, y_train, y_test

def compare_models(models, X_train, y_train, y_test):
    # note: X_test is taken from the enclosing scope
    for model in models:
        model[1].fit(X_train, y_train)
        y_pred = model[1].predict(X_test)
        accu = accuracy_score(y_test, y_pred) * 100
        print(
            f"{model[0]}: "
            f"Accuracy: {accu:.2f}"
        )
    return
```
Define various models and run them so that they can be compared. Where scaling is required, it is done so via the inclusion of a pipeline.
```
np.random.seed(2)
models = [
('lgr', make_pipeline(StandardScaler(), LogisticRegression())),
('rfc', RandomForestClassifier(max_depth=2)),
('knn', make_pipeline(StandardScaler(), KNeighborsClassifier())),
('dtc', DecisionTreeClassifier()),
('abc', AdaBoostClassifier()),
('gbc', GradientBoostingClassifier())
]
X, y = load_data(bin=True)
X_train, X_test, y_train, y_test = prep_datasets(X, y)
compare_models(models, X_train, y_train, y_test)
```
(Example only) View the importance of features in the random forest classification model.
```
plt.xticks(rotation=90)
plt.bar(list(X), models[1][1].feature_importances_)
```
As there is very little difference between models with default parameters, the logistic regression model is selected for hyperparameter tuning via grid search.
There are two approaches to splitting the dataset for this:
1) Split the dataset into three folds: train, validation, and test. Then perform the model selection and hyperparameter search, each time training on the train set and checking the score on the validation set.
2) Split into two folds, train and test, and then perform cross-validation on the train set for the model selection and hyperparameter search. This time there is no single validation set but as many as there are folds in the cross-validation, making it more robust.
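As a sketch of approach 2, the fold indices for a k-fold cross-validation can be generated with a few lines of plain Python (a simplified version of what `KFold` does, without shuffling):

```python
def kfold_indices(n_samples, n_splits):
    """Yield (train, validation) index lists for each fold."""
    indices = list(range(n_samples))
    fold_size = n_samples // n_splits
    for i in range(n_splits):
        # each fold takes a different contiguous slice as the validation set
        val = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, val

folds = list(kfold_indices(10, 5))
print(len(folds))    # 5 folds
print(folds[0][1])   # first validation fold: [0, 1]
```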
```
X, y = load_data(bin=True)
X_train, X_test, y_train, y_test = prep_datasets(X, y)
ss = StandardScaler()
X_train = ss.fit_transform(X_train)
X_test = ss.transform(X_test)
model_1 = LogisticRegression()
params = {
'solver': ['newton-cg', 'lbfgs', 'liblinear'],
'penalty': ['none', 'l1', 'l2', 'elasticnet'],
'C': [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]}
cv = KFold(n_splits=10, random_state=1, shuffle=True)
grid_search = GridSearchCV(model_1, params, scoring='accuracy', n_jobs=-1, cv=cv)
grid_result = grid_search.fit(X_train, y_train)
print(f'Best Score: {grid_result.best_score_ * 100:.2f}%')
print(f'Best Hyperparameters: {grid_result.best_params_}')
```
Now we know the best model hyperparameters and will apply these to the model.
Using these best parameters we can also experiment with different cross validation techniques to see how these impact the output scores at each fold and to validate that the model is outputting consistent predictions.
```
class BlockingTimeSeriesSplit():
    def __init__(self, n_splits):
        self.n_splits = n_splits

    def get_n_splits(self, X, y, groups):
        return self.n_splits

    def split(self, X, y=None, groups=None):
        n_samples = len(X)
        k_fold_size = n_samples // self.n_splits
        indices = np.arange(n_samples)
        margin = 0
        for i in range(self.n_splits):
            start = i * k_fold_size
            stop = start + k_fold_size
            mid = int(0.8 * (stop - start)) + start
            yield indices[start: mid], indices[mid + margin: stop]

cv_techniques = [
    ('K-Fold CV', KFold(n_splits=5, random_state=1, shuffle=True)),
    ('Time Series CV', TimeSeriesSplit(n_splits=10)),
    ('Blocking Time Series CV', BlockingTimeSeriesSplit(n_splits=10))
]

model_1 = LogisticRegression(solver='newton-cg', penalty='l2', C=0.01)

for cvs in cv_techniques:
    accu = cross_val_score(model_1, X_train, y_train, cv=cvs[1], n_jobs=-1, scoring='accuracy')
    accu = [f'{a * 100:.2f}%' for a in accu]
    print(f'{cvs[0]}: {accu}')
```
We can also carry out a random search for the best hyperparameters on a different model, in this case a Random Forest. This will allow us to statistically compare the models later. As this is only a comparator, just some of the hyperparameters are tuned, to aid performance.
```
n_estimators = [int(x) for x in np.linspace(start = 10, stop = 500, num = 20)]
max_depth = [int(x) for x in np.linspace(2, 32, num = 4)]
max_depth.append(None)
random_grid = {'n_estimators': n_estimators,
'max_depth': max_depth}
model_2 = RandomForestClassifier()
# Random search of parameters, using 3 fold cross validation, search across 20 different combinations, and use all available cores
rdm_search = RandomizedSearchCV(estimator = model_2, param_distributions = random_grid, n_iter = 20, cv = 3, verbose=2, random_state=42, n_jobs = -1)
rdm_search.fit(X_train, y_train)
print(f'Best Score: {rdm_search.best_score_ * 100:.2f}%')
print(f'Best Hyperparameters: {rdm_search.best_params_}')
```
# WIP: Update doc
To verify that the model increases prediction accuracy and to test the null hypothesis, it can be compared to a simple random-choice predictor.
https://towardsdatascience.com/validating-your-machine-learning-model-25b4c8643fb7
```
model_1 = LogisticRegression(solver='newton-cg', penalty='l2', C=0.01)
model_1.fit(X_train, y_train)
model_2 = RandomForestClassifier(n_estimators=293, max_depth=12)
model_2.fit(X_train, y_train)
lgr_scores = cross_val_score(model_1, X_train, y_train, cv=cv_techniques[0][1], n_jobs=-1, scoring='accuracy')
rfc_scores = cross_val_score(model_2, X_train, y_train, cv=cv_techniques[0][1], n_jobs=-1, scoring='accuracy')
from scipy.stats import wilcoxon
stat, p = wilcoxon(lgr_scores, rfc_scores, zero_method='zsplit')
stat, p
```
We can apply this significance test for comparing two Machine Learning models. Using k-fold cross-validation we can create, for each model, k accuracy scores. This will result in two samples, one for each model.
Then, we can use the Wilcoxon signed-rank test to test if the two samples differ significantly from each other. If they do, then one is more accurate than the other.
The result will be a p-value. If that value is lower than 0.05 we can reject the null hypothesis that there are no significant differences between the models.
NOTE: It is important that you keep the same folds between the models to make sure the samples are drawn from the same population. This is achieved by simply setting the same random_state in the cross-validation procedure.
McNemar’s test is used to check the extent to which the predictions of one model match those of another. This is referred to as the homogeneity of the contingency table. From that table, we can calculate $\chi^2$, which can be used to compute the p-value.
Again, if the p-value is lower than 0.05 we can reject the null hypothesis and see that one model is significantly better than the other.
We can use mlxtend package to create the table and calculate the corresponding p-value:
```
import numpy as np
from mlxtend.evaluate import mcnemar_table, mcnemar
lgr_predict = model_1.predict(X_test)
print(f'{accuracy_score(y_test, lgr_predict) * 100:.2f}%')
rfc_predict = model_2.predict(X_test)
print(f'{accuracy_score(y_test, rfc_predict) * 100:.2f}%')
# Calculate p value
tb = mcnemar_table(y_target = y_test,
y_model1 = lgr_predict,
y_model2 = rfc_predict)
chi2, p = mcnemar(ary=tb, exact=True)
print('chi-squared:', chi2)
print('p-value:', p)
```
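For intuition, the exact (binomial) variant of McNemar's test used above can be computed by hand from the two discordant cell counts of the contingency table (the counts here are hypothetical; mlxtend performs an equivalent computation internally):

```python
from math import comb

# discordant counts: b = model 1 right / model 2 wrong, c = the reverse
b, c = 12, 5
n = b + c

# exact two-sided binomial p-value under H0: b ~ Binomial(n, 0.5)
k = min(b, c)
p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
p = min(p, 1.0)

print(round(p, 4))  # ~0.1435 for these counts: no significant difference
```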
The 5x2CV paired t-test is a method often used to compare Machine Learning models due to its strong statistical foundation.
The method works as follows. Let’s say we have two classifiers, A and B. We randomly split the data in 50% training and 50% test. Then, we train each model on the training data and compute the difference in accuracy between the models from the test set, called DiffA. Then, the training and test splits are reversed and the difference is calculated again in DiffB.
This is repeated five times, after which the mean variance of the differences is computed ($s^2$). It is then used to calculate the t-statistic.
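Written out (following Dietterich, 1998; here $p_1^{(1)}$ denotes the accuracy difference from the first fold of the first replication and $s_i^2$ the variance estimate of replication $i$ — this formula is supplied for reference and is not from the original text):

$$ t = \frac{p_1^{(1)}}{\sqrt{\frac{1}{5}\sum_{i=1}^{5} s_i^2}} $$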
NOTE: You can use the combined 5x2CV F-test instead, which was shown to be slightly more robust (Alpaydin, 1999). This method is implemented in mlxtend as
`from mlxtend.evaluate import combined_ftest_5x2cv`.
```
from mlxtend.evaluate import paired_ttest_5x2cv
# Calculate p-value
t, p = paired_ttest_5x2cv(estimator1 = model_1,
estimator2 = model_2,
X = X_train, y = y_train,
random_seed=1)
print('t statistic: %.3f' % t)
print('p value: %.3f' % p)
```
Note that these accuracy values are not used in the paired t-test procedure, as new train/test splits are generated during the resampling; the values above just serve as intuition.
Now, let's assume a significance threshold of α=0.05 for rejecting the null hypothesis that both algorithms perform equally well on the dataset and conduct the 5x2cv t test:
Since p>α, we cannot reject the null hypothesis and may conclude that the performance of the two algorithms is not significantly different.
# The Atoms of Computation
Programming a quantum computer is now something that anyone can do in the comfort of their own home.
But what to create? What is a quantum program anyway? In fact, what is a quantum computer?
These questions can be answered by making comparisons to standard digital computers. Unfortunately, most people don’t actually understand how digital computers work either. In this article, we’ll look at the basic principles behind these devices. To help us transition over to quantum computing later on, we’ll do it using the same tools as we'll use for quantum.
Below is some Python code we'll need to run if we want to use the code in this page:
```
from qiskit import QuantumCircuit, execute, Aer
from qiskit.visualization import plot_histogram
```
## 1. Splitting information into bits <a id="bits"></a>
The first thing we need to know about is the idea of bits. These are designed to be the world’s simplest alphabet. With only two characters, 0 and 1, we can represent any piece of information.
One example is numbers. You are probably used to representing a number through a string of the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. In this string of digits, each digit represents how many times the number contains a certain power of ten. For example, when we write 9213, we mean
$$ 9000 + 200 + 10 + 3 $$
or, expressed in a way that emphasizes the powers of ten
$$ (9\times10^3) + (2\times10^2) + (1\times10^1) + (3\times10^0) $$
Though we usually use this system based on the number 10, we can just as easily use one based on any other number. The binary number system, for example, is based on the number two. This means using the two characters 0 and 1 to express numbers as multiples of powers of two. For example, 9213 becomes 10001111111101, since
$$
\begin{aligned}
9213 &= (1 \times 2^{13}) + (0 \times 2^{12}) + (0 \times 2^{11}) + (0 \times 2^{10}) \\
&+ (1 \times 2^9) + (1 \times 2^8) + (1 \times 2^7) + (1 \times 2^6) \\
&+ (1 \times 2^5) + (1 \times 2^4) + (1 \times 2^3) + (1 \times 2^2) \\
&+ (0 \times 2^1) + (1 \times 2^0)
\end{aligned}
$$
In this we are expressing numbers as multiples of 2, 4, 8, 16, 32, etc. instead of 10, 100, 1000, etc.
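The expansion above can be checked directly in Python with the built-in base conversions:

```python
# convert 9213 to a binary string and back again
print(format(9213, 'b'))         # 10001111111101
print(int('10001111111101', 2))  # 9213
```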
```
from qiskit_textbook.widgets import binary_widget
binary_widget(nbits=5)
```
These strings of bits, known as binary strings, can be used to represent more than just numbers. For example, there is a way to represent any text using bits. For any letter, number, or punctuation mark you want to use, you can find a corresponding string of at most eight bits using [this table](https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/com.ibm.aix.networkcomm/conversion_table.htm). Though these are quite arbitrary, this is a widely agreed-upon standard. In fact, it's what was used to transmit this article to you through the internet.
This is how all information is represented in computers. Whether numbers, letters, images, or sound, it all exists in the form of binary strings.
Like our standard digital computers, quantum computers are based on this same basic idea. The main difference is that they use *qubits*, an extension of the bit to quantum mechanics. In the rest of this textbook, we will explore what qubits are, what they can do, and how they do it. In this section, however, we are not talking about quantum at all. So, we just use qubits as if they were bits.
<!-- ::: q-block.exercise -->
## Exercise
Complete these sentences:
1. The number "5" in decimal is [[101|11001|110|001]] in binary.
2. If our computer has 1 bit, it can be in [[2|1|3|4]] different states.
3. If our computer has 2 bits, it can be in [[4|3|2|8]] different states.
4. If our computer has 8 bits, it can be in [[256|128|342]] different states.
5. If you have $n$ bits, they can be in [[$2^n$|$n×2$|$n^2$]] different states.
<!-- ::: -->
## 2. Computation as a diagram <a id="diagram"></a>
Whether we are using qubits or bits, we need to manipulate them in order to turn the inputs we have into the outputs we need. For the simplest programs with very few bits, it is useful to represent this process in a diagram known as a *circuit diagram*. These have inputs on the left, outputs on the right, and operations represented by arcane symbols in between. These operations are called 'gates', mostly for historical reasons.
Here's an example of what a circuit looks like for standard, bit-based computers. You aren't expected to understand what it does. It should simply give you an idea of what these circuits look like.

For quantum computers, we use the same basic idea but have different conventions for how to represent inputs, outputs, and the symbols used for operations. Here is the quantum circuit that represents the same process as above.

In the rest of this section, we will explain how to build circuits. At the end, you'll know how to create the circuit above, what it does, and why it is useful.
## 3. Your first quantum circuit <a id="first-circuit"></a>
In a circuit, we typically need to do three jobs: First, encode the input, then do some actual computation, and finally extract an output. For your first quantum circuit, we'll focus on the last of these jobs. We start by creating a circuit with eight qubits and eight outputs.
```
n = 8
n_q = n
n_b = n
qc_output = QuantumCircuit(n_q,n_b)
```
This circuit, which we have called <code>qc_output</code>, is created by Qiskit using <code>QuantumCircuit</code>. The number <code>n_q</code> defines the number of qubits in the circuit. With <code>n_b</code> we define the number of output bits we will extract from the circuit at the end.
The extraction of outputs in a quantum circuit is done using an operation called <code>measure</code>. Each measurement tells a specific qubit to give an output to a specific output bit. The following code adds a <code>measure</code> operation to each of our eight qubits. The qubits and bits are both labelled by the numbers from 0 to 7 (because that’s how programmers like to do things). The command <code>qc.measure(j,j)</code> adds a measurement to our circuit <code>qc</code> that tells qubit <code>j</code> to write an output to bit <code>j</code>.
```
for j in range(n):
qc_output.measure(j,j)
```
Now that our circuit has something in it, let's take a look at it.
```
qc_output.draw()
```
Qubits are always initialized to give the output <code>0</code>. Since we don't do anything to our qubits in the circuit above, this is exactly the result we'll get when we measure them. We can see this by running the circuit many times and plotting the results in a histogram. We will find that the result is always <code>00000000</code>: a <code>0</code> from each qubit.
```
counts = execute(qc_output,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
The reason for running many times and showing the result as a histogram is because quantum computers may have some randomness in their results. In this case, since we aren’t doing anything quantum, we get just the [[00000000|11111111|00001111|01010101]] result with certainty.
Note that this result comes from a quantum simulator, which is a standard computer calculating what an ideal quantum computer would do. Simulations are only possible for small numbers of qubits (~30 qubits), but they are nevertheless a very useful tool when designing your first quantum circuits. To run on a real device you simply need to replace <code>Aer.get_backend('qasm_simulator')</code> with the backend object of the device you want to use.
## 4. Example: Creating an Adder Circuit <a id="adder"></a>
### 4.1 Encoding an input <a id="encoding"></a>
Now let's look at how to encode a different binary string as an input. For this, we need what is known as a NOT gate. This is the most basic operation that you can do in a computer. It simply flips the bit value: <code>0</code> becomes <code>1</code> and <code>1</code> becomes <code>0</code>. For qubits, it is an operation called <code>x</code> that does the job of the NOT.
Below we create a new circuit dedicated to the job of encoding and call it <code>qc_encode</code>. For now, we only specify the number of qubits.
```
qc_encode = QuantumCircuit(n)
qc_encode.x(7)
qc_encode.draw()
```
Extracting results can be done using the circuit we have from before: <code>qc_output</code>. Adding the two circuits using <code>qc_encode + qc_output</code> creates a new circuit with everything needed to extract an output added at the end.
```
qc = qc_encode + qc_output
qc.draw()
```
Now we can run the combined circuit and look at the results.
```
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
Now our computer outputs the string ```10000000``` instead.
The bit we flipped, which comes from qubit 7, lives on the far left of the string. This is because Qiskit numbers the bits in a string from right to left. Some prefer to number their bits the other way around, but Qiskit's system certainly has its advantages when we are using the bits to represent numbers. Specifically, it means that qubit 7 is telling us about how many `2^7`s we have in our number. So by flipping this bit, we’ve now written the number [[128|256|64|32]] in our simple 8-bit computer.
Now try out writing another number for yourself. You could do your age, for example. Just use a search engine to find out what the number looks like in binary (if it includes a <code>0b</code>, just ignore it), and then add some 0s to the left side if you are younger than 128.
```
qc_encode = QuantumCircuit(n)
qc_encode.x(1)
qc_encode.x(5)
qc_encode.draw()
```
Now we know how to encode information in a computer. The next step is to process it: To take an input that we have encoded, and turn it into an output that we need.
### 4.2 Remembering how to add <a id="remembering-add"></a>
To look at turning inputs into outputs, we need a problem to solve. Let’s do some basic maths. In primary school, you will have learned how to take large mathematical problems and break them down into manageable pieces. For example, how would you go about solving the following?
```code
9213
+ 1854
= ????
```
One way is to do it digit by digit, from right to left. So we start with 3+4
```code
9213
+ 1854
= ???7
```
And then 1+5
```code
9213
+ 1854
= ??67
```
Then we have 2+8=10. Since this is a two digit answer, we need to carry the one over to the next column.
```code
9213
+ 1854
= ?067
¹
```
Finally we have 9+1+1=11, and get our answer
```code
9213
+ 1854
= 11067
¹
```
This may just be simple addition, but it demonstrates the principles behind all algorithms. Whether the algorithm is designed to solve mathematical problems or process text or images, we always break big tasks down into small and simple steps.
To run on a computer, algorithms need to be compiled down to the smallest and simplest steps possible. To see what these look like, let’s do the above addition problem again but in binary.
```code
10001111111101
+ 00011100111110
= ??????????????
```
Note that the second number has a bunch of extra 0s on the left. This just serves to make the two strings the same length.
Our first task is to do the 1+0 for the column on the right. In binary, as in any number system, the answer is 1. We get the same result for the 0+1 of the second column.
```code
10001111111101
+ 00011100111110
= ????????????11
```
Next, we have 1+1. As you’ll surely be aware, 1+1=2. In binary, the number 2 is written ```10```, and so requires two bits. This means that we need to carry the 1, just as we would for the number 10 in decimal.
```code
10001111111101
+ 00011100111110
= ???????????011
¹
```
The next column now requires us to calculate ```1+1+1```. This means adding three numbers together, so things are getting complicated for our computer. But we can still compile it down to simpler operations, and do it in a way that only ever requires us to add two bits together. For this, we can start with just the first two 1s.
```code
1
+ 1
= 10
```
Now we need to add this ```10``` to the final ```1``` , which can be done using our usual method of going through the columns.
```code
10
+ 01
= 11
```
The final answer is <code>11</code> (also known as 3).
Now we can get back to the rest of the problem. With the answer of <code>11</code>, we have another carry bit.
```code
10001111111101
+ 00011100111110
= ??????????1011
¹¹
```
So now we have another 1+1+1 to do. But we already know how to do that, so it’s not a big deal.
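If you want to check the final answer without finishing the columns by hand, Python can convert between binary strings and integers (a quick sketch):

```python
a = int('10001111111101', 2)   # 9213
b = int('00011100111110', 2)   # 1854
total = a + b
print(total)                   # → 11067
print(format(total, 'b'))      # → '10101100111011'
```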
In fact, everything left so far is something we already know how to do. This is because, if you break everything down into adding just two bits, there are only four possible things you’ll ever need to calculate. Here are the four basic sums (we’ll write all the answers with two bits to be consistent).
```code
0+0 = 00 (in decimal, this is 0+0=0)
0+1 = 01 (in decimal, this is 0+1=1)
1+0 = 01 (in decimal, this is 1+0=1)
1+1 = 10 (in decimal, this is 1+1=2)
```
This is called a *half adder*. If our computer can implement this, and if it can chain many of them together, it can add anything.
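These four sums can be captured in a couple of lines of ordinary Python, which makes a useful reference point before we build the quantum version (a sketch):

```python
def half_adder(a, b):
    """Classical half adder: returns (carry, sum) for two input bits."""
    return a & b, a ^ b  # carry is AND, sum is XOR

for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"{a}+{b} = {carry}{s}")
```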
### 4.3 Adding with Qiskit <a id="adding-qiskit"></a>
Let's make our own half adder using Qiskit. This will include a part of the circuit that encodes the input, a part that executes the algorithm, and a part that extracts the result. The first part will need to be changed whenever we want to use a new input, but the rest will always remain the same.

The two bits we want to add are encoded in the qubits 0 and 1. The above example encodes a ```1``` in both these qubits, and so it seeks to find the solution of ```1+1```. The result will be a string of two bits, which we will read out from the qubits 2 and 3. All that remains is to fill in the actual program, which lives in the blank space in the middle.
The dashed lines in the image are just to distinguish the different parts of the circuit (although they can have more interesting uses too). They are made by using the `barrier` command.
The basic operations of computing are known as logic gates. We’ve already used the NOT gate, but this is not enough to make our half adder. We could only use it to manually write out the answers. Since we want the computer to do the actual computing for us, we’ll need some more powerful gates.
To see what we need, let’s take another look at what our half adder needs to do.
```code
0+0 = 00
0+1 = 01
1+0 = 01
1+1 = 10
```
The rightmost bit in all four of these answers is completely determined by whether the two bits we are adding are the same or different. So for ```0+0``` and ```1+1```, where the two bits are equal, the rightmost bit of the answer comes out ```0```. For ```0+1``` and ```1+0```, where we are adding different bit values, the rightmost bit is ```1```.
To get this part of our solution correct, we need something that can figure out whether two bits are different or not. Traditionally, in the study of digital computation, this is called an [XOR gate](gloss:xor).
| Input 1 | Input 2 | XOR Output |
|:-------:|:-------:|:------:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
In quantum computers, the job of the XOR gate is done by the controlled-NOT gate. Since that's quite a long name, we usually just call it the CNOT. In Qiskit its name is ```cx```, which is even shorter. In circuit diagrams, it is drawn as in the image below.
```
qc_cnot = QuantumCircuit(2)
qc_cnot.cx(0,1)
qc_cnot.draw()
```
This is applied to a pair of qubits. One acts as the *control qubit* (this is the one with the little dot). The other acts as the *target qubit* (with the big circle).
There are multiple ways to explain the effect of the CNOT. One is to say that it looks at its two input bits to see whether they are the same or different. Next, it overwrites the target qubit with the answer. The target becomes ```0``` if they are the same, and ```1``` if they are different.
<img src="images/cnot_xor.svg">
Another way of explaining the CNOT is to say that it does a NOT on the target if the control is ```1```, and does nothing otherwise. This explanation is just as valid as the previous one (in fact, it’s the one that gives the gate its name).
Try the CNOT out for yourself by trying each of the possible inputs. For example, here's a circuit that tests the CNOT with the input ```01```.
```
qc = QuantumCircuit(2,2)
qc.x(0)
qc.cx(0,1)
qc.measure(0,0)
qc.measure(1,1)
qc.draw()
```
If you execute this circuit, you’ll find that the output is ```11```. We can think of this happening because of either of the following reasons.
- The CNOT calculates whether the input values are different and finds that they are, which means that it wants to output ```1```. It does this by writing over the state of qubit 1 (which, remember, is on the left of the bit string), turning ```01``` into ```11```.
- The CNOT sees that qubit 0 is in state ```1```, and so applies a NOT to qubit 1. This flips the ```0``` of qubit 1 into a ```1```, and so turns ```01``` into ```11```.
Here is a table showing all the possible inputs and corresponding outputs of the CNOT gate:
| Input (q1 q0) | Output (q1 q0) |
|:-------------:|:--------------:|
| 00 | 00 |
| 01 | 11 |
| 10 | 10 |
| 11 | 01 |
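Viewed classically, the CNOT is just "XOR the control into the target"; the table above can be reproduced with a two-line Python function (a sketch — qubit ordering as in the table, with qubit 0 on the right):

```python
def cnot(control, target):
    """Classical CNOT: flip the target iff the control is 1."""
    return control, target ^ control

for q1 in (0, 1):      # target qubit (left bit of the string)
    for q0 in (0, 1):  # control qubit (right bit of the string)
        c, t = cnot(q0, q1)
        print(f"{q1}{q0} -> {t}{c}")
```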
For our half adder, we don’t want to overwrite one of our inputs. Instead, we want to write the result on a different pair of qubits. For this, we can use two CNOTs.
```
qc_ha = QuantumCircuit(4,2)
# encode inputs in qubits 0 and 1
qc_ha.x(0) # For a=0, remove this line. For a=1, leave it.
qc_ha.x(1) # For b=0, remove this line. For b=1, leave it.
qc_ha.barrier()
# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2,0) # extract XOR value
qc_ha.measure(3,1)
qc_ha.draw()
```
We are now halfway to a fully working half adder. We just have the other bit of the output left to do: the one that will live on qubit 3.
If you look again at the four possible sums, you’ll notice that there is only one case in which the leftmost bit of the answer is ```1``` instead of ```0```: ```1+1```=```10```. It happens only when both the bits we are adding are ```1```.
To calculate this part of the output, we could just get our computer to look at whether both of the inputs are ```1```. If they are — and only if they are — we need to do a NOT gate on qubit 3. That will flip it to the required value of ```1``` for this case only, giving us the output we need.
For this, we need a new gate: like a CNOT but controlled on two qubits instead of just one. This will perform a NOT on the target qubit only when both controls are in state ```1```. This new gate is called the [Toffoli](gloss:toffoli). For those of you who are familiar with Boolean logic gates, it is basically an AND gate.
In Qiskit, the Toffoli is represented with the `ccx` command.
```
qc_ha = QuantumCircuit(4,2)
# encode inputs in qubits 0 and 1
qc_ha.x(0) # For a=0, remove this line. For a=1, leave it.
qc_ha.x(1) # For b=0, remove this line. For b=1, leave it.
qc_ha.barrier()
# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
# use ccx to write the AND of the inputs on qubit 3
qc_ha.ccx(0,1,3)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2,0) # extract XOR value
qc_ha.measure(3,1) # extract AND value
qc_ha.draw()
```
In this example, we are calculating ```1+1```, because the two input bits are both ```1```. Let's see what we get.
```
counts = execute(qc_ha,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
The result is ```10```, which is the binary representation of the number 2. We have built a computer that can solve the famous mathematical problem of 1+1!
Now you can try it out with the other three possible inputs, and show that our algorithm gives the right results for those too.
The half adder contains everything you need for addition. With the NOT, CNOT, and Toffoli gates, we can create programs that add any set of numbers of any size.
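To check this claim classically, here is a simulation of the half adder built from nothing but bit-level NOT, CNOT, and Toffoli operations on a 4-bit register (a sketch; qubits are indexed as in the circuits above):

```python
def x(bits, t):            # NOT gate
    bits[t] ^= 1

def cx(bits, c, t):        # CNOT gate: XOR the control into the target
    bits[t] ^= bits[c]

def ccx(bits, c1, c2, t):  # Toffoli gate: XOR the AND of the controls into the target
    bits[t] ^= bits[c1] & bits[c2]

def half_adder_circuit(a, b):
    bits = [0, 0, 0, 0]       # qubits 0-3, all starting at 0
    if a: x(bits, 0)          # encode the inputs
    if b: x(bits, 1)
    cx(bits, 0, 2)            # write the XOR of the inputs onto qubit 2
    cx(bits, 1, 2)
    ccx(bits, 0, 1, 3)        # write the AND of the inputs onto qubit 3
    return bits[3], bits[2]   # (carry, sum)

print(half_adder_circuit(1, 1))  # → (1, 0), i.e. 1+1 = 10 in binary
```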
These three gates are enough to do everything else in computing too. In fact, we can even do without the CNOT. Additionally, the NOT gate is only really needed to create bits with value ```1```. The Toffoli gate is essentially the atom of mathematics. It is the simplest element, from which every other problem-solving technique can be compiled.
As we'll see, in quantum computing we split the atom.
```
import qiskit
qiskit.__qiskit_version__
```

# Fundamentals of Data Science
---
# PPGI/UFRJ 2020.3
## Prof Sergio Serra e Jorge Zavaleta
---
# Reproducibility in Python
Source: *Re-run, Repeat, Reproduce, Reuse, Replicate: Transforming Code into Scientific Contributions*, Fabien C. Y. Benureau and Nicolas P. Rougier
> "Replicability is a cornerstone of science. If an experimental result cannot be re-obtained by an independent party, it merely becomes, at best, an observation that may inspire future research (Mesirov, 2010; Open Science Collaboration, 2015)."
## R0 - Irreproducibility
A program can fail as a scientific contribution in many different ways and for many different reasons, e.g. code errors, deprecated methods, older compiler versions, lack of documentation, ...
```
import random
for i in xrange(10):  # xrange?
    step = random.choice([-1, +1])
    x += step
    print x,  # print?
```
## R1 - Re-Runnable
Re-runnable code should describe—with enough details to be recreated—an execution environment in which it is executable. It is far from being either obvious or easy.
```
# Random walk (R1: re-runnable)
# Tested with Python 3.8
# Where S = steps, D = data, E = environment and R = results
import random

x = 0  # initialization
walk = []
for i in range(10):  # loop - S' = S and D' = D
    step = random.choice([-1, +1])  # random element from a non-empty sequence - E' ~ E
    x += step
    walk.append(x)
print(walk)  # output - R != R'
```
### Run again, again and again...
Are the outputs the same?
#### S’= S and E’~ E and D’= D and R != R’
```
# Random walk (R1: re-runnable)
# Tested with Python 3.8
# Where S = steps, D = data, E = environment and R = results
import random

x = 0  # initialization
walk = []
for i in range(10):  # loop - S' = S and D' = D
    step = random.choice([-1, +1])  # random element from a non-empty sequence - E' ~ E
    x += step
    walk.append(x)
print(walk)  # output - R != R'
```
## R2 - Repeatable
A repeatable program is one that can be rerun and that produces the SAME results on successive runs. For that:
- the program needs to be deterministic
- the initialization of pseudo-random number generators must be controlled
- previous results need to be available (so they can be compared with current results)
#### S’= S and E’~ E and D’= D and R = R’
```
# Random walk (R2: repeatable)
# Tested with Python 3
import random

random.seed(1)  # RNG initialization
x = 0  # initialization
walk = []
for i in range(10):  # loop - S' = S and D' = D
    step = random.choice([-1, +1])  # random element from a non-empty sequence - E' ~ E
    x += step  # note: the pseudo-random sequence may differ between Python versions (e.g. 3.2 vs 3.3)
    walk.append(x)
print(walk)

# Saving output to disk
with open('data/results-R2.txt', 'w') as fd:
    fd.write(str(walk))  # output - R = R' on the same Python engine!
```
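A quick way to convince yourself that seeding makes the walk repeatable is to run it twice with the same seed and compare the results (a small sketch):

```python
import random

def walk10(seed):
    """Ten-step random walk, seeded for repeatability."""
    random.seed(seed)
    x, walk = 0, []
    for _ in range(10):
        x += random.choice([-1, +1])
        walk.append(x)
    return walk

first = walk10(1)
second = walk10(1)
print(first == second)  # → True: same seed, same engine, same walk
```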
## R3 - Reproducible
A reproducible program is one that can be rerun by an independent party, in a similar environment, and still produce the SAME results.
The program needs to be deterministic.
#### S’= S and E’~ E and D’= D and R = R’
```
# Random walk (R3)
# Copyright (c) 2017 N.P. Rougier and F.C.Y. Benureau
# Adapted by Serra
# Run under Windows 10
# Tested with 64 bit (AMD64)
import sys, subprocess, datetime, random

def compute_walk():
    x = 0
    walk = []
    for i in range(10):
        if random.uniform(-1, +1) > 0:
            x += 1
        else:
            x -= 1
        walk.append(x)
    return walk

# Unit test
random.seed(42)
assert compute_walk() == [1,0,-1,-2,-1,0,1,0,-1,-2]

# Random walk for 10 steps
seed = 1
random.seed(seed)
walk = compute_walk()

# Display & save scientific results & poor provenance
print(walk)
results = {
    "data"     : walk,
    "seed"     : seed,
    "timestamp": str(datetime.datetime.utcnow()),
    "system"   : sys.version}
with open("data/results-R3a.txt", "w") as fd:
    fd.write(str(results))
```
## R3 - Reproducible
A reproducible program is one that can be rerun by an independent party, in a similar environment, and still produce the SAME results.
The program needs to be deterministic.
#### S’= S and E’~ E and D’= D and R = R’
#### Some Provenance
```
# Copyright (c) 2017 N.P. Rougier and F.C.Y. Benureau
# Adapted by Serra
# Run under Windows 10
# Tested with 64 bit (AMD64)
import sys, subprocess, datetime, random

# Retrospective provenance
agent = "Sergio Serra"
myseed = 42

def compute_walk():
    x = 0
    walk = []
    for i in range(10):
        if random.uniform(-1, +1) > 0:
            x += 1
        else:
            x -= 1
        walk.append(x)
    return walk

# If repository is dirty, don't run anything
if subprocess.call(("git", "diff-index", "--quiet", "HEAD")):
    print("Repository is dirty, please commit first")
    sys.exit(1)

# Get git hash if any
hash_cmd = ("git", "rev-parse", "HEAD")
revision = subprocess.check_output(hash_cmd)

# Unit test
random.seed(int(myseed))
assert compute_walk() == [1,0,-1,-2,-1,0,1,0,-1,-2]

# Random walk for 10 steps
seed = 1
random.seed(seed)
walk = compute_walk()

# Display & save results & some retrospective provenance
print(walk)
results = {
    "data"     : walk,
    "seed"     : seed,
    "myseed"   : myseed,
    "timestamp": str(datetime.datetime.utcnow()),
    "revision" : revision,
    "system"   : sys.version,
    "agent"    : agent
}
with open("data/results-R3b.txt", "w") as fd:
    fd.write(str(results))
```
## R3 - Reproducible - Rich Version
A reproducible program is one that can be rerun by an independent party, in a similar environment, and still produce the SAME results.
The program needs to be deterministic.
#### S’= S and E’~ E and D’= D and R = R’
#### Data Provenance - Rich view
```
# Random walk (R4)
# Copyright (c) 2017 N.P. Rougier and F.C.Y. Benureau
# Adapted by Serra
# Run under Windows 10
# Python 3.8 - Jupyter notebook
# Tested with 64 bit (AMD64)
import sys, subprocess, datetime, random

# Retrospective provenance
agent = input("Enter the name of the one who is running the program: ")  # PROV-Agent
entity = input("Enter the name of the Dataset: ")                        # PROV-Entity
activity = input("Enter the name of the Essay: ")                        # PROV-Activity

def compute_walk(count, x0=0, step=1, seed=0):
    """Random walk

    count: number of steps
    x0   : initial position (default 0)
    step : step size (default 1)
    seed : seed for the initialization of the
           random generator (default 0)
    """
    random.seed(seed)
    x = x0
    walk = []
    for i in range(count):
        if random.uniform(-1, +1) > 0:
            x += 1
        else:
            x -= 1
        walk.append(x)
    return walk

def compute_results(count, x0=0, step=1, seed=0):
    """Compute a walk and return it with context"""
    # If repository is dirty, don't do anything
    if subprocess.call(("git", "diff-index", "--quiet", "HEAD")):
        print("Repository is dirty, please commit")
        sys.exit(1)
    # Get git hash if any
    hash_cmd = ("git", "rev-parse", "HEAD")
    revision = subprocess.check_output(hash_cmd)
    # Compute results and full retrospective provenance
    walk = compute_walk(count=count, x0=x0, step=step, seed=seed)
    return {
        "data"      : walk,
        "parameters": {"count": count, "x0": x0,
                       "step": step, "seed": seed},
        "timestamp" : str(datetime.datetime.utcnow()),
        "revision"  : revision,
        "system"    : sys.version,
        "Provenance": {
            "PROV-agent (wasAttributedTo)": agent,
            "PROV-entity (wasGeneratedBy)": entity,
            "PROV-activity"               : activity}
    }

if __name__ == "__main__":
    # Unit test checking reproducibility
    # (will fail with Python <= 3.2)
    assert (compute_walk(10, 0, 1, 42) ==
            [1,0,-1,-2,-1,0,1,0,-1,-2])
    # Simulation parameters
    count, x0, seed = 10, 0, 1
    results = compute_results(count, x0=x0, seed=seed)
    # Save & display results
    with open("results-R4.txt", "w") as fd:
        fd.write(str(results))
    print(results["data"])
```
---
#### © Copyright 2021, Sergio Serra & Jorge Zavaleta
# _MLP_
## Multi Layer Perceptron Model (Feed forward Neural Networks)
An implementation of a Neural Network used for ATR (Automatic Target Recognition)
--------
```
# dependencies
import tensorflow as tf
import numpy as np
import pickle

# load data
pickle_file = 'final_dataset.pickle'
with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    train_dataset = save['train_dataset']
    train_labels = save['train_labels']
    valid_dataset = save['valid_dataset']
    valid_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']
    del save  # help gc free up memory

print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)

# Now let's test if the file really matches or is corrupted
# train_labels[0] => 2
# so the first image is a BTR70 -- let's test this out
print(train_labels[0])
with open('TRAIN_BTR70.pickle', 'rb') as f:
    s = pickle.load(f)
btr_train = s
del s
for image in btr_train:
    if (image - train_dataset[0]).any() == 0:
        print('no problem')
        break
print('done')
```
**Reformat Data:** flatten the image arrays and one-hot encode the labels.
```
image_size = 128
num_labels = 3

def reformat(dataset, labels):
    dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
    # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)

# hyperparameters
num_steps = 551
batch_size = 30
num_labels = 3
h_nodes = 200
beta = 0.01

def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

train_subset = 30

graph = tf.Graph()
with graph.as_default():
    # Input data.
    # Load the training, validation and test data into constants that are
    # attached to the graph.
    # In TensorFlow you create a bunch of nodes (operations) - some are constant
    # (do not require tensor input) and some are not, e.g. matrix multiplication.
    # The end node that you want as output is passed to the session.
    # Placing data inside constants means no computation is performed on these tensors.
    tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
    tf_train_labels = tf.constant(train_labels[:train_subset])
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    # These are the parameters that we are going to be training. The weight
    # matrix will be initialized using random values following a (truncated)
    # normal distribution. The biases get initialized to zero.
    weights = tf.Variable(
        tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # Training computation.
    # We multiply the inputs with the weight matrix, and add biases. We compute
    # the softmax and cross-entropy (it's one operation in TensorFlow, because
    # it's very common, and it can be optimized). We take the average of this
    # cross-entropy across all training examples: that's our loss.
    # y = (W*x) + b
    logits = tf.matmul(tf_train_dataset, weights) + biases
    # S(y) -> reduced against the one-hot labels by the cross-entropy D(S,L);
    # that is the loss
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
    # add the l2 regularization term
    regularization = tf.nn.l2_loss(weights)
    loss = tf.reduce_mean(loss + beta * regularization)

    # Optimizer.
    # We are going to find the minimum of this loss using gradient descent.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data.
    # These are not part of training, but merely here so that we can report
    # accuracy figures as we train.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(
        tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 50 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(
                valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

# deeper network
graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    # Note: weights1 has shape [image_size * image_size, h_nodes]
    weights1 = tf.Variable(
        tf.truncated_normal([image_size * image_size, h_nodes]))
    biases1 = tf.Variable(tf.zeros([h_nodes]))
    weights2 = tf.Variable(
        tf.truncated_normal([h_nodes, num_labels]))
    biases2 = tf.Variable(tf.zeros([num_labels]))

    # Training computation.
    logits1 = tf.matmul(tf_train_dataset, weights1) + biases1
    # send these logits through a relu
    relu_output = tf.nn.relu(logits1)
    # introduce dropout on the outputs from the relu layer
    keep_prob = 0.5
    relu_output = tf.nn.dropout(relu_output, keep_prob)
    final_logits = tf.matmul(relu_output, weights2) + biases2
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=final_logits))
    # add regularization
    regularization = tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2)
    loss = tf.reduce_mean(loss + beta * regularization)

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(final_logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(
        tf.matmul(tf_valid_dataset, weights1) + biases1), weights2) + biases2)
    test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(
        tf.matmul(tf_test_dataset, weights1) + biases1), weights2) + biases2)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 50 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(
                valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
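The one-hot trick used inside `reformat` above is worth seeing in isolation: comparing a label column against `np.arange(num_labels)` broadcasts into a boolean matrix, which is then cast to floats (a small sketch):

```python
import numpy as np

labels = np.array([2, 0, 1])
num_labels = 3
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```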
**We conclude that an MLP is not able to perform well on this task.**
<hr style="border: 6px solid#003262;" />
<div align="center">
<img src= "/assets/content/datax_logos/DataX_blue_wide_logo.png" align="center" width="100%">
</div>
<br>
# **FLASK (m320):** EASY WEB DEVELOPMENT FOR RAPID DEPLOYMENT
<br>
**Author List (in no particular order):** [Ikhlaq Sidhu](https://ikhlaq-sidhu.com/), [Elias Castro Hernandez](https://www.linkedin.com/in/ehcastroh/), and [Debbie Yuen](http://www.debbiecyuen.me/)
**About (TL/DR):** The following collection of notebooks introduces developers and data scientists to web development using Flask. Flask is one of many available web server gateway interface (WSGI) tools that enable rapid and scalable websites and apps with a relatively accessible learning curve. The barebones capacity of Flask is particularly valuable when prototyping and iterating upon products, services, and machine learning applications.
**Learning Goal(s):** Gain an understanding of how to utilize available libraries and packages to quickly build products and services -- in real-life settings, using web-first methodology, driven by data, and end-to-end. In particular, learn how to build a bare-bones flask environment that can then be repurposed to (1) handle email automation tasks, (2) deploy ML models in real-time, and (3) create engaging dashboards using D3.js.
**Associated Materials:** None
**Keywords (Tags):** flask, flask-sqlalchemy, flask-web, flask-application, website, webapp, web-app, data-x, uc-berkeley-engineering
**Prerequisite Knowledge:** (1) Python, (2) HTML, and (3) CSS
**Target User:** Data scientists, applied machine learning engineers, and developers
**Copyright:** Content curation has been used to expedite the creation of the following learning materials. Credit and copyright belong to the content creators used in facilitating this content. Please support the creators of the resources used by frequenting their sites, and social media.
<hr style="border: 4px solid#003262;" />
<a name='Part_table_contents' id="Part_table_contents"></a>
#### CONTENTS
> #### [PART 0: ABOUT AND MOTIVATION](#Part_0)
> #### [PART 1: SETTING UP FLASK](#Part_1)
> #### [PART 2: DESIGN, BEHAVIORS, AND STORAGE](#Part_2)
> #### [PART 3: WRAP UP AND NEXT STEPS](#Part_3)
#### APPENDIX
> #### [PREREQUISITE KNOWLEDGE AND REQUIREMENTS](#Appendix_1)
> #### [REFERENCES AND ADDITIONAL RESOURCES](#Appendix_2)
> #### [FLASK - HELLO WORLD](#Appendix_3)
<br>
<a id='Part_0'></a>
<hr style="border: 2px solid#003262;" />
#### PART 0
## **ABOUT** AND **MOTIVATION**
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<a href="https://youtu.be/L-7o7dcbQWo">
<img src="assets/content/images/FlaskPart1_youtube.png" align="center" width="50%" padding="10px"/>
</a><br>
</div>
<a id='Part_1'></a>
<hr style="border: 2px solid#003262;" />
#### PART 1
## **SETTING** UP **FLASK**
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="assets/content/images/flask_drawing.png" align="center" width="40%" padding="10"><br>
<br>
Web Microframework Written in Python
</div>
<br>
<br>
[**Flask**](https://flask.palletsprojects.com/en/1.1.x/) is a microframework that is barebones by design. Flask has no [**data abstraction layer**](https://en.wikipedia.org/wiki/Database_abstraction_layer), does not perform [**form validation**](https://developer.mozilla.org/en-US/docs/Learn/Forms/Form_validation#What_is_form_validation), and does not bundle functionality that already exists as a separate package or library. This makes Flask particularly easy to customize, and leads to less bloated systems.
>For an accessible overview see [Harvard's CS50 lecture on using Flask](https://youtu.be/zdgYw-3tzfI).<br>
>For a crash course (1-hr intensive), see the ['Learn Flask For Python'](https://youtu.be/Z1RJmh_OqeA) tutorial by [freeCodeCamp.org](https://www.freecodecamp.org/).
<br>
<strong style="color:red">KEY CONSIDERATION:</strong> Some of the following content may be written for machines running on Linux or Mac operating systems. If you are working on a Windows machine, you will need to enable the Linux Bash Shell, or adjust Shell commands to PowerShell syntax. A tutorial on how to enable the Linux Bash Shell on Windows 10 can be found [here](https://youtu.be/xzgwDbe7foQ).
#### CONTENTS:
> [PART 1.1: SET UP AND WORKFLOW](#Part_1_1)<br>
> [PART 1.2: BASIC FLASK UP-AND-RUNNING](#Part_1_2)
<a id='Part_1_1'></a>
<hr style="border: 1px solid#003262;" />
#### PART 1.1: SET UP AND WORKFLOW
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/greens-04.png" align="center" width="40%" padding="10"><br>
<br>
Basic Flask Stack
</div>
<br>
##### **Install Python3**
https://www.python.org/downloads/
```bash
# recommended command-line approach for linux machines
sudo apt-get install python3.6
```
___
##### **Install pip3**
https://pip.pypa.io/en/stable/
```bash
# on a terminal
sudo apt-get install python3-pip
```
___
##### **Install virtualenv**
https://virtualenv.pypa.io/en/latest/
```bash
### Install virtualenv ###
# Debian, Ubuntu
$ sudo apt-get install python-virtualenv
```
```bash
# CentOS, Fedora
$ sudo yum install python-virtualenv
```
```bash
# Arch
$ sudo pacman -S python-virtualenv
```
**Note:** If working Mac OS X, or Windows, follow [these](https://flask.palletsprojects.com/en/1.1.x/installation/#installation) instructions for installing virtualenv.
<br>
**About virtualenv:** virtualenv allows users to create isolated environments for Python projects. This means that each project can be shared, collaborated upon, and modified regardless of the project dependencies -- since any dependencies, packages, and libraries are stored in directory files that are tied to the project itself and not the *local (working directory)* machine. To learn more:
> [**Real Python (Python Virtual Environments: A Primer)**](https://realpython.com/python-virtual-environments-a-primer/)
<a id='Part_1_2'></a>
<hr style="border: 1px solid#003262;" />
#### PART 1.2: BASIC FLASK UP-AND-RUNNING
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/greens-05.png" align="center" width="40%" padding="10"><br>
<br>
Flask in Virtual Environment
</div>
<br>
#### **Create an Environment**
We use virtual environments to manage dependencies for the project. Doing so ensures that each project is self-contained, so we can be certain that all required dependencies are present and will not affect other projects or the operating system itself. For complete installation instructions, see [here](https://flask.palletsprojects.com/en/1.1.x/installation/#installation).
```bash
# create the project directory and move into it
$ mkdir flaskTestApp
$ cd flaskTestApp
# create a virtual environment in a folder named venv
$ python3 -m venv venv
# Activate Environment
$ . venv/bin/activate
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_1.png" align="center" width="35%" padding="20"><br>
<br>
**Sanity Check** You'll know you've done everything correctly if you see the venv referenced in the terminal (green arrow)
</div>
```bash
# Install Flask in venv
$ pip install Flask
```
<!--Navigate back to table of contents-->
<div align="left" style="text-align: left; background-color:#003262;">
<span>
<hr style="border: 8px solid#003262;" />
<a style="color:#FFFFFF; background-color:#003262; border:1px solid #FFFFFF; border-color:#FFFFFF;border-radius:0px;border-width:0px;display:inline-block;font-family:arial,helvetica,sans-serif;font-size:24px;letter-spacing:0px;line-height:20px;padding:24px 40px;text-align:left;text-decoration:none; align:left">
<strong>CONCEPT</strong> CHECK
</a>
</span>
</div>
<!-------------------------------------->
> **To make sure you understand what has happened up to this point, and build the simplest of sites -- i.e. Flask's "Hello World" -- complete the work in [Appendix III](#Appendix_3)**
<br>
<!--Navigate back to table of contents-->
<div align="right" style="text-align: right">
<span>
<a style="color:#FFFFFF; background-color:#003262; border:1px; solid:#FFFFFF; border-color:#FFFFFF;border-radius:5px;border-width:0px;display:inline-block;font-family:arial,helvetica,sans-serif;font-size:10px;letter-spacing:0px;line-height:10px;padding:10px 20px;text-align:center;text-decoration:none; align:center" href="#Part_table_contents" name="Table of Contents" id="Part_table_contents">
Table of Contents
</a>
</span>
</div>
<!-------------------------------------->
<a id='Part_2'></a>
<hr style="border: 2px solid#003262;" />
#### PART 2
## **STRUCTURE**, DESIGN, **BEHAVIORS**, AND STORAGE
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_drawing.png" align="center" width="30%" padding="10"><br>
<br>
Complete (Basic) Flask Stack
</div>
<br>
Now that we have a [**Flask**](https://flask.palletsprojects.com/en/1.1.x/) environment up and running, we want to establish a structure that allows us to build our email client site. We then want to utilize [**Hypertext Markup Language (HTML)**](https://en.wikipedia.org/wiki/HTML) and [**Cascading Style Sheets (CSS)**](https://en.wikipedia.org/wiki/Cascading_Style_Sheets) to structure our web pages and to establish how the site should present that markup, e.g. color, layout, and fonts.
#### CONTENTS:
> [PART 2.1: SET UP NESTED DIRECTORIES AND FILES](#Part_2_1)<br>
> [PART 2.2: SET BASIC HTML & CSS](#Part_2_2)<br>
> [PART 2.3: LINK TO SQLITE](#Part_2_3)<br>
> [PART 2.4: LINE-BY-LINE BREAKDOWN OF myFlaskApp.py](#Part_2_4)<br>
> [PART 2.5 (OPTIONAL): IMPLEMENT MYSQL DATABASE AND WSGI ENVIRONMENT](#Part_2_5)
<a id='Part_2_1'></a>
<hr style="border: 1px solid#003262;" />
#### PART 2.1: SET UP TEMPLATE'S NESTED DIRECTORIES AND FILES
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/greens-06.png" align="center" width="40%" padding="10"><br>
<br>
Template and HTML
</div>
<br>
#### **Set and Initialize Local Server for Development and Testing**
```bash
# tell Flask which application file to run
$ export FLASK_APP=myFlaskApp.py
```
```bash
# start the local development server
$ python -m flask run
```
**Note:** the two commands above must be executed from within the ```flaskTestApp``` directory. They will fail if run from a subdirectory or parent directory of ```flaskTestApp```.
**IMPORTANT:** after stopping the server with ```ctrl+c```, you must deactivate the virtual environment by executing ```deactivate``` within the ```flaskTestApp``` directory.
```bash
$ deactivate
```
<br>
#### **Create Directories**
Using the ```mkdir``` command, or the file explorer on your computer, create two nested folders named ```static``` and ```templates```. **These folders must be named exactly as such**, so that Jinja and Flask know where to look for HTML files without having to specify the absolute path to the directory when using template inheritance (more below).
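As a cross-platform alternative to ```mkdir```, the same layout can be created with a short Python sketch (the ```flaskTestApp``` path follows this tutorial's example; adjust it to your own project root):

```python
from pathlib import Path

# hypothetical project root from this tutorial; change to your own path
root = Path("flaskTestApp")
for sub in ("static", "templates"):
    # parents=True also creates flaskTestApp itself if it does not exist yet
    (root / sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in root.iterdir()))  # → ['static', 'templates']
```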
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_3.png" align="center" width="35%" padding="10"><br>
<br>
**Sanity Check** you should see (venv) at the start of the prompt, and the two requested folders as well
</div>
<br>
#### **Create Files**
Using your favorite text editor, we want to create the following two files (see **/flaskTestApp/assets/templates** folder for the files themselves):
```bash
# stores template inheritance and HTML
base.html
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/base_html.png" align="center" width="40%" padding="10"><br>
<br>
</div>
```bash
# extends the content into the base.html template and displays
home.html
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/home_html.png" align="center" width="40%" padding="10"><br>
<br>
</div>
>**Note:** when using Jinja's template inheritance, you use the following delimiters
```html
<!-- used for passing variables, objects, running functions, etc -->
{% %}
```
```html
<!-- used for passing strings -->
{{ }}
```
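For orientation, here is a minimal hypothetical pair of templates showing both kinds of delimiter in use; your actual ```base.html``` and ```home.html``` (shown in the screenshots above) may differ:

```html
<!-- base.html: the skeleton; child templates fill the 'content' block -->
<!DOCTYPE html>
<html>
  <head><title>{{ title }}</title></head>
  <body>
    {% block content %}{% endblock %}
  </body>
</html>
```

```html
<!-- home.html: extends base.html and supplies the block -->
{% extends "base.html" %}
{% block content %}
  <p>Hello, {{ name }}!</p>
{% endblock %}
```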
<a id='Part_2_2'></a>
<hr style="border: 1px solid#003262;" />
#### PART 2.2: SET BASIC HTML & CSS
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/greens-07.png" align="center" width="40%" padding="10"><br>
<br>
Static Content and CSS
</div>
<br>
#### **Create Directories and Files**
Using the ```mkdir``` command, or the file explorer on your computer, inside the ```static``` directory create a folder named ```CSS``` and a file named ```main.css```. Using ```main.css``` we can specify the desired global or local CSS design constraints.
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_4.png" align="center" width="40%" padding="10"><br>
<br>
**Sanity Check** you should see (venv) at the start of the working directory, and a folder named CSS within the static folder
</div>
#### **Create Files**
Using your favorite text editor, we want to create the following file (see **/assets/flaskTestApp/static/CSS** folder for the file):
```bash
# stores stylesheet
main.css
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/main_css.png" align="center" width="40%" padding="10"><br>
<br>
</div>
<br>
#### **Enable CSS within Jinja Templating**
Using your favorite text editor, update the ```base.html``` file to contain the following
```html
<link rel="stylesheet" href="{{ url_for('static', filename='CSS/main.css')}}">
```
Your ```base.html``` file should look like the following:
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/updated_base_html.png" align="center" width="40%" padding="10"><br>
<br>
</div>
<br>
___
**About Template Inheritance:** Jinja allows users to leverage [**template inheritance**](https://jinja.palletsprojects.com/en/2.11.x/templates/#template-inheritance) when constructing a site: by spending a bit of time architecting the 'skeleton' and associated templates, a developer can save a lot of time maintaining and updating the site, since content blocks can be passed via template calls to any desired page on the site being built. To learn more about templating:
> [**Real Python (Primer on Jinja Templating)**](https://realpython.com/primer-on-jinja-templating/)
___
<a id='Part_2_3'></a>
<hr style="border: 1px solid#003262;" />
#### PART 2.3: LINK TO SQLITE
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/greens-08.png" align="center" width="40%" padding="10"><br>
<br>
Link to SQLite
</div>
<br>
#### **Launch Virtual Environment**
First make sure that you are within ```venv``` by running
```bash
$ source venv/bin/activate
```
<br>
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_9.png" align="center" width="40%" padding="10"><br>
<br>
**Sanity Check**
</div>
<br>
#### **Install SQL Alchemy**
In the terminal with the ```venv``` environment, enter
```bash
$ pip install Flask-SQLAlchemy
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_10.png" align="center" width="40%" padding="10"><br>
<br>
**Sanity Check**
</div>
<br>
#### **Enable SQL Alchemy and Create Database Initialization in myFlaskApp**
Using your favorite text editor, update the ```myFlaskApp.py``` file to match the following:
<br>
<div align="center" style="font-size:16px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<a href="https://youtu.be/NW2qg0dDuW0">
<img src="assets/content/images/flask_myFlaskApp.png" align="center" width="50%" padding="10px"/>
</a>
<div align="center"><br><strong>Click on Image to Play</strong> </div>
</div>
<br>
<!--Navigate back to table of contents-->
<div align="left" style="text-align: left; background-color:#003262;">
<span>
<hr style="border: 8px solid#003262;" />
<a style="color:#FFFFFF; background-color:#003262; border:1px solid #FFFFFF; border-color:#FFFFFF;border-radius:0px;border-width:0px;display:inline-block;font-family:arial,helvetica,sans-serif;font-size:24px;letter-spacing:0px;line-height:20px;padding:24px 40px;text-align:left;text-decoration:none; align:left">
<strong>CONCEPT</strong> CHECK
</a>
</span>
</div>
<!-------------------------------------->
>**To see how the** ```__repr__()``` **method can be used for debugging, try the following:**<br><br>
>In a terminal window, within the activated virtual environment, start a Python3 session and enter the following
```bash
$ python3
```
```python
# this portion is within the python3 session
>>> from myFlaskApp.models import userEmail
>>> uE = userEmail(username='elias', email='elias@example.com')
>>> uE
# what is the output?
```
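To see why ```__repr__``` helps here, the pattern can be sketched with a plain Python class; this is a hypothetical stand-in, not the actual SQLAlchemy model, and the concept check's output depends on how ```__repr__``` is defined in your ```models```:

```python
# stand-in for the userEmail model, illustrating the __repr__ pattern
class UserEmail:
    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        # this string is what a bare `uE` echoes in the interpreter
        return f"userEmail('{self.username}', '{self.email}')"

uE = UserEmail(username='elias', email='elias@example.com')
print(repr(uE))  # → userEmail('elias', 'elias@example.com')
```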
<hr style="border: 2px solid#003262;" />
#### **Create SQLite Database**
<br>
Start up a ```python3``` session
```bash
$ python3
```
<br>
then import the SQLite database listed on myFlaskApp
```python
>>> from myFlaskApp import db
```
<br>
Create the database ```myFlaskDB.db``` and store it within the relative path (i.e. in the current working directory, ```flaskTestApp```)
```python
>>> db.create_all()
```
<br>
Finally, exit out of the ```python3``` shell
```python
>>> exit()
```
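To confirm the database file really was created, you can inspect it with the standard-library ```sqlite3``` module; the filename ```myFlaskDB.db``` follows this tutorial, and the table names you see depend on your models:

```python
import sqlite3

# connect to the file db.create_all() wrote; connecting creates the
# file if it is missing, so make sure the path is the one you expect
con = sqlite3.connect("myFlaskDB.db")
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)  # one row per model table created by db.create_all()
con.close()
```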
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_11.png" align="center" width="50%" padding="10"><br>
<br>
**Sanity Check**
</div>
<br>
<!--Navigate back to table of contents-->
<div align="right" style="text-align: right">
<span>
<a style="color:#FFFFFF; background-color:#003262; border:1px; solid:#FFFFFF; border-color:#FFFFFF;border-radius:5px;border-width:0px;display:inline-block;font-family:arial,helvetica,sans-serif;font-size:10px;letter-spacing:0px;line-height:10px;padding:10px 20px;text-align:center;text-decoration:none; align:center" href="#Part_table_contents" name="Table of Contents" id="Part_table_contents">
Table of Contents
</a>
</span>
</div>
<!-------------------------------------->
<a id='Part_2_5'></a>
<hr style="border: 2px solid#003262;" />
#### PART 2.5 (OPTIONAL): IMPLEMENT MYSQL DATABASE AND WSGI ENVIRONMENT
<br>
The following takes you through the process of associating a database server with your Flask environment. It is not needed for this particular effort, but is available for those who are interested, and may be helpful when completing the homework.
<br>
#### **Install MySQL Database Packages**
```bash
# within venv, install the following
$ sudo apt-get install apache2 mysql-client mysql-server
```
**Note:** multiple packages can be installed by adding a single space between package names.
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_5.png" align="center" width="50%" padding="10"><br>
<br>
**Sanity Check**
</div>
#### **Install WSGI to run Flask Environment**
Setting up a WSGI environment makes development and production easier as the site scales beyond the scope of this tutorial.
```bash
# within venv, install the following
$ sudo apt-get install libapache2-mod-wsgi
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_6.png" align="center" width="50%" padding="10"><br>
<br>
**Sanity Check**
</div>
#### **Enable WSGI**
```bash
$ sudo a2enmod wsgi
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_7.png" align="center" width="35%" padding="10"><br>
<br>
**Sanity Check**
</div>
#### **Check That Flask and WSGI are Working**
```bash
$ sudo python myFlaskApp.py
```
<br>
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_8.png" align="center" width="40%" padding="20"><br>
<br>
**Sanity Check**
</div>
___
**About WSGI:** [Web Server Gateway Interface (WSGI)](http://wsgi.tutorial.codepoint.net/) defines an interface by which the server and application can communicate, without the need for a separate API layer. To learn more about WSGI:
>[**WSGI for Web Developer by Ryan Wilson-Perkin**](https://www.youtube.com/watch?v=WqrCnVAkLIo)
___
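The interface itself is tiny: a WSGI application is just a callable that takes the request environ and a `start_response` callback. A minimal stdlib sketch (independent of the Apache/mod_wsgi setup above), exercised with a fake request via `wsgiref`:

```python
from wsgiref.util import setup_testing_defaults

# a complete WSGI application: no framework required
def simple_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello World!']

# call it directly with a synthetic request environ
environ = {}
setup_testing_defaults(environ)
status_holder = []
body = simple_app(environ, lambda status, headers: status_holder.append(status))
print(status_holder[0], body)  # → 200 OK [b'Hello World!']
```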
<a id='Part_3'></a>
<hr style="border: 2px solid#003262;" />
#### PART 3
## **WRAP-UP** AND **NEXT** STEPS
<div align="center">
<img src= "/assets/content/datax_logos/DataX_icon_wide_logo.png" align="center" width="80%" padding="20">
</div>
<br>
As you may have started to notice, there is much more that can be done using Flask. Want to learn other uses of Flask? Visit the [**Data-X website**](https://datax.berkeley.edu/) to learn more, or use the following links to topics of interest:
> [**SENDING EMAIL WITH FLASK + SMTPLIB: url needed**]() Capitalizes on Flask's barebones architecture to create a lightweight email client using SMTPLIB<br>
> [**INTRODUCTION TO AWS + FLASK: url needed**]() Shows you how to deploy your Flask environment in an elastic server <br>
> [**DASHBOARDS USING D3.js + FLASK: url needed**]() Covers how to deploy a dashboard Flask website with dynamic plots using D3.js<br>
> [**PRODUCTIZED MACHINE LEARNING MODELS USING FLASK: url needed**]() Introduces how to deploy machine learning models that are accessible via the web
<br>
<br>
<a id='Appendix_1'></a>
<hr style="border: 6px solid#003262;" />
#### APPENDIX I
## PREREQUISITE **KNOWLEDGE** AND **REQUIREMENTS**
In order to create a barebones website that communicates with existing APIs to generate emails, we will need to connect several components. Specifically, we will spin up an [**AWS server**](https://aws.amazon.com/) account, create an [**Elastic Compute Cloud (EC2)**](https://aws.amazon.com/ec2/) instance, link our [**Flask**](https://flask.palletsprojects.com/en/1.1.x/) environment to the [**Google Sheets API**](https://developers.google.com/sheets/api), and then send emails using [**smtplib**](https://docs.python.org/3/library/smtplib.html) via either a local or remote server. There are many theoretical and applied topics that underlie the various components of the technology stack just mentioned. For the sake of focusing on the implementation, none of the notebooks in this series will deep dive into any of the required components. Instead, the content is purposefully accessible and designed for self-learning, with the assumption that the user is familiar with the above concepts. In case you are not yet ready but want to follow along, here are some helpful links for the prerequisites.<br>
##### **PYTHON**
This notebook and the executable files are built using [**Python**](https://www.python.org/) and rely on common Python packages (e.g. [**NumPy**](https://numpy.org/)). If you have substantial programming experience in a different language (e.g. C/C++/Matlab/Java/JavaScript), you will likely be fine; otherwise:
> [**Python (EDX free)**](https://www.edx.org/course?search_query=python)<br>
> [**Python (Coursera free)**](https://www.coursera.org/search?query=python)
##### **HTML+CSS**
[**Hypertext Markup Language (HTML)**](https://en.wikipedia.org/wiki/HTML) and [**Cascading Style Sheets (CSS)**](https://en.wikipedia.org/wiki/Cascading_Style_Sheets) are core technologies for building web pages. HTML describes the structure of web pages, while CSS is an independent language that controls the presentation of the markup: color, layout, and fonts. Add graphics and scripting to HTML+CSS and you have the basis for building web pages and web applications. Knowing how to utilize HTML+CSS is not required for this notebook. However, if you want to learn more, see the following:
> [**W3C.org (Tutorials and Courses)**](https://www.w3.org/2002/03/tutorials.html#webdesign_htmlcss)
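As a tiny illustration of that division of labor (a hypothetical snippet, not a file from this project), HTML supplies the structure and CSS the presentation:

```html
<!-- structure (HTML) -->
<p class="greeting">Hello World!</p>

<!-- presentation (CSS) -->
<style>
  .greeting { color: #003262; font-family: sans-serif; }
</style>
```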
##### **JINJA2**
[**Jinja2**](https://jinja.palletsprojects.com/en/2.11.x/) is a design-friendly templating language based on Django's templates. Optimized for working in Python, Jinja2 is fast, secure, and easy to learn. If you want to learn more about Jinja:
> [**Jinja2 (Primer on Jinja Templating)**](https://realpython.com/primer-on-jinja-templating/)
<a id='Appendix_2'></a>
<hr style="border: 2px solid#003262;" />
#### APPENDIX II
## **REFERENCES** AND ADDITIONAL **RESOURCES**
<br>
> * [**Learn Flask for Python by FreeCodeCamp.org**](https://www.youtube.com/watch?v=Z1RJmh_OqeA)
> * [**Flask Web Development in Python 3 - Variables in your HTML by Sentdex**](https://youtu.be/4vvHkziL3oQ)
> * [**Discover Flask by Real Python**](https://realpython.com/introduction-to-flask-part-1-setting-up-a-static-site/)
> * [**Flask Mega Tutorial by Miguel Grinberg**](https://courses.miguelgrinberg.com/courses/flask-mega-tutorial/lectures/5203689)
<a id='Appendix_3'></a>
<hr style="border: 2px solid#003262;" />
#### APPENDIX III
## QUICKSTART **FLASK** - **HELLO WORLD**
<br>
The following quickstart example assumes that Flask is installed. For a detailed explanation of what the code is doing, see [here](https://flask.palletsprojects.com/en/1.1.x/quickstart/). The following cell should be saved as a .py file. See **Flask_helloworld.py** in the **assets/flaskTestApp** folder, for reference.
```python
## -------------------------------------------------------------------- ##
# Import Flask Class
from flask import Flask
# create an instance of the Flask class; pass __name__ since we are using a single module
app = Flask(__name__)
## routing binds urls to functions via decorators ##
@app.route('/')
def hello_world():
return 'Hello World!'
## -------------------------------------------------------------------- ##
```
**Note:** the ```route()``` decorator tells Flask which URL should trigger the function; the function's return value is the message displayed in the browser.
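To demystify what the decorator does, here is a stdlib sketch of the URL-to-function binding idea — a toy registry, not Flask's actual implementation:

```python
# toy route registry mimicking the decorator pattern Flask uses
routes = {}

def route(path):
    def register(view_fn):
        routes[path] = view_fn   # bind the URL rule to the view function
        return view_fn
    return register

@route('/')
def hello_world():
    return 'Hello World!'

# dispatching a request is then just a dictionary lookup
print(routes['/']())  # → Hello World!
```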
___
#### **Quick Start (Run Application -- Hello World)**
For alternate ways of running the application, see [here](https://flask.palletsprojects.com/en/1.1.x/quickstart/).
```bash
# point Flask at the app, then start the builtin server
$ export FLASK_APP=Flask_helloworld.py
$ python -m flask run
```
___
<div align="center" style="font-size:12px; font-family:FreeMono; font-weight: 100; font-stretch:ultra-condensed; line-height: 1.0; color:#2A2C2B">
<img src="/assets/content/images/flask_sanity_check_2.png" align="center" width="40%" padding="10"><br>
<br>
</div>
**Sanity Check:** head over to the address printed in the terminal, where you should see the 'Hello World!' greeting. If your localhost site does not appear, see how to [debug here](https://flask.palletsprojects.com/en/1.1.x/quickstart/#what-to-do-if-the-server-does-not-start).
<!--Navigate back to table of contents-->
<div align="right" style="text-align: right">
<span>
<a style="color:#FFFFFF; background-color:#003262; border:1px solid #FFFFFF; border-color:#FFFFFF;border-radius:5px;border-width:0px;display:inline-block;font-family:arial,helvetica,sans-serif;font-size:10px;letter-spacing:0px;line-height:10px;padding:10px 20px;text-align:center;text-decoration:none; align:center" href="#Part_table_contents" name="Table of Contents" id="Part_table_contents">
Table of Contents
</a>
</span>
</div>
<!-------------------------------------->
<hr style="border: 6px solid#003262;" />
<a href="https://colab.research.google.com/github/tancik/fourier-feature-networks/blob/master/Experiments/1d_scatter_plots.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install -q git+https://www.github.com/google/neural-tangents
import jax
from jax import random, grad, jit, vmap
from jax.config import config
from jax.lib import xla_bridge
import jax.numpy as np
import neural_tangents as nt
from neural_tangents import stax
from jax.experimental import optimizers
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.pylab as pylab
from tqdm.notebook import tqdm as tqdm
from matplotlib.lines import Line2D
import time
import numpy as onp
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
# Utils
fplot = lambda x : np.fft.fftshift(np.log10(np.abs(np.fft.fft(x))))
# Signal makers
def sample_random_signal(key, decay_vec):
N = decay_vec.shape[0]
raw = random.normal(key, [N, 2]) @ np.array([1, 1j])
signal_f = raw * decay_vec
signal = np.real(np.fft.ifft(signal_f))
return signal
def sample_random_powerlaw(key, N, power):
coords = np.float32(np.fft.ifftshift(1 + N//2 - np.abs(np.fft.fftshift(np.arange(N)) - N//2)))
decay_vec = coords ** -power
return sample_random_signal(key, decay_vec) # * 100
# Network
def make_network(num_layers, num_channels, num_outputs=1):
layers = []
for i in range(num_layers-1):
layers.append(stax.Dense(num_channels, parameterization='standard'))
layers.append(stax.Relu(do_backprop=True))
layers.append(stax.Dense(num_outputs, parameterization='standard'))
return stax.serial(*layers)
# Encoding
input_encoder = lambda x, a, b: np.concatenate([a * np.sin((2.*np.pi*x[...,None]) * b),
a * np.cos((2.*np.pi*x[...,None]) * b)], axis=-1) / np.linalg.norm(a) #* np.sqrt(a.shape[0])
### complicated things
def train_model_lite(rand_key, network_size, lr, iters,
train_input, test_input, optimizer, ab):
init_fn, apply_fn, kernel_fn = make_network(*network_size)
kernel_fn = jit(kernel_fn)
run_model = jit(lambda params, ab, x: np.squeeze(apply_fn(params, input_encoder(x, *ab))))
model_loss = jit(lambda params, ab, x, y: .5 * np.sum((run_model(params, ab, x) - y) ** 2))
model_psnr = jit(lambda params, ab, x, y: -10 * np.log10(np.mean((run_model(params, ab, x) - y) ** 2)))
model_grad_loss = jit(lambda params, ab, x, y: jax.grad(model_loss)(params, ab, x, y))
opt_init, opt_update, get_params = optimizer(lr)
opt_update = jit(opt_update)
_, params = init_fn(rand_key, (-1, input_encoder(train_input[0], *ab).shape[-1]))
opt_state = opt_init(params)
pred0 = run_model(get_params(opt_state), ab, test_input[0])
pred0_f = np.fft.fft(pred0)
for i in (range(iters)):
opt_state = opt_update(i, model_grad_loss(get_params(opt_state), ab, *train_input), opt_state)
train_psnr = model_psnr(get_params(opt_state), ab, *train_input)
test_psnr = model_psnr(get_params(opt_state), ab, *test_input)
# theory = predict_psnr(kernel_fn, np.fft.fft(test_input[1]), pred0_f, ab, i * lr)
return get_params(opt_state), train_psnr, test_psnr #, theory
```
# Make fig 4
```
N_train = 1024
data_powers = [.5, 1.0, 1.5]
# data_powers = [1.0]
N_test_signals = 4
N_embed = 16
network_size = (4, 256)
# learning_rate = 5e-3
learning_rate = 2e-3
sgd_iters = 1000
rand_key = random.PRNGKey(0)
def data_maker(rand_key, N_pts, N_signals, p):
rand_key, *ensemble = random.split(rand_key, 1 + N_signals)
data = np.stack([sample_random_powerlaw(ensemble[i], N_pts, p) for i in range(N_signals)])
# data = (data - data.min(-1, keepdims=True)) / (data.max(-1, keepdims=True) - data.min(-1, keepdims=True)) - .5
data = (data - data.min()) / (data.max() - data.min()) - .5
return data, rand_key
# Signal
M = 2 ## Don't change
N = N_train
x_test = np.float32(np.linspace(0,1.,N*M,endpoint=False))
x_train = x_test[::M]
search_vals = 2. ** np.linspace(-5., 4., 1*8+1)
bval_generators = {
'gaussian' : (32, lambda key, sc, N : random.normal(key, [N]) * sc),
'unif' : (64, lambda key, sc, N : random.uniform(key, [N]) * sc),
'power1' : (80, lambda key, sc, N : (sc ** random.uniform(key, [N]))),
'laplace' : (20, lambda key, sc, N : random.laplace(key, [N]) * sc),
}
names = list(bval_generators.keys())
train_fn = lambda s, key, ab : train_model_lite(key, network_size, learning_rate, sgd_iters,
(x_train, s[::2]), (x_test[1::2], s[1::2]), optimizers.adam, ab)
best_powers = [.4, .75, 1.5]
outputs_meta = []
dense_meta = []
s_lists = []
for p, bp in tqdm(zip(data_powers, best_powers)):
s_list, rand_key = data_maker(rand_key, N*M, N_test_signals, p)
s_lists.append(s_list)
b = np.float32(np.arange(1, N//2+1))
ab_dense = (b ** -bp, b)
rand_key, *ensemble_key = random.split(rand_key, 1+s_list.shape[0])
ensemble_key = np.array(ensemble_key)
ab_samples = np.array([ab_dense] * s_list.shape[0])
outputs_dense = vmap(train_fn, in_axes=(0, 0, 0))(s_list, ensemble_key, ab_samples)
dense_meta.append(outputs_dense)
outputs = []
for sc in tqdm(search_vals, leave=False):
outputs.append([])
for k in tqdm(names, leave=False):
rand_key, *ensemble_key = random.split(rand_key, 1+s_list.shape[0])
ensemble_key = np.array(ensemble_key)
factor, b_fn = bval_generators[k]
ab_samples = np.array([(np.ones([N_embed]), b_fn(ensemble_key[i], factor * sc, N_embed)) for i in range(s_list.shape[0])])
rand_key, *ensemble_key = random.split(rand_key, 1+s_list.shape[0])
ensemble_key = np.array(ensemble_key)
z = vmap(train_fn, in_axes=(0, 0, 0))(s_list, ensemble_key, ab_samples)
outputs[-1].append(list(z) + [ab_samples])
outputs_meta.append(outputs)
```
# Make Figure
```
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
params = {'legend.fontsize': 22,
'axes.labelsize': 22,
'axes.titlesize': 26,
'xtick.labelsize':18,
'ytick.labelsize':18}
pylab.rcParams.update(params)
matplotlib.rcParams['mathtext.fontset'] = 'cm'
matplotlib.rcParams['mathtext.rm'] = 'serif'
plt.rcParams["font.family"] = "cmr10"
colors_k = np.array([[0.8872, 0.4281, 0.1875],
[0.8136, 0.6844, 0.0696],
[0.2634, 0.6634, 0.4134],
[0.0943, 0.5937, 0.8793],
[0.3936, 0.2946, 0.6330],
[0.7123, 0.2705, 0.3795]])
legend_map = {
'gaussian' : "Gaussian",
'unif' : "Uniform",
'power1' : r"Uniform $\log$",
'laplace' : "Laplacian",
}
markers = {
'gaussian' : "x",
'unif' : "D",
'power1' : "s",
'laplace' : "v",
}
plt.figure(figsize=(20,4))
plot_ind = 0
letters = ['a','b','c']
for data_power, outputs, outputs_dense in zip(data_powers, outputs_meta, dense_meta):
plot_ind += 1
if plot_ind == 4:
continue
plt.subplot(1,3,plot_ind)
ax = plt.gca()
errs_dense = outputs_dense[2]
errs_dense = 10**(errs_dense/-10)
errs_dense_std = np.std(errs_dense)
errs_dense = np.mean(errs_dense)
marker_list = []
for i, k in enumerate(names):
data = [r[i] for r in outputs]
bvals = np.array([z[-1] for z in data])[:,:,1]
b_stds = np.sqrt(np.mean(bvals**2, -1))
offset_vec = 0.
errs = np.array([z[2] for z in data]) - offset_vec
errs = 10**(errs.ravel()/-10)
plt.scatter(b_stds.ravel(), errs, label=legend_map[k], marker='o', alpha=.75, s=25, c=[colors_k[i]])
plt.grid(True, which='major', alpha=.3)
plt.yscale('log')
if plot_ind == 1:
under_loc = (0.16, 1.05)
over_loc = (0.88, 1.05)
under_end = .32
over_start = .76
elif plot_ind == 2:
under_loc = (0.22, 1.05)
over_loc = (0.84, 1.05)
under_end = .44
over_start = .69
else:
under_loc = (0.17, 1.05)
over_loc = (0.745, 1.05)
under_end = .34
over_start = .5
props = dict(boxstyle='round,pad=0.35,rounding_size=.1', facecolor='white', edgecolor='gray', linewidth=.3, alpha=0.95)
plt.text(*under_loc, 'Underfitting', horizontalalignment='center',
verticalalignment='center', transform=ax.transAxes, fontsize=18)
plt.text(*over_loc, 'Overfitting', horizontalalignment='center',
verticalalignment='center', transform=ax.transAxes, fontsize=18)
kwargs = dict(transform=ax.transAxes, linewidth=4, color='k', clip_on=False)
ax.plot((0.005, under_end), (.99, .99), **kwargs)
ax.plot((over_start, .995), (.99, .99), **kwargs)
plt.hlines(errs_dense, 0, 2**11, alpha=.5, linestyles='--', label='Dense')
plt.fill_between([0,2**11], errs_dense-errs_dense_std, errs_dense+errs_dense_std, color='k', alpha=.05)
ax.set_xscale('log', basex=2)
plt.xlabel('Standard deviation of sampled $b_i$')
plt.xlim((0,2**10))
if plot_ind == 1:
plt.ylabel('Mean squared error' )
plt.title(fr'({letters[plot_ind-1]}) Data sampled from $1/f^' + r'{' + fr'{data_power}' + r'}$', y=-.43)
ax.set_yticklabels([], minor=True)
if plot_ind==3:
plt.legend(loc='center left', bbox_to_anchor=(1,.5), handlelength=1)
plt.savefig('fig_1d_sweeps.pdf', bbox_inches='tight', pad_inches=0)
plt.show()
```
# Make supplement figure
```
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
params = {'legend.fontsize': 22,
'axes.labelsize': 22,
'axes.titlesize': 26,
'xtick.labelsize':18,
'ytick.labelsize':18}
pylab.rcParams.update(params)
matplotlib.rcParams['mathtext.fontset'] = 'cm'
matplotlib.rcParams['mathtext.rm'] = 'serif'
plt.rcParams["font.family"] = "cmr10"
colors_k = np.array([[0.8872, 0.4281, 0.1875],
[0.8136, 0.6844, 0.0696],
[0.2634, 0.6634, 0.4134],
[0.0943, 0.5937, 0.8793],
[0.3936, 0.2946, 0.6330],
[0.7123, 0.2705, 0.3795]])
legend_map = {
'gaussian' : "Gaussian",
'unif' : "Uniform",
'power1' : r"$1/f$",
'laplace' : "Laplacian",
}
markers = {
'gaussian' : "x",
'unif' : "D",
'power1' : "s",
'laplace' : "v",
}
plt.figure(figsize=(20,4))
plot_ind = 0
letters = ['a','b','c']
for data_power, outputs, outputs_dense in zip(data_powers, outputs_meta, dense_meta):
    plot_ind += 1
    if plot_ind == 4:
        continue
    plt.subplot(1, 3, plot_ind)
    ax = plt.gca()
    marker_list = []
    for i, k in enumerate(names):
        data = [r[i] for r in outputs]
        bvals = np.array([z[-1] for z in data])[:,:,1]
        b_stds = np.sqrt(np.mean(bvals**2, -1))  #.mean(-1)
        offset_vec = 0.
        errs_train = np.array([z[1] for z in data]) - offset_vec
        errs_train = 10**(errs_train.ravel()/-10)
        errs = np.array([z[2] for z in data]) - offset_vec
        errs = 10**(errs.ravel()/-10)
        plt.scatter(b_stds.ravel(), errs, label=legend_map[k], marker=markers[k], alpha=.75, s=25, c=[colors_k[0]])
        if i == 0:
            plt.scatter(b_stds.ravel(), errs_train, label='Training', marker=markers[k], alpha=.75, s=25, c=[colors_k[3]])
        else:
            plt.scatter(b_stds.ravel(), errs_train, marker=markers[k], alpha=.75, s=25, c=[colors_k[3]])
    plt.grid(True, which='major', alpha=.3)
    plt.yscale('log')
    ax.set_xscale('log', base=2)  # 'basex' was renamed 'base' in newer Matplotlib
    plt.xlabel('Standard deviation of sampled $b_i$')
    plt.xlim((0,2**10))
    if plot_ind == 1:
        plt.ylabel('Mean squared error')
    plt.title(fr'({letters[plot_ind-1]}) Data sampled from $\alpha={data_power}$', y=-.43)
    ax.set_yticklabels([], minor=True)
    if plot_ind == 3:
        custom_lines = [Line2D([], [], marker=markers[names[0]], color='gray', markersize=10, linestyle='None'),
                        Line2D([], [], marker=markers[names[1]], color='gray', markersize=10, linestyle='None'),
                        Line2D([], [], marker=markers[names[2]], color='gray', markersize=10, linestyle='None'),
                        Line2D([], [], marker=markers[names[3]], color='gray', markersize=10, linestyle='None'),
                        Line2D([], [], marker='o', color=colors_k[0], markersize=15, linestyle='None'),
                        Line2D([], [], marker='o', color=colors_k[3], markersize=15, linestyle='None')]
        ax.legend(custom_lines, ['Gaussian', 'Uniform', 'Uniform log', 'Laplacian', 'Test', 'Train'], loc='center left', bbox_to_anchor=(1,.5), ncol=1, framealpha=.95, handlelength=1.6)
        # plt.legend(loc='center left', bbox_to_anchor=(1,.5), handlelength=1)
plt.savefig('fig_1d_sweeps_supp.pdf', bbox_inches='tight', pad_inches=0)
plt.show()
```
<a href="https://colab.research.google.com/github/Shantanu9326/Data-Science-Portfolio/blob/master/Customer_Segmentation_using_(Recency%2C_Frequency%2C_Monetary)RFM_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## U.K. Online Retail Dataset using K-Means Clustering
**Overview**<br>
Online Retail is a transnational dataset containing all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based, registered non-store online retailer. The company mainly sells unique all-occasion gifts, and many of its customers are wholesalers. We will use this transactional dataset to build an RFM (Recency, Frequency, Monetary) analysis, apply k-Means clustering, and choose the best set of customers. You can read more about the dataset here: [https://archive.ics.uci.edu/ml/datasets/Online%20Retail]
```
#Importing Libraries
import pandas as pd
# For Visualisation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# To Scale our data
from sklearn.preprocessing import scale
# To perform KMeans clustering
from sklearn.cluster import KMeans
# To perform Hierarchical clustering
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
from scipy.cluster.hierarchy import cut_tree
#Running or Importing .py Files with Google Colab
from google.colab import drive
drive.mount('/content/drive/')
```
### Reading the Data Set
```
#reading Dataset
retail = pd.read_csv("/content/drive/My Drive/app/OnlineRetail.csv", sep = ',',encoding = "ISO-8859-1", header= 0)
retail
# parse date
retail['InvoiceDate'] = pd.to_datetime(retail['InvoiceDate'], format = "%d-%m-%Y %H:%M")
```
### Data quality check and cleaning
```
# Let's look top 5 rows
retail.head()
#Sanity Check
print(retail.shape)
print(retail.info())
retail.describe()
#Handling Missing Values
#Na Handling
retail.isnull().values.any()
retail.isnull().values.sum()
retail.isnull().sum()*100/retail.shape[0]
```
We can't impute the missing customer IDs, so we drop those rows.
```
#dropping the NA cells
order_wise = retail.dropna()
#Sanity check
order_wise.shape
order_wise.isnull().sum()
```
### Extracting R (Recency), F (Frequency), M (Monetary) columns from the data that we imported
```
#RFM implementation
# Extracting amount by multiplying quantity and unit price and saving the data into amount variable.
amount = pd.DataFrame(order_wise.Quantity * order_wise.UnitPrice, columns = ["Amount"])
amount.head()
```
#### Monetary Value
```
#merging amount in order_wise
order_wise = pd.concat(objs = [order_wise, amount], axis = 1, ignore_index = False)
#Monetary Function
# Finding total amount spent per customer
monetary = order_wise.groupby("CustomerID").Amount.sum()
monetary = monetary.reset_index()
monetary.head()
```
#### Frequency Value
How frequently the customer bought items in the last year.
```
#Frequency function
frequency = order_wise[['CustomerID', 'InvoiceNo']]
# Getting the count of orders made by each customer based on customer ID.
k = frequency.groupby("CustomerID").InvoiceNo.count()
k = pd.DataFrame(k)
k = k.reset_index()
k.columns = ["CustomerID", "Frequency"]
k.head()
```
##### Merging Amount and Frequency columns
```
#creating master dataset
master = monetary.merge(k, on = "CustomerID", how = "inner")
master.head()
```
### Recency Value
```
#Generating recency function
# Filtering data for customerid and invoice_date
recency = order_wise[['CustomerID','InvoiceDate']]
# Finding max date
maximum = max(recency.InvoiceDate)
# Adding one more day to the max date, so the most recent purchase gets a recency of 1 rather than 0.
maximum = maximum + pd.DateOffset(days=1)
recency['diff'] = maximum - recency.InvoiceDate
recency.head()
# recency by customerid
a = recency.groupby('CustomerID')
a
a.diff.min()
#Dataframe merging by recency
df = pd.DataFrame(recency.groupby('CustomerID').diff.min())
df = df.reset_index()
df.columns = ["CustomerID", "Recency"]
df.head()
```
### RFM combined DataFrame
```
#Combining all recency, frequency and monetary parameters
RFM = k.merge(monetary, on = "CustomerID")
RFM = RFM.merge(df, on = "CustomerID")
RFM.head()
```
### Outlier Treatment
K-Means is highly affected by outliers.
```
# outlier treatment for Amount
plt.boxplot(RFM.Amount)
Q1 = RFM.Amount.quantile(0.25)
Q3 = RFM.Amount.quantile(0.75)
IQR = Q3 - Q1
RFM = RFM[(RFM.Amount >= Q1 - 1.5*IQR) & (RFM.Amount <= Q3 + 1.5*IQR)] #Tukey's Test
# outlier treatment for Frequency
plt.boxplot(RFM.Frequency)
Q1 = RFM.Frequency.quantile(0.25)
Q3 = RFM.Frequency.quantile(0.75)
IQR = Q3 - Q1
RFM = RFM[(RFM.Frequency >= Q1 - 1.5*IQR) & (RFM.Frequency <= Q3 + 1.5*IQR)] #Tukey's Test
# outlier treatment for Recency
plt.boxplot(RFM.Recency)
Q1 = RFM.Recency.quantile(0.25)
Q3 = RFM.Recency.quantile(0.75)
IQR = Q3 - Q1
RFM = RFM[(RFM.Recency >= Q1 - 1.5*IQR) & (RFM.Recency <= Q3 + 1.5*IQR)] #Tukey's Test
RFM.head(20)
```
### Scaling the RFM data
```
# standardise all parameters
RFM_norm1 = RFM.drop("CustomerID", axis=1)
RFM_norm1.Recency = RFM_norm1.Recency.dt.days
from sklearn.preprocessing import StandardScaler
standard_scaler = StandardScaler()
RFM_norm1 = standard_scaler.fit_transform(RFM_norm1)
RFM_norm1 = pd.DataFrame(RFM_norm1)
RFM_norm1.columns = ['Frequency','Amount','Recency']
RFM_norm1.head()
```
## Hopkins Statistics:
The Hopkins statistic indicates the cluster tendency of a dataset: in other words, how well the data can be clustered.
- If the value is between {0.01, ...,0.3}, the data is regularly spaced.
- If the value is around 0.5, it is random.
- If the value is between {0.7, ..., 0.99}, it has a high tendency to cluster.
Some useful links for understanding the Hopkins statistic:
- [WikiPedia](https://en.wikipedia.org/wiki/Hopkins_statistic)
- [Article](http://www.sthda.com/english/articles/29-cluster-validation-essentials/95-assessing-clustering-tendency-essentials/)
```
from sklearn.neighbors import NearestNeighbors
from random import sample
from numpy.random import uniform
import numpy as np
from math import isnan
def hopkins(X):
    d = X.shape[1]
    #d = len(vars) # columns
    n = len(X) # rows
    m = int(0.1 * n)
    nbrs = NearestNeighbors(n_neighbors=1).fit(X.values)
    rand_X = sample(range(0, n, 1), m)
    ujd = []
    wjd = []
    for j in range(0, m):
        u_dist, _ = nbrs.kneighbors(uniform(np.amin(X,axis=0),np.amax(X,axis=0),d).reshape(1, -1), 2, return_distance=True)
        ujd.append(u_dist[0][1])
        w_dist, _ = nbrs.kneighbors(X.iloc[rand_X[j]].values.reshape(1, -1), 2, return_distance=True)
        wjd.append(w_dist[0][1])
    H = sum(ujd) / (sum(ujd) + sum(wjd))
    if isnan(H):
        print(ujd, wjd)
        H = 0
    return H
hopkins(RFM_norm1)
```
## K-Means with some K
```
# Kmeans with K=5
model_clus5 = KMeans(n_clusters = 5, max_iter=50)
model_clus5.fit(RFM_norm1)
```
## Silhouette Analysis
$$\text{silhouette score}=\frac{p-q}{max(p,q)}$$
$p$ is the mean distance to the points in the nearest cluster that the data point is not a part of
$q$ is the mean intra-cluster distance to all the points in its own cluster.
* The value of the silhouette score lies between -1 and 1.
* A score closer to 1 indicates that the data point is very similar to other data points in the cluster,
* A score closer to -1 indicates that the data point is not similar to the data points in its cluster.
```
from sklearn.metrics import silhouette_score
sse_ = []
for k in range(2, 15):
    kmeans = KMeans(n_clusters=k).fit(RFM_norm1)
    sse_.append([k, silhouette_score(RFM_norm1, kmeans.labels_)])
plt.plot(pd.DataFrame(sse_)[0], pd.DataFrame(sse_)[1],'r');
```
## Sum of Squared Distances
```
# sum of squared distances
ssd = []
for num_clusters in list(range(1,21)):
    model_clus = KMeans(n_clusters = num_clusters, max_iter=50)
    model_clus.fit(RFM_norm1)
    ssd.append(model_clus.inertia_)
plt.plot(ssd,'r')
# analysis of clusters formed
RFM.index = pd.RangeIndex(len(RFM.index))
RFM_km = pd.concat([RFM, pd.Series(model_clus5.labels_)], axis=1)
RFM_km.columns = ['CustomerID', 'Frequency', 'Amount', 'Recency', 'ClusterID']
RFM_km.Recency = RFM_km.Recency.dt.days
km_clusters_amount = pd.DataFrame(RFM_km.groupby(["ClusterID"]).Amount.mean())
km_clusters_frequency = pd.DataFrame(RFM_km.groupby(["ClusterID"]).Frequency.mean())
km_clusters_recency = pd.DataFrame(RFM_km.groupby(["ClusterID"]).Recency.mean())
df = pd.concat([pd.Series([0,1,2,3,4]), km_clusters_amount, km_clusters_frequency, km_clusters_recency], axis=1)
df.columns = ["ClusterID", "Amount_mean", "Frequency_mean", "Recency_mean"]
df.head()
sns.barplot(x=df.ClusterID, y=df.Amount_mean)
sns.barplot(x=df.ClusterID, y=df.Frequency_mean)
sns.barplot(x=df.ClusterID, y=df.Recency_mean)
```
<hr>
## Hierarchical clustering
```
#hierarchical clustering
mergings = linkage(RFM_norm1, method = "single", metric='euclidean')
dendrogram(mergings)
plt.show()
mergings = linkage(RFM_norm1, method = "complete", metric='euclidean')
dendrogram(mergings)
plt.show()
clusterCut = pd.Series(cut_tree(mergings, n_clusters = 5).reshape(-1,))
RFM_hc = pd.concat([RFM, clusterCut], axis=1)
RFM_hc.columns = ['CustomerID', 'Frequency', 'Amount', 'Recency', 'ClusterID']
#summarise
RFM_hc.Recency = RFM_hc.Recency.dt.days
km_clusters_amount = pd.DataFrame(RFM_hc.groupby(["ClusterID"]).Amount.mean())
km_clusters_frequency = pd.DataFrame(RFM_hc.groupby(["ClusterID"]).Frequency.mean())
km_clusters_recency = pd.DataFrame(RFM_hc.groupby(["ClusterID"]).Recency.mean())
df = pd.concat([pd.Series([0,1,2,3,4]), km_clusters_amount, km_clusters_frequency, km_clusters_recency], axis=1)
df.columns = ["ClusterID", "Amount_mean", "Frequency_mean", "Recency_mean"]
df.head()
#plotting barplot
sns.barplot(x=df.ClusterID, y=df.Amount_mean)
sns.barplot(x=df.ClusterID, y=df.Frequency_mean)
sns.barplot(x=df.ClusterID, y=df.Recency_mean)
```
#END
```
# Setup to copy github repo to Google drive and import from colab.
# Based on https://zerowithdot.com/colab-github-workflow/
from google.colab import drive
import os
ROOT = '/content/drive' # Default for the drive.
PROJ = 'My Drive/dsa_repo' # Path to project on drive.
GIT_USERNAME = "PraChetit" # Replace with yours.
GIT_REPOSITORY = "dsa_repo"
drive.mount(ROOT) # Mount the drive at /content/drive.
PROJECT_PATH = os.path.join(ROOT, PROJ)
# !mkdir "{PROJECT_PATH}" # In case it is not already created.
!git clone https://github.com/PraChetit/dsa_repo.git
!mv ./dsa_repo/* "{PROJECT_PATH}"
!rm -rf ./dsa_repo
!rsync -aP --exclude=data/ "{PROJECT_PATH}"/* ./
```
# Comparing sorting algorithms
In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order.
This notebook experimentally compares the sorting algorithms implemented in this repository, and illustrates some good practices for organizing and plotting such experimental data.
```
import dataclasses
import math
import matplotlib.pyplot as plt
import pandas as pd
import random
import seaborn as sns
import time
from typing import Callable
from algorithms.sorting.bubble import bubble_sort
from algorithms.sorting.comb import comb_sort
from algorithms.sorting.gnome import gnome_sort
from algorithms.sorting.heap import heap_sort
from algorithms.sorting.insertion import insertion_sort
from algorithms.sorting.merge import merge_sort
from algorithms.sorting.quick import quick_sort
from algorithms.sorting.shell import shell_sort
from algorithms.sorting.tree import tree_sort
```
### Preparation
We compare two broad classes of algorithms: "slow" and "fast".
The computational complexity of slow algorithms is $ O(n^2) $, while the complexity of fast algorithms is $ O(n \log(n)) $... with the exception of Shell sort, which is a bit [more complicated](https://en.wikipedia.org/wiki/Shellsort#Computational_complexity) but asymptotically slower, yet preferred in certain applications.
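To get a rough feel for the gap between those two complexity classes, here is a quick back-of-the-envelope comparison of operation counts at a few input sizes. The numbers are purely illustrative (not measured timings from this repository):

```python
import math

# Ratio of n^2 work to n*log2(n) work at a few input sizes;
# illustrative growth-rate comparison, not measured sort times.
for n in [100, 10_000, 1_000_000]:
    ratio = n**2 / (n * math.log2(n))
    print(f"n={n:>9,}: n^2 costs ~{ratio:,.0f}x more than n*log2(n)")
```

At a million elements, the quadratic algorithms are doing tens of thousands of times more work, which is why we will stop running them on the larger inputs below.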
```
REPETITIONS=10 # Number of independent repetitions of each trial.
# Slow sorts.
BUBBLE = ('bubble', bubble_sort)
COMB = ('comb', comb_sort)
GNOME = ('gnome', gnome_sort)
INSERT = ('insertion', insertion_sort)
SLOW = [BUBBLE, COMB, GNOME, INSERT]
# Fast sorts.
HEAP = ('heap', heap_sort)
MERGE = ('merge', merge_sort)
QUICK = ('quick', quick_sort)
SHELL = ('shell', shell_sort)
TREE = ('tree', tree_sort)
FAST = [MERGE, QUICK, SHELL, HEAP, TREE]
@dataclasses.dataclass(frozen=True)
class Setup:
    """Container for experiment setup."""
    size: int
    algs: list
setup_1 = Setup(100, SLOW + FAST)
setup_2 = Setup(300, SLOW + FAST)
setup_3 = Setup(1000, SLOW + FAST)
setup_4 = Setup(3000, SLOW + FAST)
setup_5 = Setup(10000, SLOW + FAST)
# Not running slow algorithms on larger inputs.
setup_6 = Setup(30000, FAST)
setup_7 = Setup(100000, FAST)
setup_8 = Setup(300000, FAST)
setups = [setup_1, setup_2, setup_3, setup_4, setup_5, setup_6, setup_7, setup_8]
# setup_0 = Setup(100, [STOOGE])
```
### Running experiments
We use pandas `DataFrame` to organize the results from experiments. This is useful for data exploration afterwards, and for compatibility with other libraries for visualizing data.
We will create a randomly shuffled array as the experimental input. Note that other inputs would also be interesting to look at, for instance partially shuffled, reverse ordered, etc. For simplicity, we focus only on this one scenario.
The interesting part is comparing the algorithms relative to each other, not necessarily the value of an absolute metric, such as time.
```
# Some utilities for running experiments.
def create_test_array(size: int, seed: int):
    """Given `seed`, returns a randomly ordered array of `size` elements."""
    array = list(range(size))
    random.seed(seed)
    random.shuffle(array)
    return array

def run(sorting_alg: Callable, size: int, seed: int):
    """Sorts an array and returns time required to do so."""
    array = create_test_array(size, seed)
    start_t = time.time()
    sorting_alg(array)
    end_t = time.time()
    return end_t - start_t
def experiment(df: pd.DataFrame, setup: Setup, repetitions: int):
    """Runs an experiment for a number of algorithms."""
    for round in range(repetitions):
        random.seed()  # Resets seeding.
        seed = random.randrange(2**32)
        for name, sorting_alg in setup.algs:
            t = run(sorting_alg, setup.size, seed)
            # DataFrame.append was removed in pandas 2.0; build a one-row
            # frame and concatenate instead.
            df = pd.concat([df, pd.DataFrame([{'Algorithm': name,
                                               'Round': round,
                                               'Array_size': setup.size,
                                               'Time': t}])],
                           ignore_index=True)
    return df
# Run the experiment.
df = pd.DataFrame()
for setup in setups:
    df = experiment(df, setup, REPETITIONS)
```
# Analysis
Create some helper methods for slicing parts of the `DataFrame` we created.
```
def filter_slow_algs(df):
    return df[df['Algorithm'].isin(list(zip(*SLOW))[0])]

def filter_fast_algs(df):
    return df[df['Algorithm'].isin(list(zip(*FAST))[0])]
```
#### Slow algorithms
Below, we plot the array size vs. the time it takes to sort the array in several ways.
```
plt.rcParams["figure.figsize"] = (15,4)
# Create the new column in DataFrame.
df['Time_by_n2'] = df.apply(lambda r: r.Time / (r.Array_size * r.Array_size),
axis=1)
plt.subplots(1, 3)
plt.subplot(1, 3, 1)
sns.lineplot(data=filter_slow_algs(df),
x='Array_size', y='Time', hue='Algorithm')
plt.subplot(1, 3, 2)
sns.lineplot(data=filter_slow_algs(df),
x='Array_size', y='Time_by_n2', hue='Algorithm', legend=False)
plt.ylabel('Time / n^2')
plt.subplot(1, 3, 3)
sns.lineplot(data=filter_slow_algs(df),
x='Array_size', y='Time_by_n2', hue='Algorithm', legend=False)
plt.xscale('log')
plt.ylabel('Time / n^2')
plt.show()
```
Plotting directly against time (left plot) is not as insightful as normalizing the y-axis by $n^2$ (middle and right plots) -- recall that the slow algorithms have a computational complexity of $O(n^2)$. Note also that, given how we chose the experiment sizes, it is better to use a log scale on the x-axis (right plot), so the data are not squeezed into the left-hand side of the plot.
Given the repetitions of each experiment in the data frame ('Round' column), the `seaborn` library will by default include a visualization of the variance of the values across the repetitions.
In the latter two plots, the relationship seems to be nearly constant for all algorithms, which is as expected given the theoretical upper bounds.
Insertion sort seems to be the fastest, with little variance overall.
The other three algorithms have similar speed, with higher variance for small inputs, but the variance decreases as inputs get larger.
#### Fast algorithms
A similar comparison for fast algorithms, but we normalize by $n\log(n)$.
```
plt.rcParams["figure.figsize"] = (15,4)
# Create the new column in DataFrame.
df['Time_by_nlogn'] = df.apply(
lambda r: r.Time / (r.Array_size * math.log(r.Array_size)),
axis=1)
plt.subplots(1, 3)
plt.subplot(1, 3, 1)
sns.lineplot(data=filter_fast_algs(df),
x='Array_size', y='Time', hue='Algorithm')
plt.subplot(1, 3, 2)
sns.lineplot(data=filter_fast_algs(df),
x='Array_size', y='Time_by_nlogn', hue='Algorithm', legend=False)
plt.ylabel('Time / n*log(n)')
plt.subplot(1, 3, 3)
sns.lineplot(data=filter_fast_algs(df),
x='Array_size', y='Time_by_nlogn', hue='Algorithm', legend=False)
plt.xscale('log')
plt.ylabel('Time / n*log(n)')
plt.show()
```
Heapsort seems to be overall the slowest. The heap data structure is more generally useful, and if used directly for sorting (as done here), it requires more comparisons than other fast sorts typically need. A variant of this sort, [bottom-up heapsort](https://en.wikipedia.org/wiki/Heapsort#Bottom-up_heapsort), would likely be faster.
Treesort is doing something strange. Since this is based on a naive implementation of the binary search tree, there is a chance of being "unlucky": the input array can produce a very unbalanced tree in the process. This would likely be improved by a better implementation of the tree. An interesting regime seems to be around an array size of `3000`, which seems to result in high variance in time. Not sure why, but I wonder if something could be said about this in theory.
The other three algorithms seem to perform similarly. Let's look at them closer.
```
plt.rcParams["figure.figsize"] = (5,4)
sns.lineplot(data=df[df['Algorithm'].isin(list(zip(*[MERGE, QUICK, SHELL]))[0])],
x='Array_size', y='Time_by_nlogn', hue='Algorithm')
plt.ylabel('Time / n*log(n)')
plt.xscale('log')
```
Shell sort seems to be consistently increasing with array size. This is as expected, as its asymptotic complexity is not $O(n\log(n))$. However, it can clearly be very effective in the non-asymptotic regimes.
The other two sorts, merge and quick, seem to have some constant overhead for small array sizes, which results in lines with an initial negative slope, but they appear roughly constant for larger array sizes.
The implemented quick sort is its basic variant. There are numerous ways to make quick sort more effective, especially when selecting the pivot, and I expect the performance could be improved further.
#### Log-log plot
Assume we do not know about the expected relationship between array size and the time to sort the array ($O(n^2)$ and $O(n\log(n))$). Instead, we want to read it from the experimental data.
We can do this using log-log plots, where relationships of the form $y = ax^b$ will be linear with slope $b$.
From the plot below, it is immediately clear that the algorithms fall into two distinct groups with qualitatively different relationship between the array size and the time it takes to sort them.
```
plt.rcParams["figure.figsize"] = (5,5)
sns.lineplot(data=df, x='Array_size', y='Time', hue='Algorithm')
plt.yscale('log')
plt.xscale('log')
```
The complexity of the slow algorithms should be $O(n^2)$, which would be reflected in the log-log plot by a line with slope $2$. This seems to be roughly the case for the slow algorithms.
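The slope-reading idea can be sanity-checked on synthetic data. Here is a small sketch (the constants are arbitrary, standing in for measured timings) that recovers the exponent of a power law with `numpy.polyfit`:

```python
import numpy as np

# Synthetic "timings" following t = a * n^b with b = 2, standing in for
# the measured sort times; the constant 3e-7 is arbitrary.
n = np.array([100, 300, 1000, 3000, 10000], dtype=float)
t = 3e-7 * n**2

# In log-log space, t = a * n^b becomes log t = log a + b * log n,
# so a straight-line fit recovers the exponent b as the slope.
slope, intercept = np.polyfit(np.log(n), np.log(t), 1)
print(round(slope, 2))  # → 2.0
```

The same fit applied to the real `Time` column of our `DataFrame` would give empirical estimates of the exponents for each algorithm.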
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Plot style
sns.set()
%pylab inline
pylab.rcParams['figure.figsize'] = (4, 4)
```

```
%%html
<style>
.pquote {
text-align: left;
margin: 40px 0 40px auto;
width: 70%;
font-size: 1.5em;
font-style: italic;
display: block;
line-height: 1.3em;
color: #5a75a7;
font-weight: 600;
border-left: 5px solid rgba(90, 117, 167, .1);
padding-left: 6px;
}
.notes {
font-style: italic;
display: block;
margin: 40px 10%;
}
img + em {
text-align: center;
display: block;
color: gray;
font-size: 0.9em;
font-weight: 600;
}
</style>
```
$$
\newcommand\norm[1]{\left\lVert#1\right\rVert}
\DeclareMathOperator{\Tr}{Tr}
\newcommand\bs[1]{\boldsymbol{#1}}
$$
<span class='notes'>
This content is part of a series following the chapter 2 on linear algebra from the [Deep Learning Book](http://www.deeplearningbook.org/) by Goodfellow, I., Bengio, Y., and Courville, A. (2016). It aims to provide intuitions/drawings/python code on mathematical theories and is constructed as my understanding of these concepts. You can check the syllabus in the [introduction post](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/).
</span>
# Introduction
This chapter is very light! I can assure you that you will read it in 1 minute! It is nice after the last two chapters that were quite big! We will see what is the Trace of a matrix. It will be needed for the last chapter on the Principal Component Analysis (PCA).
# 2.10 The Trace Operator
<img src="images/trace-matrix.png" width="200" alt="Calculating the trace of a matrix" title="Calculating the trace of a matrix">
<em>The trace of a matrix</em>
The trace is the sum of all values in the diagonal of a square matrix.
$$
\bs{A}=
\begin{bmatrix}
2 & 9 & 8 \\\\
4 & 7 & 1 \\\\
8 & 2 & 5
\end{bmatrix}
$$
$$
\mathrm{Tr}(\bs{A}) = 2 + 7 + 5 = 14
$$
NumPy provides the function `trace()` to calculate it:
```
A = np.array([[2, 9, 8], [4, 7, 1], [8, 2, 5]])
A
A_tr = np.trace(A)
A_tr
```
Goodfellow et al. explain that the trace can be used to specify the Frobenius norm of a matrix (see [2.5](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.5-Norms/)). The Frobenius norm is the equivalent of the $L^2$ norm for matrices. It is defined by:
$$
\norm{\bs{A}}_F=\sqrt{\sum_{i,j}A^2_{i,j}}
$$
Take the square of all elements and sum them. Take the square root of the result. This norm can also be calculated with:
$$
\norm{\bs{A}}_F=\sqrt{\Tr({\bs{AA}^T})}
$$
We can check this. The first way to compute the norm can be done with the simple command `np.linalg.norm()`:
```
np.linalg.norm(A)
```
The Frobenius norm of $\bs{A}$ is 17.549928774784245.
With the trace the result is identical:
```
np.sqrt(np.trace(A.dot(A.T)))
```
Since the transposition of a matrix doesn't change the diagonal, the trace of the matrix is equal to the trace of its transpose:
$$
\Tr(\bs{A})=\Tr(\bs{A}^T)
$$
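A quick numerical check of this property with the same matrix:

```python
import numpy as np

A = np.array([[2, 9, 8], [4, 7, 1], [8, 2, 5]])
# Transposing leaves the diagonal untouched, so the traces agree.
print(np.trace(A), np.trace(A.T))  # → 14 14
```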
## Trace of a product
$$
\Tr(\bs{ABC}) = \Tr(\bs{CAB}) = \Tr(\bs{BCA})
$$
### Example 1.
Let's see an example of this property.
$$
\bs{A}=
\begin{bmatrix}
4 & 12 \\\\
7 & 6
\end{bmatrix}
$$
$$
\bs{B}=
\begin{bmatrix}
1 & -3 \\\\
4 & 3
\end{bmatrix}
$$
$$
\bs{C}=
\begin{bmatrix}
6 & 6 \\\\
2 & 5
\end{bmatrix}
$$
```
A = np.array([[4, 12], [7, 6]])
B = np.array([[1, -3], [4, 3]])
C = np.array([[6, 6], [2, 5]])
np.trace(A.dot(B).dot(C))
np.trace(C.dot(A).dot(B))
np.trace(B.dot(C).dot(A))
```
$$
\bs{ABC}=
\begin{bmatrix}
360 & 432 \\\\
180 & 171
\end{bmatrix}
$$
$$
\bs{CAB}=
\begin{bmatrix}
498 & 126 \\\\
259 & 33
\end{bmatrix}
$$
$$
\bs{BCA}=
\begin{bmatrix}
-63 & -54 \\\\
393 & 594
\end{bmatrix}
$$
$$
\Tr(\bs{ABC}) = \Tr(\bs{CAB}) = \Tr(\bs{BCA}) = 531
$$
<span class='notes'>
Feel free to drop me an email or a comment. The syllabus of this series can be found [in the introduction post](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/). All the notebooks can be found on [Github](https://github.com/hadrienj/deepLearningBook-Notes).
</span>
# References
[Trace (linear algebra) - Wikipedia](https://en.wikipedia.org/wiki/Trace_(linear_algebra))
[Numpy Trace operator](https://docs.scipy.org/doc/numpy/reference/generated/numpy.trace.html)
# 2D Rosenbrock, constrained
Let's do another example with the Rosenbrock problem that we just solved, this time adding a constraint.
Recall that the unconstrained optimum we found occurred at $x=1$, $y=1$.
What if we want to know the minimum function value that is still within the unit circle? Clearly, $(1, 1)$ is not inside the unit circle, so we expect to find a different answer. Here's what the problem looks like, constrained to be inside the unit circle:
```
def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2
### Don't worry about this code block; this is just here to visualize the function.
from aerosandbox.tools.pretty_plots import plt, show_plot, contour, mpl, equal
import aerosandbox.numpy as np
fig, ax = plt.subplots(figsize=(7.5, 6), dpi=200)
xi, eta = np.meshgrid(np.linspace(-1, 1, 300), np.linspace(-1, 1, 300))
X, Y = xi * np.sqrt(1 - eta ** 2 / 2), eta * np.sqrt(1 - xi ** 2 / 2)
rng = (1e-1, 1e3)
_, _, cbar = contour(X, Y, rosenbrock(X, Y),
levels=np.geomspace(*rng, 21), norm=mpl.colors.LogNorm(*rng, clip=True),
linelabels_fontsize=8, cmap=plt.get_cmap("viridis"), zorder=3)
equal()
cbar.ax.yaxis.set_major_locator(mpl.ticker.LogLocator()); cbar.ax.yaxis.set_major_formatter(mpl.ticker.LogFormatter())
show_plot("The Rosenbrock Function", "$x$", "$y$")
```
Seems like the optimum is going to be some point vaguely near $(1,1)$, but loosely "projected" back onto the constraint boundary - roughly $(0.8, 0.6)$.
----
Let's see what we get when we optimize this new constrained problem with the `Opti` stack.
First, we set up the optimization environment and variables:
```
import aerosandbox as asb
opti = asb.Opti()
### Define optimization variables
x = opti.variable(init_guess=1) # Let's change our initial guess to the value we found before, (1, 1).
y = opti.variable(init_guess=1) # As above, change 0 -> 1
```
Next, we define the objective. Recall that we defined `rosenbrock()` above - we'll use that here!
The lesson: you can use functions, classes, etc. arbitrarily to simplify and organize your code. Don't forget that this is Python, with all its bells and whistles! :)
```
### Define objective
opti.minimize(rosenbrock(x, y))
```
Now, we set up our constraint. Note the natural syntax to add a constraint with `opti.subject_to()`:
```
### Define constraint
r = (x ** 2 + y ** 2) ** 0.5 # r is the distance from the origin
opti.subject_to(
r <= 1 # Constrain the distance from the origin to be less than or equal to 1.
)
```
A few notes here:
* In continuous optimization, there is no practical difference between "less than" and "less than or equal to". (At least, not when we solve them numerically on a computer - this is not true for analytical solutions, but that's not relevant here.) So, use `<` or `<=`, whatever feels best to you.
* Our initial guess $(1, 1)$ is *infeasible* - it's outside of the unit circle. Not to worry, as initial guesses do not need to be feasible - this will solve just fine.
Let's continue and solve our problem:
```
# Optimize
sol = opti.solve()
```
Again, we get lots of info about how the solve went that will be useful later on. For now, let's see what our optimal solution looks like:
```
# Extract values at the optimum
x_opt = sol.value(x)
y_opt = sol.value(y)
# Print values
print(f"x = {x_opt}")
print(f"y = {y_opt}")
```
The optimal point that we've found is $(0.786, 0.618)$. Let's check that it lies on the unit circle.
```
r_opt = sol.value(r) # Note that sol.value() can be used to evaluate any object, not just design variables.
print(f"r = {r_opt}")
```
It does!
We could prove optimality here once again via hand calculations should we choose, but for now we'll take the optimizer's word for it. (The hand-calc process is a bit more complicated than before, but not bad. First, we would show that first-order optimality (i.e. $\nabla f(\vec{x}) = 0$) can't be satisfied anywhere in the feasible space, which would indicate that our inequality constraint is tight. Then, we would derive the KKT conditions for constrained optimality and solve for $x$, $y$, and the Lagrange multiplier associated with our constraint.)
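As a numerical sanity check on that KKT reasoning, here is a sketch that re-solves the same problem with SciPy's SLSQP (not AeroSandbox; the solver choice and starting point are assumptions) and verifies that, at the optimum, the objective gradient is parallel to the constraint normal:

```python
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad_f(v):
    x, y = v
    # Analytic gradient of the Rosenbrock function.
    return np.array([-2*(1 - x) - 400*x*(y - x**2), 200*(y - x**2)])

# Inequality constraint in SciPy's ">= 0" convention: 1 - x^2 - y^2 >= 0.
sol = minimize(f, x0=[0.5, 0.5], method='SLSQP',
               constraints=[{'type': 'ineq',
                             'fun': lambda v: 1 - v[0]**2 - v[1]**2}])
x_opt, y_opt = sol.x
normal = np.array([2*x_opt, 2*y_opt])  # gradient of x^2 + y^2
g = grad_f(sol.x)
# 2D "cross product"; a value near 0 means the gradients are parallel,
# which is exactly the KKT stationarity condition for an active constraint.
cross = g[0]*normal[1] - g[1]*normal[0]
print(x_opt, y_opt, cross)
```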
The `contextlib` module contains utilities for working with context managers and the with statement.
## Context Manager API
A context manager is responsible for a resource within a code block, possibly creating it when the block is entered and then cleaning it up after the block is exited. For example, files support the context manager API to make it easy to ensure they are closed after all reading or writing is done.
```
with open('tmp/pymotw.txt', 'wt') as f:
    f.write('contents go here')
```
A context manager is enabled by the with statement, and the API involves two methods. The `__enter__()` method is run when execution flow enters the code block inside the with. It returns an object to be used within the context. When execution flow leaves the with block, the `__exit__()` method of the context manager is called to clean up any resources being used.
```
class Context:
    def __init__(self):
        print('__init__()')

    def __enter__(self):
        print('__enter__()')
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print('__exit__()')

with Context():
    print('Doing work in the context')
```
The `__enter__()` method can return any object to be associated with a name specified in the as clause of the with statement. In this example, `Context` returns a new `WithinContext` object constructed from itself.
```
class WithinContext:
    def __init__(self, context):
        print('WithinContext.__init__({})'.format(context))

    def do_something(self):
        print('WithinContext.do_something()')

    def __del__(self):
        print('WithinContext.__del__')

class Context:
    def __init__(self):
        print('Context.__init__()')

    def __enter__(self):
        print('Context.__enter__()')
        return WithinContext(self)

    def __exit__(self, exc_type, exc_val, exc_tb):
        print('Context.__exit__()')

with Context() as c:
    c.do_something()
```
The value associated with the variable c is the object returned by `__enter__()`, which is not necessarily the Context instance created in the with statement.
The `__exit__()` method receives arguments containing details of any exception raised in the with block.
```
class Context:
    def __init__(self, handle_error):
        print('__init__({})'.format(handle_error))
        self.handle_error = handle_error

    def __enter__(self):
        print('__enter__()')
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print('__exit__()')
        print(' exc_type =', exc_type)
        print(' exc_val =', exc_val)
        print(' exc_tb =', exc_tb)
        return self.handle_error

with Context(True):
    raise RuntimeError('error message handled')

print()

with Context(False):
    raise RuntimeError('error message propagated')
```
If the context manager can handle the exception, `__exit__()` should return a true value to indicate that the exception does not need to be propagated. Returning false causes the exception to be re-raised after `__exit__()` returns.
## Context Managers as Function Decorators
The class ContextDecorator adds support to regular context manager classes to let them be used as function decorators as well as context managers.
```
import contextlib


class Context(contextlib.ContextDecorator):

    def __init__(self, how_used):
        self.how_used = how_used
        print('__init__({})'.format(how_used))

    def __enter__(self):
        print('__enter__({})'.format(self.how_used))
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print('__exit__({})'.format(self.how_used))


@Context('as decorator')
def func(message):
    print(message)


print()
with Context('as context manager'):
    print('Doing work in the context')

print()
func('Doing work in the wrapped function')
```
One difference with using the context manager as a decorator is that the value returned by `__enter__()` is not available inside the function being decorated, unlike when using with and as. Arguments passed to the decorated function are available in the usual way.
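This limitation can be seen directly with a small sketch (the `Resource` class here is hypothetical, not part of contextlib): the value returned by `__enter__()` can be bound with `as` in a with statement, but a decorated function has no equivalent hook.

```python
import contextlib


class Resource(contextlib.ContextDecorator):

    def __enter__(self):
        # This value is only reachable through "with ... as".
        return 'handle'

    def __exit__(self, *exc):
        return False


@Resource()
def worker():
    # There is no way to receive the 'handle' value in here.
    return 'worked'


with Resource() as r:
    got_via_as = r  # 'handle'

print(got_via_as, worker())
```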
## From Generator to Context Manager
Creating context managers the traditional way, by writing a class with `__enter__()` and `__exit__()` methods, is not difficult. But sometimes writing everything out fully is extra overhead for a trivial bit of context. In those sorts of situations, use the `contextmanager()` decorator to convert a **generator function** into a context **manager**.
```
import contextlib


@contextlib.contextmanager
def make_context():
    print('  entering')
    try:
        yield {}
    except RuntimeError as err:
        print('  ERROR:', err)
    finally:
        print('  exiting')


print('Normal:')
with make_context() as value:
    print('  inside with statement:', value)

print('\nHandled error:')
with make_context() as value:
    raise RuntimeError('showing example of handling an error')

print('\nUnhandled error:')
with make_context() as value:
    raise ValueError('this exception is not handled')
```
The generator should initialize the context, yield exactly one time, then clean up the context. The value yielded, if any, is bound to the variable in the as clause of the with statement. Exceptions from within the with block are re-raised inside the generator, so they can be handled there.
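The "yield exactly one time" rule is enforced by the protocol: a generator that yields a second time raises RuntimeError when the with block exits. A minimal sketch:

```python
import contextlib


@contextlib.contextmanager
def bad_context():
    yield 1
    yield 2  # a second yield violates the protocol


try:
    with bad_context():
        pass
except RuntimeError as err:
    print('RuntimeError:', err)
```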
The context manager returned by contextmanager() is derived from ContextDecorator, so it also works as a function decorator.
```
import contextlib


@contextlib.contextmanager
def make_context():
    print('  entering')
    try:
        # Yield control, but not a value, because any value
        # yielded is not available when the context manager
        # is used as a decorator.
        yield
    except RuntimeError as err:
        print('  ERROR:', err)
    finally:
        print('  exiting')


@make_context()
def normal():
    print('  inside with statement')


@make_context()
def throw_error(err):
    raise err


print('Normal:')
normal()

print('\nHandled error:')
throw_error(RuntimeError('showing example of handling an error'))

print('\nUnhandled error:')
throw_error(ValueError('this exception is not handled'))
```
## Closing Open Handles
The file class supports the context manager API directly, but some other objects that represent open handles do not. The example given in the standard library documentation for contextlib is the object returned from `urllib.request.urlopen()`. There are other legacy classes that use a `close()` method but do not support the context manager API. To ensure that a handle is closed, use `closing()` to create a context manager for it.
```
import contextlib


class Door:

    def __init__(self):
        print('  __init__()')
        self.status = 'open'

    def close(self):
        print('  close()')
        self.status = 'closed'


print('Normal Example:')
with contextlib.closing(Door()) as door:
    print('  inside with statement: {}'.format(door.status))
print('  outside with statement: {}'.format(door.status))

print('\nError handling example:')
try:
    with contextlib.closing(Door()) as door:
        print('  raising from inside with statement')
        raise RuntimeError('error message')
except Exception as err:
    print('  Had an error:', err)
```
## Ignoring Exceptions
It is frequently useful to ignore exceptions raised by libraries, because the error indicates that the desired state has already been achieved, or it can otherwise be ignored. The most common way to ignore exceptions is with a try:except statement with only a pass statement in the except block.
```
import contextlib


class NonFatalError(Exception):
    pass


def non_idempotent_operation():
    raise NonFatalError(
        'The operation failed because of existing state'
    )


try:
    print('trying non-idempotent operation')
    non_idempotent_operation()
    print('succeeded!')
except NonFatalError:
    pass

print('done')
```
The try:except form can be replaced with contextlib.suppress() to more explicitly suppress a class of exceptions happening anywhere in the with block.
```
import contextlib


class NonFatalError(Exception):
    pass


def non_idempotent_operation():
    raise NonFatalError(
        'The operation failed because of existing state'
    )


with contextlib.suppress(NonFatalError):
    print('trying non-idempotent operation')
    non_idempotent_operation()
    print('succeeded!')

print('done')
```
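A common practical use of `suppress()` is removing a file that may or may not exist, where "file not found" means the desired state has already been achieved. This sketch assumes a temporary path that was never created:

```python
import contextlib
import os
import tempfile

# A path inside a fresh temporary directory; the file itself is never created.
path = os.path.join(tempfile.mkdtemp(), 'maybe-missing.txt')

with contextlib.suppress(FileNotFoundError):
    os.remove(path)  # no error even though the file does not exist

print('still running')
```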
## Redirecting Output Streams
Poorly designed library code may write directly to sys.stdout or sys.stderr, without providing arguments to configure different output destinations. The `redirect_stdout()` and `redirect_stderr()` context managers can be used to capture output from functions like this, for which the source cannot be changed to accept a new output argument.
```
from contextlib import redirect_stdout, redirect_stderr
import io
import sys


def misbehaving_function(a):
    sys.stdout.write('(stdout) A: {!r}\n'.format(a))
    sys.stderr.write('(stderr) A: {!r}\n'.format(a))


capture = io.StringIO()
with redirect_stdout(capture), redirect_stderr(capture):
    misbehaving_function(5)

print(capture.getvalue())
```
## Dynamic Context Manager Stacks
Most context managers operate on one object at a time, such as a single file or database handle. In these cases, the object is known in advance and the code using the context manager can be built around that one object. In other cases, a program may need to create an unknown number of objects in a context, while wanting all of them to be cleaned up when control flow exits the context. ExitStack was created to handle these more dynamic cases.
An ExitStack instance maintains a stack data structure of cleanup callbacks. The callbacks are populated explicitly within the context, and any registered callbacks are called in the reverse order when control flow exits the context. The result is like having multiple nested with statements, except they are established dynamically.
### Stacking Context Managers
There are several ways to populate the ExitStack. This example uses enter_context() to add a new context manager to the stack.
```
import contextlib


@contextlib.contextmanager
def make_context(i):
    print('{} entering'.format(i))
    yield {}
    print('{} exiting'.format(i))


def variable_stack(n, msg):
    with contextlib.ExitStack() as stack:
        for i in range(n):
            stack.enter_context(make_context(i))
        print(msg)


variable_stack(2, 'inside context')
```
The context managers given to ExitStack are treated as though they are in a series of nested with statements. Errors that happen anywhere within the context propagate through the normal error handling of the context managers. These context manager classes illustrate the way errors propagate.
```
import contextlib


class Tracker:
    "Base class for noisy context managers."

    def __init__(self, i):
        self.i = i

    def msg(self, s):
        print('  {}({}): {}'.format(
            self.__class__.__name__, self.i, s))

    def __enter__(self):
        self.msg('entering')


class HandleError(Tracker):
    "If an exception is received, treat it as handled."

    def __exit__(self, *exc_details):
        received_exc = exc_details[1] is not None
        if received_exc:
            self.msg('handling exception {!r}'.format(
                exc_details[1]))
        self.msg('exiting {}'.format(received_exc))
        # Return Boolean value indicating whether the exception
        # was handled.
        return received_exc


class PassError(Tracker):
    "If an exception is received, propagate it."

    def __exit__(self, *exc_details):
        received_exc = exc_details[1] is not None
        if received_exc:
            self.msg('passing exception {!r}'.format(
                exc_details[1]))
        self.msg('exiting')
        # Return False, indicating any exception was not handled.
        return False


class ErrorOnExit(Tracker):
    "Cause an exception."

    def __exit__(self, *exc_details):
        self.msg('throwing error')
        raise RuntimeError('from {}'.format(self.i))


class ErrorOnEnter(Tracker):
    "Cause an exception."

    def __enter__(self):
        self.msg('throwing error on enter')
        raise RuntimeError('from {}'.format(self.i))

    def __exit__(self, *exc_info):
        self.msg('exiting')
```
The examples using these classes are based around a version of variable_stack() that takes a list of context manager instances and uses them to construct an ExitStack, building up the overall context one by one. The examples below pass different context managers to explore the error handling behavior. First, the normal case of no exceptions.
```
# Redefine variable_stack() to take a list of context manager
# instances instead of a count, so different combinations of the
# Tracker classes can be passed in.
def variable_stack(context_managers, msg):
    with contextlib.ExitStack() as stack:
        for cm in context_managers:
            stack.enter_context(cm)
        print(msg)


print('No errors:')
variable_stack([
    HandleError(1),
    PassError(2),
], 'inside context')
```
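ExitStack can also register plain cleanup functions with `callback()`, which makes the reverse (LIFO) unwinding order described above easy to observe directly:

```python
import contextlib

order = []

with contextlib.ExitStack() as stack:
    for i in range(3):
        # Register order.append(i) to run when the block exits.
        stack.callback(order.append, i)

# Callbacks run in reverse registration order when the block exits.
print(order)  # [2, 1, 0]
```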
| github_jupyter |
```
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.13'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/babyweight/preproc; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
```
### Train on AI Platform
- Create a Python package
- Submit the code to AI Platform with gcloud
- babyweight/trainer/task.py
```
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from . import model
import tensorflow as tf
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--bucket',
        help = 'GCS path to data. We assume that data is in gs://BUCKET/babyweight/preproc/',
        required = True
    )
    parser.add_argument(
        '--output_dir',
        help = 'GCS location to write checkpoints and export models',
        required = True
    )
    parser.add_argument(
        '--batch_size',
        help = 'Number of examples to compute gradient over.',
        type = int,
        default = 512
    )
    parser.add_argument(
        '--job-dir',
        help = 'this model ignores this field, but it is required by gcloud',
        default = 'junk'
    )
    parser.add_argument(
        '--nnsize',
        help = 'Hidden layer sizes to use for DNN feature columns -- provide space-separated layers',
        nargs = '+',
        type = int,
        default = [128, 32, 4]
    )
    parser.add_argument(
        '--nembeds',
        help = 'Embedding size of a cross of n key real-valued parameters',
        type = int,
        default = 3
    )
    ## TODO 1: add the new arguments here
    parser.add_argument(
        '--train_examples',
        help = 'Number of examples (in thousands) to run the training job over. If this is more than actual # of examples available, it cycles through them. So specifying 1000 here when you have only 100k examples makes this 10 epochs.',
        type = int,
        default = 5000
    )
    parser.add_argument(
        '--pattern',
        help = 'Specify a pattern that has to be in input files. For example 00001-of will process only one shard',
        default = 'of'
    )
    parser.add_argument(
        '--eval_steps',
        help = 'Positive number of steps for which to evaluate model. Default to None, which means to evaluate until input_fn raises an end-of-input exception',
        type = int,
        default = None
    )

    ## parse all arguments
    args = parser.parse_args()
    arguments = args.__dict__

    # unused args provided by service
    arguments.pop('job_dir', None)
    arguments.pop('job-dir', None)

    ## assign the arguments to the model variables
    output_dir = arguments.pop('output_dir')
    model.BUCKET = arguments.pop('bucket')
    model.BATCH_SIZE = arguments.pop('batch_size')
    model.TRAIN_STEPS = (arguments.pop('train_examples') * 1000) / model.BATCH_SIZE
    model.EVAL_STEPS = arguments.pop('eval_steps')
    print("Will train for {} steps using batch_size={}".format(model.TRAIN_STEPS, model.BATCH_SIZE))
    model.PATTERN = arguments.pop('pattern')
    model.NEMBEDS = arguments.pop('nembeds')
    model.NNSIZE = arguments.pop('nnsize')
    print("Will use DNN size of {}".format(model.NNSIZE))

    # Append trial_id to path if we are doing hptuning
    # This code can be removed if you are not using hyperparameter tuning
    output_dir = os.path.join(
        output_dir,
        json.loads(
            os.environ.get('TF_CONFIG', '{}')
        ).get('task', {}).get('trial', '')
    )

    # Run the training job
    model.train_and_evaluate(output_dir)
```
- babyweight/trainer/model.py
```
%%writefile babyweight/trainer/model.py
import shutil
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
BUCKET = None # set from task.py
PATTERN = 'of' # gets all files
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
# Define some hyperparameters
TRAIN_STEPS = 10000
EVAL_STEPS = None
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(prefix, mode, batch_size):
    def _input_fn():
        def decode_csv(value_column):
            columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
            features = dict(zip(CSV_COLUMNS, columns))
            label = features.pop(LABEL_COLUMN)
            return features, label

        # Use prefix to create file path
        file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, prefix, PATTERN)

        # Create list of files that match pattern
        file_list = tf.gfile.Glob(file_path)

        # Create dataset from file list
        dataset = (tf.data.TextLineDataset(file_list)  # Read text file
                   .map(decode_csv))  # Transform each elem by applying decode_csv fn

        if mode == tf.estimator.ModeKeys.TRAIN:
            num_epochs = None  # indefinitely
            dataset = dataset.shuffle(buffer_size = 10 * batch_size)
        else:
            num_epochs = 1  # end-of-input after this

        dataset = dataset.repeat(num_epochs).batch(batch_size)
        return dataset.make_one_shot_iterator().get_next()
    return _input_fn

# Define feature columns
def get_wide_deep():
    # Define column types
    is_male, mother_age, plurality, gestation_weeks = \
        [
            tf.feature_column.categorical_column_with_vocabulary_list('is_male',
                        ['True', 'False', 'Unknown']),
            tf.feature_column.numeric_column('mother_age'),
            tf.feature_column.categorical_column_with_vocabulary_list('plurality',
                        ['Single(1)', 'Twins(2)', 'Triplets(3)',
                         'Quadruplets(4)', 'Quintuplets(5)', 'Multiple(2+)']),
            tf.feature_column.numeric_column('gestation_weeks')
        ]

    # Discretize
    age_buckets = tf.feature_column.bucketized_column(mother_age,
                        boundaries=np.arange(15, 45, 1).tolist())
    gestation_buckets = tf.feature_column.bucketized_column(gestation_weeks,
                        boundaries=np.arange(17, 47, 1).tolist())

    # Sparse columns are wide, have a linear relationship with the output
    wide = [is_male,
            plurality,
            age_buckets,
            gestation_buckets]

    # Feature cross all the wide columns and embed into a lower dimension
    crossed = tf.feature_column.crossed_column(wide, hash_bucket_size=20000)
    embed = tf.feature_column.embedding_column(crossed, NEMBEDS)

    # Continuous columns are deep, have a complex relationship with the output
    deep = [mother_age,
            gestation_weeks,
            embed]
    return wide, deep

# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
    feature_placeholders = {
        'is_male': tf.placeholder(tf.string, [None]),
        'mother_age': tf.placeholder(tf.float32, [None]),
        'plurality': tf.placeholder(tf.string, [None]),
        'gestation_weeks': tf.placeholder(tf.float32, [None]),
        KEY_COLUMN: tf.placeholder_with_default(tf.constant(['nokey']), [None])
    }
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)

# create metric for hyperparameter tuning
def my_rmse(labels, predictions):
    pred_values = predictions['predictions']
    return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}

# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
    tf.summary.FileWriterCache.clear()  # ensure filewriter cache is clear for TensorBoard events file
    wide, deep = get_wide_deep()
    EVAL_INTERVAL = 300  # seconds

    ## TODO 2a: set the save_checkpoints_secs to the EVAL_INTERVAL
    run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
                                        keep_checkpoint_max = 3)

    ## TODO 2b: change the dnn_hidden_units to NNSIZE
    estimator = tf.estimator.DNNLinearCombinedRegressor(
        model_dir = output_dir,
        linear_feature_columns = wide,
        dnn_feature_columns = deep,
        dnn_hidden_units = NNSIZE,
        config = run_config)

    # illustrates how to add an extra metric
    estimator = tf.contrib.estimator.add_metrics(estimator, my_rmse)
    # for batch prediction, you need a key associated with each instance
    estimator = tf.contrib.estimator.forward_features(estimator, KEY_COLUMN)

    ## TODO 2c: Set the third argument of read_dataset to BATCH_SIZE
    ## TODO 2d: and set max_steps to TRAIN_STEPS
    train_spec = tf.estimator.TrainSpec(
        input_fn = read_dataset('train', tf.estimator.ModeKeys.TRAIN, BATCH_SIZE),
        max_steps = TRAIN_STEPS)

    exporter = tf.estimator.LatestExporter('exporter', serving_input_fn, exports_to_keep=None)

    ## TODO 2e: Lastly, set steps equal to EVAL_STEPS
    eval_spec = tf.estimator.EvalSpec(
        input_fn = read_dataset('eval', tf.estimator.ModeKeys.EVAL, 2**15),  # no need to batch in eval
        steps = EVAL_STEPS,
        start_delay_secs = 60,  # start evaluating after N seconds
        throttle_secs = EVAL_INTERVAL,  # evaluate every N seconds
        exporters = exporter)

    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
%%bash
echo "bucket=${BUCKET}"
rm -rf babyweight_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python -m trainer.task \
--bucket=${BUCKET} \
--output_dir=babyweight_trained \
--job-dir=./tmp \
--pattern="00000-of-" --train_examples=1 --eval_steps=1
%%writefile inputs.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
%%bash
MODEL_LOCATION=$(ls -d $(pwd)/babyweight_trained/export/exporter/* | tail -1)
echo $MODEL_LOCATION
gcloud ai-platform local predict --model-dir=$MODEL_LOCATION --json-instances=inputs.json
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: nnsize
type: INTEGER
minValue: 64
maxValue: 512
scaleType: UNIT_LOG_SCALE
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--config=hyperparam.yaml \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--eval_steps=10 \
--train_examples=20000
```
### Repeat training
```
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281
```
<h1> 1. Exploring natality dataset </h1>
This notebook illustrates:
<ol>
<li> Exploring a BigQuery dataset using AI Platform Notebooks.
</ol>
```
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
<h2> Explore data </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data.
```
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ABS(FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING)))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
```
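The hash-based train/eval split described earlier can be sketched locally. This uses Python's hashlib as a stand-in for BigQuery's FARM_FINGERPRINT; the function name `assign_split` and the key format are illustrative assumptions, not part of the query above:

```python
import hashlib

def assign_split(hashmonth_key, train_frac=0.8):
    # Derive a deterministic bucket in [0, 100) from the hash of the key,
    # so every row sharing the same year-month lands in the same split.
    h = int(hashlib.sha256(str(hashmonth_key).encode()).hexdigest(), 16)
    return 'train' if h % 100 < train_frac * 100 else 'eval'

# The same key always gets the same assignment:
print(assign_split('2005-07'), assign_split('2005-07'))
```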
Let's write a query to find the unique values for each of the columns and the count of those values.
This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
```
# Create function that finds the number of records and the average weight for each value of the chosen column
def get_distinct_values(column_name):
sql = """
SELECT
{0},
COUNT(1) AS num_babies,
AVG(weight_pounds) AS avg_wt
FROM
publicdata.samples.natality
WHERE
year > 2000
GROUP BY
{0}
""".format(column_name)
return bigquery.Client().query(sql).to_dataframe()
# Bar plot to see is_male with avg_wt linear and num_babies logarithmic
df = get_distinct_values('is_male')
df.plot(x='is_male', y='num_babies', kind='bar');
df.plot(x='is_male', y='avg_wt', kind='bar');
# Line plots to see mother_age with avg_wt linear and num_babies logarithmic
df = get_distinct_values('mother_age')
df = df.sort_values('mother_age')
df.plot(x='mother_age', y='num_babies');
df.plot(x='mother_age', y='avg_wt');
# Bar plot to see plurality(singleton, twins, etc.) with avg_wt linear and num_babies logarithmic
df = get_distinct_values('plurality')
df = df.sort_values('plurality')
df.plot(x='plurality', y='num_babies', logy=True, kind='bar');
df.plot(x='plurality', y='avg_wt', kind='bar');
# Bar plot to see gestation_weeks with avg_wt linear and num_babies logarithmic
df = get_distinct_values('gestation_weeks')
df = df.sort_values('gestation_weeks')
df.plot(x='gestation_weeks', y='num_babies', logy=True, kind='bar');
df.plot(x='gestation_weeks', y='avg_wt', kind='bar');
```
All these factors seem to play a part in the baby's weight. Male babies are heavier on average than female babies. Teenaged and older moms tend to have lower-weight babies. Twins, triplets, etc. are lower weight than single births. Preemies weigh in lower, as do babies born to single moms. In addition, it is important to check whether you have enough data (number of babies) for each input value. Otherwise, the model's predictions for input values that don't have enough data may not be reliable.
<p>
In the next notebook, I will develop a machine learning model to combine all of these factors to come up with a prediction of a baby's weight.
Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Spatial Analysis
### MAPPING TIZI OUZOU
+ **library and Data**
```
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
from shapely.geometry import box
from shapely.geometry import Polygon, Point
from matplotlib.patches import RegularPolygon
full_data=gpd.read_file("C:/Users/Salif SAWADOGO/dynamic segmentation/data/algeria_administrative_level_data/dza_admbnda_adm1_unhcr_20200120.shp")
```
+ **Dataframe dimensions**
```
full_data.shape
```
+ **Column names and information**
```
full_data.columns
full_data.info()
```
+ **Description**
```
data=full_data[['ADM1_EN','geometry']]
data_fruital=data.set_index("ADM1_EN")
data_fruital.head()
data_fruital=data_fruital.loc[fruital]  # note: requires the `fruital` list defined in the next cell
```
**Check that the ECCBC territory names are written correctly**
```
fruital=["Alger",'Tizi Ouzou','Boumerdes','Blida','Medea','Tipaza','Bouira',"Bordj Bou Arrer",'Ain-Defla','Djelfa','Ghardaia','Laghouat','Tamanrasset',"M'Sila",'Chlef','Ouargla']
data.head()
data_fruital.replace("Bordj Bou Arrer","BBA",inplace=True)
data_fruital.replace("Tizi Ouzou","Tizi",inplace=True)
data_fruital=data_fruital.reset_index()
```
*Algeria with the Fruital territory*
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 25))
c=data_fruital.plot(column='ADM1_EN',
legend=True,
ax=ax,cmap=plt.cm.Spectral)
leg = ax.get_legend()
leg.set_bbox_to_anchor((1.2,0.8))
ax.set_axis_off()
data.geometry.boundary.plot(color=None,edgecolor='k',linewidth = 1,ax=ax)
ax.set(title="ECCBC Algeria' s territories")
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 25))
c=data_fruital.plot(column='ADM1_EN', ax=ax,cmap=plt.cm.Spectral)
ax.set_axis_off()
data_fruital['coords'] = data_fruital['geometry'].apply(lambda x: x.representative_point().coords[:])
data_fruital['coords'] = [coords[0] for coords in data_fruital['coords']]
for idx, row in data_fruital.iterrows():
    plt.annotate(s=row['ADM1_EN'], xy=row['coords'],
                 horizontalalignment='center')
full_data2=gpd.read_file("C:/Users/Salif SAWADOGO/OneDrive - EQUATORIAL COCA-COLA BOTTLING COMPANY S.L/dynamic segmentation/algeria census/data/algeria_administrative_level_data/dza_admbnda_adm2_unhcr_20200120.shp")
full_data2.crs
full_data2=full_data2.loc[full_data2['ADM1_EN']=='Tizi Ouzou']
fig, ax = plt.subplots(figsize=(10, 10))
c=full_data2.plot(color='rebeccapurple',column='ADM1_EN', ax=ax)
ax.set_axis_off()
ax.set(title='Tizi Ouzou Wilaya, Area: 4 605 Km$^2$')
data_fruital.loc[data_fruital['ADM1_EN']=='Tizi'].area/10**6
def form_hex_grid(territory, hex_diameter: int):
    """
    A function to form a hexagonal grid
    Arguments:
        territory - GeoDataFrame for a territory
        hex_diameter - integer, diameter size to use in defining hexagon dimensions
    Returns: GeoDataFrame with geometry for hexagonal grid formed for a territory provided
    """
    # 1) Define general hex parameters
    xmin, ymin, xmax, ymax = [x * 1.01 for x in territory.total_bounds]
    EW = haversine_custom((xmin, ymin), (xmax, ymin))
    NS = haversine_custom((xmin, ymin), (xmin, ymax))
    # diameter of each hexagon in the grid
    d = hex_diameter
    # horizontal width of hexagon = w = d * sin(60)
    w = d * np.sin(np.pi / 3)
    # Approximate number of hexagons per row = EW/w
    n_cols = int(EW / w) + 1
    # Approximate number of hexagons per column = NS/w
    n_rows = int(NS / w) + 1

    # 2) Add hex params to territory
    # ax = territory[["geometry"]].boundary.plot(edgecolor='black', figsize=(30, 60))
    # width of hexagon
    w = (xmax - xmin) / n_cols
    # diameter of hexagon
    d = w / np.sin(np.pi / 3)
    array_of_hexes = []
    for rows in range(0, n_rows):
        hcoord = np.arange(xmin, xmax, w) + (rows % 2) * w / 2
        vcoord = [ymax - rows * d * 0.75] * n_cols
        for x, y in zip(hcoord, vcoord):
            hexes = RegularPolygon((x, y), numVertices=6, radius=d / 2, alpha=0.2, edgecolor='k')
            verts = hexes.get_path().vertices
            trans = hexes.get_patch_transform()
            points = trans.transform(verts)
            array_of_hexes.append(Polygon(points))
            # ax.add_patch(hexes)
    # ax.set_xlim([xmin, xmax])
    # ax.set_ylim([ymin, ymax])
    # plt.show()

    # 3) Form hex grid as gpd
    hex_grid = gpd.GeoDataFrame({'geometry': array_of_hexes}, crs="EPSG:4326")
    hex_grid = hex_grid.to_crs(epsg=4326)
    return hex_grid

def haversine_custom(coord1, coord2):
    """
    A function to determine the great-circle distance between 2 points on Earth given their longitudes and latitudes
    Arguments:
        coord1 - territory bounds for first point, lon & lat
        coord2 - territory bounds for second point, lon & lat
    Returns: Distance in meters
    """
    # Coordinates in decimal degrees (e.g. 43.60, -79.49)
    lon1, lat1 = coord1
    lon2, lat2 = coord2
    # Radius of Earth in meters
    R = 6371000
    phi_1 = np.radians(lat1)
    phi_2 = np.radians(lat2)
    delta_phi = np.radians(lat2 - lat1)
    delta_lambda = np.radians(lon2 - lon1)
    a = np.sin(delta_phi / 2.0) ** 2 + np.cos(phi_1) * np.cos(phi_2) * np.sin(delta_lambda / 2.0) ** 2
    c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))
    # Output distance in meters
    meters = R * c
    # Output distance in kilometers
    km = meters / 1000.0
    meters = round(meters)
    km = round(km, 3)
    #print(f"Distance: {meters} m")
    #print(f"Distance: {km} km")
    return meters

def hexalize_territory(territory, hex_grid):
    """
    A function to add hexagonal grid geometry to GeoDataFrame territory
    Arguments:
        territory - GeoDataFrame for a territory
        hex_grid - GeoDataFrame, hexagonal grid geometry as prepared for specified territory
    Returns: GeoDataFrame of a territory overlayed with a hexagonal grid
    """
    territory_hex = gpd.overlay(hex_grid, territory)
    territory_hex = gpd.GeoDataFrame(territory_hex, geometry='geometry', crs="EPSG:4326")
    territory_hex = territory_hex.reset_index()
    territory_hex.rename(columns={'index': 'hex_id'}, inplace=True)
    return territory_hex
data3 = form_hex_grid(data, hex_diameter=1250)
sub_set = gpd.overlay(full_data2, data3, how="intersection")
sub_set.plot(alpha=1, color='rebeccapurple',figsize=(10, 10))
data4 = form_hex_grid(data, hex_diameter=2500)
sub_set = gpd.overlay(full_data2, data4, how="intersection")
sub_set.plot(alpha=1, color='rebeccapurple',figsize=(10, 10))
#data3.to_file("hex_set_1250", driver='GeoJSON')
#data4.to_file("hex_set_2500", driver='GeoJSON')
```
<img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
© Copyright Quantopian Inc.<br>
© Modifications Copyright QuantRocket LLC<br>
Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
<a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# Multiple Linear Regression
By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, Delaney Granizo-Mackenzie, and Gilbert Wasserman.
```
import numpy as np
import pandas as pd
import statsmodels.api as sm
# If the observations are in a dataframe, you can use statsmodels.formulas.api to do the regression instead
from statsmodels import regression
import matplotlib.pyplot as plt
```
Multiple linear regression generalizes linear regression, allowing the dependent variable to be a linear function of multiple independent variables. As before, we assume that the variable $Y$ is a linear function of $X_1,\ldots, X_k$:
$$ Y_i = \beta_0 + \beta_1 X_{1i} + \ldots + \beta_k X_{ki} + \epsilon_i $$
Often in finance the form will be written as follows, but it is just the variable name that changes and otherwise the model is identical.
$$ Y_i = \alpha + \beta_1 X_{1i} + \ldots + \beta_k X_{ki} + \epsilon_i $$
For observations $i = 1,2,\ldots, n$. In order to find the plane (or hyperplane) of best fit, we will use the method of ordinary least-squares (OLS), which seeks to minimize the squared error between predictions and observations, $\sum_{i=1}^n \epsilon_i^2$. The square makes positive and negative errors equally bad, and magnifies large errors. It also makes the closed form math behind linear regression nice, but we won't go into that now. For an example of squared error, see the following.
Let's say Y is our actual data, and Y_hat is the predictions made by linear regression.
```
Y = np.array([1, 3.5, 4, 8, 12])
Y_hat = np.array([1, 3, 5, 7, 9])
print('Error ' + str(Y_hat - Y))
# Compute squared error
SE = (Y_hat - Y) ** 2
print('Squared Error ' + str(SE))
print('Sum Squared Error ' + str(np.sum(SE)))
```
Once we have used this method to determine the coefficients of the regression, we will be able to use new observed values of $X$ to predict values of $Y$.
Each coefficient $\beta_j$ tells us how much $Y_i$ will change if we change $X_j$ by one while holding all of the other independent variables constant. This lets us separate out the contributions of different effects. This is assuming the linear model is the correct one.
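As a toy illustration of this interpretation (a standalone sketch with made-up coefficients, not estimates from any data), moving one regressor by one unit while holding the other fixed changes the prediction by exactly that regressor's coefficient:

```python
import numpy as np

# Hypothetical fitted model: Y = 2 + 3*X1 + 0.5*X2
beta = np.array([2.0, 3.0, 0.5])

def predict(x1, x2):
    return beta[0] + beta[1] * x1 + beta[2] * x2

# Raising X1 by one while holding X2 fixed moves Y by beta_1 = 3
print(predict(5, 10) - predict(4, 10))  # 3.0
```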
We start by artificially constructing a $Y$, $X_1$, and $X_2$ in which we know the precise relationship.
```
# Construct a simple linear curve of 1, 2, 3, ...
X1 = np.arange(100)
# Make a parabola and add X1 to it, this is X2
X2 = np.array([i ** 2 for i in range(100)]) + X1
# This is our real Y, constructed using a linear combination of X1 and X2
Y = X1 + X2
plt.plot(X1, label='X1')
plt.plot(X2, label='X2')
plt.plot(Y, label='Y')
plt.legend();
```
We can use the same function from `statsmodels` as we did in the single linear regression lecture.
```
# Use column_stack to combine independent variables, then add a column of ones so we can fit an intercept
X = sm.add_constant(np.column_stack((X1, X2)))
# Run the model
results = regression.linear_model.OLS(Y, X).fit()
print('Beta_0:', results.params[0])
print('Beta_1:', results.params[1])
print('Beta_2:', results.params[2])
```
The same care must be taken with these results as with partial derivatives. The formula for $Y$ is ostensibly
$$Y = X_1 + X_2 = X_1 + (X_1^2 + X_1) = 2 X_1 + X_1^2$$
Or $2X_1$ plus a parabola.
However, the coefficient of $X_1$ is 1. That is because $Y$ changes by 1 if we change $X_1$ by 1 <i>while holding $X_2$ constant</i>. Multiple linear regression separates out contributions from different variables.
Similarly, running a linear regression on two securities might give a high $\beta$. However, if we bring in a third security (like SPY, which tracks the S&P 500) as an independent variable, we may find that the correlation between the first two securities is almost entirely due to them both being correlated with the S&P 500. This is useful because the S&P 500 may then be a more reliable predictor of both securities than they were of each other. This method allows us to better gauge the significance between the two securities and prevent confounding the two variables.
```
# Load pricing data for two arbitrarily-chosen assets and SPY
from quantrocket.master import get_securities
from quantrocket import get_prices
securities = get_securities(symbols=['SPY', 'AAPL', 'JNJ'], vendors='usstock')
start = '2019-01-01'
end = '2020-01-01'
prices = get_prices('usstock-free-1min', data_frequency='daily', sids=securities.index.tolist(), fields='Close', start_date=start, end_date=end).loc['Close']
sids_to_symbols = securities.Symbol.to_dict()
prices = prices.rename(columns=sids_to_symbols)
asset1 = prices['AAPL']
asset2 = prices['JNJ']
benchmark = prices['SPY']
# First, run a linear regression on the two assets
slr = regression.linear_model.OLS(asset1, sm.add_constant(asset2)).fit()
print('SLR beta of asset2:', slr.params[1])
# Run multiple linear regression using asset2 and SPY as independent variables
mlr = regression.linear_model.OLS(asset1, sm.add_constant(np.column_stack((asset2, benchmark)))).fit()
prediction = mlr.params[0] + mlr.params[1]*asset2 + mlr.params[2]*benchmark
prediction.name = 'Prediction'
print('MLR beta of asset2:', mlr.params[1], '\nMLR beta of S&P 500:', mlr.params[2])
```
The next step after running an analysis is determining if we can even trust the results. A good first step is checking to see if anything looks weird in graphs of the independent variables, dependent variables, and predictions.
```
# Plot the three variables along with the prediction given by the MLR
asset1.plot()
asset2.plot()
benchmark.plot()
prediction.plot(color='y')
plt.ylabel('Price')
plt.legend(bbox_to_anchor=(1,1), loc=2);
# Plot only the dependent variable and the prediction to get a closer look
asset1.plot()
prediction.plot(color='y')
plt.ylabel('Price')
plt.legend();
```
## Evaluation
We can get some statistics about the fit from the result returned by the regression:
```
mlr.summary()
```
## Model Assumptions
The validity of these statistics depends on whether or not the assumptions of the linear regression model are satisfied. These are:
* The independent variable is not random.
* The variance of the error term is constant across observations. This is important for evaluating the goodness of the fit.
* The errors are not autocorrelated. The Durbin-Watson statistic reported by the regression detects this. If it is close to $2$, there is no autocorrelation.
* The errors are normally distributed. If this does not hold, we cannot use some of the statistics, such as the F-test.
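As a quick standalone illustration of the autocorrelation check (this sketch is not part of the lecture's code; `statsmodels` reports the same statistic in the regression summary), the Durbin-Watson statistic can be computed directly from a residual series:

```python
import numpy as np

def durbin_watson(resid):
    # Ratio of squared successive differences to squared residuals;
    # values near 2 suggest no first-order autocorrelation
    resid = np.asarray(resid)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(0)
noise = rng.standard_normal(5_000)
print(durbin_watson(noise))             # near 2 for white noise
print(durbin_watson(np.cumsum(noise)))  # far below 2 for a random walk
```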
Multiple linear regression also requires an additional assumption:
* There is no exact linear relationship between the independent variables. Otherwise, it is impossible to solve for the coefficients $\beta_i$ uniquely, since the same linear equation can be expressed in multiple ways.
If there is an exact linear relationship between any set of independent variables, a situation known as multicollinearity, we say that they are linear combinations of each other. When the variables are dependent on each other in this manner, the values of our $\beta_i$ coefficients will be unreliable for a given $X_i$. The intuition for this can be found in an extreme example where $X_1$ and $X_2$ are perfectly correlated. In that case, linear regression can assign the total coefficient sum in any combination without affecting the predictive capability.
$$ 1X_1 + 0X_2 = 0.5X_1 + 0.5X_2 = 0X_1 + 1X_2 $$
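This can be checked numerically (a small sketch with made-up data): when $X_1$ and $X_2$ are identical, all three coefficient assignments above yield the same predictions:

```python
import numpy as np

X1 = np.arange(10.0)
X2 = X1.copy()  # perfectly collinear with X1

pred_a = 1.0 * X1 + 0.0 * X2
pred_b = 0.5 * X1 + 0.5 * X2
pred_c = 0.0 * X1 + 1.0 * X2
print(np.allclose(pred_a, pred_b) and np.allclose(pred_b, pred_c))  # True
```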
While our coefficients may be nondescriptive, the ultimate model may still be accurate provided that there is a good overall fit between the independent variables and the dependent variables. The best practice for constructing a model where dependence is a problem is to leave out the less descriptive variables that are correlated with the better ones. This improves the model by reducing the chances of overfitting while bringing the $\beta_i$ estimates closer to their true values.
If we confirm that the necessary assumptions of the regression model are satisfied, we can safely use the statistics reported to analyze the fit. For example, the $R^2$ value tells us the fraction of the total variation of $Y$ that is explained by the model. When doing multiple linear regression, however, we prefer to use adjusted $R^2$, which corrects for the small increases in $R^2$ that occur when we add more regression variables to the model, even if they are not significantly correlated with the dependent variable. Adjusted $R^2$ is defined as
$$ 1 - (1 - R^2)\frac{n-1}{n-k-1} $$
Where $n$ is the number of observations and $k$ is the number of independent variables in the model. Other useful statistics include the F-statistic and the standard error of estimate.
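The adjusted $R^2$ formula translates directly into code (a standalone sketch; `statsmodels` exposes this value as `rsquared_adj` on fitted results):

```python
def adjusted_r_squared(r2, n, k):
    # Penalizes R^2 for the number of regressors k given n observations
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# The same raw R^2 looks worse once we account for using more regressors
print(adjusted_r_squared(0.90, 100, 5))   # ~0.8947
print(adjusted_r_squared(0.90, 100, 20))  # ~0.8747
```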
## Model Selection Example
When deciding on the best possible model of your dependent variable, there are several different methods to turn to. If you use too many explanatory variables, you run the risk of overfitting your model, but if you use too few you may end up with a terrible fit. One of the most prominent methods to decide on a best model is stepwise regression. Forward stepwise regression starts from an empty model and tests each individual variable, selecting the one that results in the best model quality, usually measured with AIC or BIC (lowest is best). It then adds the remaining variables one at a time, testing each subsequent combination of explanatory variables in a regression and calculating the AIC or BIC value at each step. At the end of the regression, the model with the best quality (according to the given measure) is selected and presented as the final, best model. This does have limitations, however. It does not test every single possible combination of variables, so it may miss the theoretical best model if a particular variable was written off earlier in performing the algorithm. As such, stepwise regression should be used in combination with your best judgment regarding the model.
```
X1 = np.arange(100)
X2 = [i**2 for i in range(100)] - X1
X3 = [np.log(i) for i in range(1, 101)] + X2
X4 = 5 * X1
Y = 2 * X1 + 0.5 * X2 + 10 * X3 + X4
plt.plot(X1, label='X1')
plt.plot(X2, label='X2')
plt.plot(X3, label='X3')
plt.plot(X4, label='X4')
plt.plot(Y, label='Y')
plt.legend();
results = regression.linear_model.OLS(Y, sm.add_constant(np.column_stack((X1,X2,X3,X4)))).fit()
print("Beta_0:", results.params[0])
print("Beta_1:", results.params[1])
print("Beta_2:", results.params[2])
print("Beta_3:", results.params[3])
print("Beta_4:", results.params[4])
data = pd.DataFrame(np.column_stack((X1,X2,X3,X4)), columns=['X1','X2','X3','X4'])
response = pd.Series(Y, name='Y')
def forward_aic(response, data):
# This function will work with pandas dataframes and series
# Initialize some variables
explanatory = list(data.columns)
selected = pd.Series(np.ones(data.shape[0]), name="Intercept")
current_score, best_new_score = np.inf, np.inf
# Loop while we haven't found a better model
while current_score == best_new_score and len(explanatory) != 0:
scores_with_elements = []
count = 0
# For each explanatory variable
for element in explanatory:
# Make a set of explanatory variables including our current best and the new one
tmp = pd.concat([selected, data[element]], axis=1)
# Test the set
result = regression.linear_model.OLS(response, tmp).fit()
score = result.aic
scores_with_elements.append((score, element, count))
count += 1
# Sort the scoring list
scores_with_elements.sort(reverse = True)
# Get the best new variable
best_new_score, best_element, index = scores_with_elements.pop()
if current_score > best_new_score:
# If it's better than the best add it to the set
explanatory.pop(index)
selected = pd.concat([selected, data[best_element]],axis=1)
current_score = best_new_score
# Return the final model
model = regression.linear_model.OLS(response, selected).fit()
return model
result = forward_aic(response, data)
result.summary()
```
In the construction of this model, the $X_4$ term is very closely related to the $X_1$ term: it is simply $X_1$ multiplied by a scalar. However, stepwise regression did not catch this and remove the variable; it simply adjusted the coefficient of the $X_1$ term. Our own judgment would say to leave the $X_4$ term out of the model, showing the limitations of stepwise regression.
There are other ways to diagnose the health of a model and individual variables with varying degrees of penalty given to more complex models. This will be covered in-depth in a model selection notebook.
---
**Next Lecture:** [Violations of Regression Models](Lecture16-Violations-of-Regression-Models.ipynb)
[Back to Introduction](Introduction.ipynb)
---
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
| github_jupyter |
<center>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
<h1>Extracting Stock Data Using a Python Library</h1>
A company's stock share is a piece of the company; more precisely:
<p><b>A stock (also known as equity) is a security that represents the ownership of a fraction of a corporation. This
entitles the owner of the stock to a proportion of the corporation's assets and profits equal to how much stock they own. Units of stock are called "shares." [1]</b></p>
An investor can buy a stock and sell it later. If the stock price increases, the investor profits; if it decreases, the investor will incur a loss. Determining the stock price is complex; it depends on the number of outstanding shares, the size of the company's future profits, and much more. People trade stocks throughout the day. The stock ticker is a report of the price of a certain stock, updated continuously throughout the trading session by the various stock market exchanges.
<p>You are a data scientist working for a hedge fund; it's your job to determine any suspicious stock activity. In this lab you will extract stock data using a Python library. We will use the <code>yfinance</code> library, which allows us to extract stock data and returns it in a pandas dataframe.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>Using yfinance to Extract Stock Info</li>
<li>Using yfinance to Extract Historical Share Price Data</li>
<li>Using yfinance to Extract Historical Dividends Data</li>
<li>Exercise</li>
</ul>
<p>
Estimated Time Needed: <strong>30 min</strong></p>
</div>
<hr>
```
!pip install yfinance
#!pip install pandas
import yfinance as yf
import pandas as pd
```
## Using the yfinance Library to Extract Stock Data
Using the `Ticker` module we can create an object that will allow us to access functions to extract data. To do this we need to provide the ticker symbol for the stock, here the company is Apple and the ticker symbol is `AAPL`.
```
apple = yf.Ticker("AAPL")
```
Now we can access functions and variables to extract the type of data we need. You can view them and what they represent here https://aroussi.com/post/python-yahoo-finance.
### Stock Info
Using the attribute <code>info</code> we can extract information about the stock as a Python dictionary.
```
apple_info=apple.info
apple_info
```
We can get the <code>'country'</code> using the key country
```
apple_info['country']
```
### Extracting Share Price
A share is the single smallest part of a company's stock that you can buy, the prices of these shares fluctuate over time. Using the <code>history()</code> method we can get the share price of the stock over a certain period of time. Using the `period` parameter we can set how far back from the present to get data. The options for `period` are 1 day (1d), 5d, 1 month (1mo) , 3mo, 6mo, 1 year (1y), 2y, 5y, 10y, ytd, and max.
```
apple_share_price_data = apple.history(period="max")
```
The format that the data is returned in is a Pandas DataFrame. With the `Date` as the index, the share `Open`, `High`, `Low`, `Close`, `Volume`, and `Stock Splits` are given for each day.
```
apple_share_price_data.head()
```
We can reset the index of the DataFrame with the `reset_index` function. We also set the `inplace` parameter to `True` so the change is applied to the DataFrame itself.
```
apple_share_price_data.reset_index(inplace=True)
```
We can plot the `Open` price against the `Date`:
```
apple_share_price_data.plot(x="Date", y="Open")
```
### Extracting Dividends
Dividends are the distribution of a company's profits to shareholders. In this case they are defined as an amount of money returned per share an investor owns. Using the attribute `dividends` we can get a dataframe of the data. The period of the data is given by the period defined in the `history()` function.
```
apple.dividends
```
We can plot the dividends over time:
```
apple.dividends.plot()
```
## Exercise
Now using the `Ticker` module create an object for AMD (Advanced Micro Devices) with the ticker symbol `AMD`; name the object <code>amd</code>.
```
amd = yf.Ticker("AMD")
```
<b>Question 1</b> Use the key <code>'country'</code> to find the country the stock belongs to, remember it as it will be a quiz question.
```
amd_info=amd.info
amd_info['country']
```
<b>Question 2</b> Use the key <code>'sector'</code> to find the sector the stock belongs to, remember it as it will be a quiz question.
```
amd_info['sector']
```
<b>Question 3</b> Obtain stock data for AMD using the `history` function, set the `period` to max. Find the `Volume` traded on the first day (first row).
```
amd_share_price_data = amd.history(period="max")
amd_share_price_data.reset_index(inplace=True)
amd_share_price_data.plot(x="Date", y="Volume")
# The Volume traded on the first day is in the first row
amd_share_price_data['Volume'].iloc[0]
```
<h2>About the Authors:</h2>
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0220ENSkillsNetwork23455606-2021-01-01">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
Azim Hirjani
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ------------- | ------------------------- |
| 2020-11-10 | 1.1 | Malika Singla | Deleted the Optional part |
| 2020-08-27 | 1.0 | Malika Singla | Added lab to GitLab |
<hr>
## <h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
| github_jupyter |
# Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the [original paper here](https://arxiv.org/pdf/1511.06434.pdf).
You'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.

So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what [you saw previously](https://github.com/udacity/deep-learning/tree/master/gan_mnist) are in the generator and discriminator; otherwise, the rest of the implementation is the same.
```
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
```
## Getting the data
Here you can download the SVHN dataset. Run the cell below and it'll download to your machine.
```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
```
These SVHN files are `.mat` files typically used with Matlab. However, we can load them in with `scipy.io.loadmat` which we imported above.
```
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
```
Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
```
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
```
Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
```
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
```
## Network Inputs
Here, just creating some placeholders like normal.
```
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
```
## Generator
Here you'll build the generator network. The input will be our noise vector `z` as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
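The leaky ReLU used throughout can be written in one line; here is a NumPy sketch (separate from the TensorFlow graph code, where the same idea is typically expressed as `tf.maximum(alpha * x, x)`):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    # Positive inputs pass through unchanged; negatives are scaled by alpha
    return np.maximum(alpha * x, x)

print(leaky_relu(np.array([-1.0, 0.0, 2.0])))  # negatives become -0.2, positives unchanged
```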
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:

Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
>**Exercise:** Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
```
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x
# Output layer, 32x32x3
logits =
out = tf.tanh(logits)
return out
```
## Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then doubling the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with `tf.layers.batch_normalization` on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set `training` to `True`.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the `training` parameter appropriately.
>**Exercise:** Build the convolutional network for the discriminator. The input is a 32x32x3 image, and the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
```
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x =
logits =
out =
return out, logits
```
## Model Loss
Calculating the loss like before, nothing new here.
```
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param output_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
```
## Optimizers
Not much new here, but notice how the train operations are wrapped in a `with tf.control_dependencies` block so the batch normalization layers can update their population statistics.
```
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
```
## Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
```
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
```
Here is a function for displaying generated images.
```
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
```
And another function we can use to train our network. Notice when we call `generator` to create the samples to display, we set `training` to `False`. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the `net.input_real` placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the `tf.control_dependencies` block we created in `model_opt`.
```
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
```
## Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them.
>**Exercise:** Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3; this means it is correctly classifying images as fake or real about 50% of the time.
```
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
```
# Posterior
```
%matplotlib inline
from __future__ import print_function, division
import numpy as np
import math
import matplotlib
from matplotlib import pyplot as plt
import matplotlib.colors as colors
from matplotlib import rcParams, rc
from matplotlib.ticker import MultipleLocator, AutoMinorLocator
from matplotlib import gridspec, ticker
from xpsi.tools.phase_interpolator import interpolate_pulse
import sys
import xpsi
import os
from xpsi import PostProcessing
PostProcessing.NSBackend.use_nestcheck = True
from xpsi.global_imports import _dpr, _keV, _k_B
from xpsi.cellmesh.mesh_tools import eval_cedeCentreCoords
PostProcessing.publication_rc_settings()
from xpsi.global_imports import _c, _G, _M_s, _dpr, gravradius
class model():
""" A namespace if mutiple models in one notebook. """
from SynthesiseData import CustomData
from CustomInstrument import CustomInstrument
from CustomBackground import CustomBackground
from CustomPulse import CustomPulse
from CustomSpacetime import CustomSpacetime
from CustomPrior import CustomPrior
phase_edges = np.linspace(0.0, 1.0, 33)
data = CustomData(0, 181, phase_edges)
m1 = model()
m1.NICER = CustomInstrument.from_SWG(num_params=0,
bounds=[],
ARF = 'model_data/nicer_v1.01_arf.txt',
RMF = 'model_data/nicer_v1.01_rmf_matrix.txt',
max_input=500,
min_input=0,
chan_edges = 'model_data/nicer_v1.01_rmf_energymap.txt')
background = CustomBackground(num_params = 1, bounds = [(-3.0,-1.01)])
pulse = CustomPulse(tag = 'all',
num_params = 2,
bounds = [(-0.25, 0.75), (-0.25, 0.75)],
data = data,
instrument = m1.NICER,
background = background,
interstellar = None,
energies_per_interval = 0.5,
default_energy_spacing = 'logspace',
fast_rel_energies_per_interval = 0.5,
workspace_intervals = 1000,
adaptive_energies=False,
store=True,
epsrel = 1.0e-8,
epsilon = 1.0e-3,
sigmas = 10.0)
bounds = [(0.1, 1.0),
(1.0, 3.0),
(3.0 * gravradius(1.0), 16.0),
(0.001, math.pi/2.0)]
spacetime = CustomSpacetime(num_params = 4, bounds = bounds, S = 300.0)
bounds = [(0.001, math.pi - 0.001),
(0.001, math.pi/2.0 - 0.001),
(5.5, 6.5),
(0.001, math.pi - 0.001),
(0.001, math.pi/2.0 - 0.001),
(5.5, 6.5)]
spot = xpsi.Spots(num_params=(3,3), bounds=bounds,
symmetry=True,
hole=False,
cede=False,
concentric=False,
antipodal_symmetry=False,
sqrt_num_cells=32,
min_sqrt_num_cells=10,
max_sqrt_num_cells=64,
do_fast=False,
num_leaves=100,
num_rays=200)
photosphere = xpsi.Photosphere(num_params = 0, bounds = [],
tag = 'all', spot = spot, elsewhere = None)
star = xpsi.Star(spacetime = spacetime, photospheres = photosphere)
likelihood = xpsi.Likelihood(star = star, pulses = pulse, threads=1)
prior = CustomPrior(bounds=likelihood.bounds, spacetime=spacetime)
likelihood.prior = prior
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(111)
profile = ax.imshow(m1.NICER.matrix,
cmap = plt.cm.viridis,
rasterized = True)
plt.plot(np.sum(m1.NICER.matrix, axis=0))
print(likelihood)
%env GSL_RNG_SEED=0
q = [0.2, 1.4, 12.5, 1.25,
1.0, 0.075, 6.2,
math.pi - 1.0, 0.2, 6.0,
-2.0, 0.0, 0.025]
# v2
likelihood._theta = [0]*len(likelihood._theta)
likelihood.synthesise(q, require_source_counts=2.0e6, require_background_counts=2.0e6, directory='./data') # [s] SEED=0
expec = np.loadtxt('data/synthetic_expected_hreadable.dat')
np.sum(expec[:,2])
realisation = np.loadtxt('data/synthetic_realisation.dat', dtype=np.double)
realisation.shape
np.sum(realisation)
from matplotlib.ticker import MultipleLocator
from matplotlib import gridspec
from matplotlib import cm
fig = plt.figure(figsize = (10,10))
gs = gridspec.GridSpec(1, 2, width_ratios=[50,1])
ax = plt.subplot(gs[0])
ax_cb = plt.subplot(gs[1])
profile = ax.pcolormesh(pulse.phases,
pulse.logspace_energies_hires,
pulse.raw_signals[0] + interpolate_pulse(pulse.phases, pulse.phases, pulse.raw_signals[1], q[-1]),
cmap = plt.cm.viridis,
linewidth = 0,
rasterized = True)
profile.set_edgecolor('face')
ax.tick_params(which='major', colors='black', length=8)
ax.tick_params(which='minor', colors='black', length=4)
ax.xaxis.set_tick_params(which='both', width=1.5)
ax.yaxis.set_tick_params(which='both', width=1.5)
plt.setp(ax.spines.values(), linewidth=1.5, color='black')
ax.set_xlim([0.0, 1.0])
ax.set_yscale('log')
ax.set_ylim([pulse.logspace_energies_hires[0], pulse.logspace_energies_hires[-1]])
ax.set_ylabel(r'Energy [keV]')
[i.set_color("black") for i in ax.get_xticklabels()]
[i.set_color("black") for i in ax.get_yticklabels()]
ax.xaxis.set_major_locator(MultipleLocator(0.2))
ax.xaxis.set_minor_locator(MultipleLocator(0.05))
ax.set_xlabel(r'Phase')
cb = plt.colorbar(profile,
cax = ax_cb)
cb.set_label(label=r'Normalised specific photon flux', labelpad=25)
cb.solids.set_edgecolor('face')
plt.subplots_adjust(wspace = 0.025)
plt.plot(pulse.phases, np.sum(pulse.raw_signals[0], axis=0))
plt.plot(pulse.phases, np.sum(pulse.raw_signals[1], axis=0))
from matplotlib.ticker import MultipleLocator
from matplotlib import gridspec
from matplotlib import cm
fig = plt.figure(figsize = (10,10))
gs = gridspec.GridSpec(1, 2, width_ratios=[50,1])
ax = plt.subplot(gs[0])
ax_cb = plt.subplot(gs[1])
new_phases = np.linspace(0.0, 3.0, 3000)
interpolated = interpolate_pulse(new_phases, pulse.phases, pulse.pulse[0], q[-2])
interpolated += interpolate_pulse(new_phases, pulse.phases, pulse.pulse[1], q[-1])
profile = ax.pcolormesh(new_phases,
m1.NICER.channels,
interpolated,
cmap = plt.cm.viridis,
linewidth = 0,
rasterized = True)
profile.set_edgecolor('face')
ax.tick_params(which='major', colors='black', length=8)
ax.tick_params(which='minor', colors='black', length=4)
ax.xaxis.set_tick_params(which='both', width=1.5)
ax.yaxis.set_tick_params(which='both', width=1.5)
plt.setp(ax.spines.values(), linewidth=1.5, color='black')
ax.set_xlim([0.0, 3.0])
ax.set_yscale('log')
ax.set_ylabel(r'Channel')
[i.set_color("black") for i in ax.get_xticklabels()]
[i.set_color("black") for i in ax.get_yticklabels()]
ax.xaxis.set_major_locator(MultipleLocator(0.2))
ax.xaxis.set_minor_locator(MultipleLocator(0.05))
ax.set_xlabel(r'Phase')
cb = plt.colorbar(profile,
cax = ax_cb)
cb.set_label(label=r'counts/s', labelpad=25)
cb.solids.set_edgecolor('face')
plt.subplots_adjust(wspace = 0.025)
from matplotlib.ticker import MultipleLocator
from matplotlib import gridspec
from matplotlib import cm
fig = plt.figure(figsize = (10,10))
gs = gridspec.GridSpec(1, 2, width_ratios=[50,1])
ax = plt.subplot(gs[0])
ax_cb = plt.subplot(gs[1])
profile = ax.pcolormesh(data.phases,
m1.NICER.channels,
realisation,
cmap = plt.cm.viridis,
linewidth = 0,
rasterized = True)
profile.set_edgecolor('face')
ax.tick_params(which='major', colors='black', length=8)
ax.tick_params(which='minor', colors='black', length=4)
ax.xaxis.set_tick_params(which='both', width=1.5)
ax.yaxis.set_tick_params(which='both', width=1.5)
plt.setp(ax.spines.values(), linewidth=1.5, color='black')
ax.set_xlim([0.0, 1.0])
ax.set_yscale('log')
ax.set_ylabel(r'Channel')
[i.set_color("black") for i in ax.get_xticklabels()]
[i.set_color("black") for i in ax.get_yticklabels()]
ax.xaxis.set_major_locator(MultipleLocator(0.2))
ax.xaxis.set_minor_locator(MultipleLocator(0.05))
ax.set_xlabel(r'Phase')
cb = plt.colorbar(profile,
cax = ax_cb)
cb.set_label(label=r'counts/s', labelpad=25)
cb.solids.set_edgecolor('face')
plt.subplots_adjust(wspace = 0.025)
plt.plot(m1.NICER.channels, np.sum(realisation, axis=1))
plt.gca().set_yscale('log')
plt.plot(m1.NICER.channels, np.sum(pulse.pulse[0] + interpolate_pulse(pulse.phases, pulse.phases, pulse.pulse[1], q[-1]), axis=1))
plt.gca().set_yscale('log')
fig = plt.figure(figsize=(15,15))
profile = plt.pcolormesh(spot._super_cellArea,
cmap = plt.cm.magma,
linewidth = 0,
rasterized = True)
fig = plt.figure(figsize=(15,15))
profile = plt.pcolormesh(spot._Spots__super_cellArea,
cmap = plt.cm.magma,
linewidth = 0,
rasterized = True)
```
<a href="https://colab.research.google.com/github/ayulockin/Explore-NFNet/blob/main/Train_NF_ResNet_on_Cifar_10_using_PyTorch_Lightning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## ⚙️ Imports and Setups
```
%%capture
# Install pytorch lighting
!pip install pytorch-lightning --quiet
# Install weights and biases
!pip install wandb --quiet
# Install patool to unrar dataset file
!pip install patool
!git clone https://github.com/rwightman/pytorch-image-models
# regular imports
import sys
sys.path.append("pytorch-image-models")
import os
import re
import numpy as np
import patoolib
# pytorch related imports
import torch
from torch import nn
from torch.nn import functional as F
import torchvision.models as models
from torchvision import transforms
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import ImageFolder
from torchvision.datasets.utils import download_url
# import for nfnet
import timm
# lightning related imports
import pytorch_lightning as pl
from pytorch_lightning.metrics.functional import accuracy
from pytorch_lightning.callbacks import Callback
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.callbacks import ModelCheckpoint
# sklearn related imports
from sklearn.metrics import precision_recall_curve
from sklearn.preprocessing import label_binarize
# import wandb and login
import wandb
wandb.login()
```
## 🎨 Using DataModules - `Caltech101DataModule`
DataModules are a way of decoupling data-related hooks from the `LightningModule` so you can develop dataset-agnostic models.
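The hook lifecycle is what makes this work: `prepare_data` does one-time work such as downloads, `setup` builds the splits per process, and the `*_dataloader` hooks hand batches to the trainer. A framework-free sketch of that ordering (the method names mirror Lightning's hooks; the data and split sizes are stand-ins):

```python
import random

class ToyDataModule:
    """Plain-Python imitation of the LightningDataModule hook order."""
    def __init__(self, batch_size):
        self.batch_size = batch_size

    def prepare_data(self):
        # one-time work: downloading/extracting an archive would go here
        self.raw = list(range(100))

    def setup(self, stage=None):
        # per-process work: build the train/val/test splits
        data = self.raw[:]
        random.Random(0).shuffle(data)
        self.train, self.val, self.test = data[:80], data[80:90], data[90:]

    def train_dataloader(self):
        # yield mini-batches, like DataLoader(..., batch_size=...)
        for i in range(0, len(self.train), self.batch_size):
            yield self.train[i:i + self.batch_size]

dm = ToyDataModule(batch_size=32)
dm.prepare_data()
dm.setup()
batches = list(dm.train_dataloader())
print(len(batches), len(dm.val), len(dm.test))
```

Because the model never touches `prepare_data`/`setup` directly, swapping Caltech101 for another dataset only means writing a new DataModule.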
```
class Caltech101DataModule(pl.LightningDataModule):
def __init__(self, batch_size, data_dir: str = './'):
super().__init__()
self.data_dir = data_dir
self.batch_size = batch_size
# Augmentation policy
self.augmentation = transforms.Compose([
transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),
transforms.RandomRotation(degrees=15),
transforms.RandomHorizontalFlip(),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])
])
self.transform = transforms.Compose([
transforms.Resize(size=256),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])
])
self.num_classes = 102
def prepare_data(self):
# source: https://figshare.com/articles/dataset/Caltech101_Image_Dataset/7007090
url = 'https://s3-eu-west-1.amazonaws.com/pfigshare-u-files/12855005/Caltech101ImageDataset.rar'
# download
download_url(url, self.data_dir)
# extract
patoolib.extract_archive("Caltech101ImageDataset.rar", outdir=self.data_dir)
def setup(self, stage=None):
# build dataset
caltech_dataset = ImageFolder('Caltech101')
# split dataset
self.train, self.val, self.test = random_split(caltech_dataset, [6500, 1000, 1645])
self.train.dataset.transform = self.augmentation
self.val.dataset.transform = self.transform
self.test.dataset.transform = self.transform
def train_dataloader(self):
return DataLoader(self.train, batch_size=self.batch_size, shuffle=True)
def val_dataloader(self):
return DataLoader(self.val, batch_size=self.batch_size)
def test_dataloader(self):
return DataLoader(self.test, batch_size=self.batch_size)
```
## 📲 Callbacks
#### 🚏 Earlystopping
```
early_stop_callback = EarlyStopping(
monitor='val_loss',
patience=3,
verbose=False,
mode='min'
)
```
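Under the hood, patience-based early stopping just counts how many consecutive validation checks have failed to improve on the best value seen so far, and signals a stop once the count reaches `patience`. A minimal sketch of that logic (the loss sequence below is made up for illustration):

```python
class SimpleEarlyStopping:
    """Stop after `patience` consecutive non-improving validation losses (mode='min')."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss   # improvement: remember it and reset the counter
            self.wait = 0
            return False
        self.wait += 1
        return self.wait >= self.patience

stopper = SimpleEarlyStopping(patience=3)
losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.65]   # plateaus after the third check
stopped_at = next(i for i, l in enumerate(losses) if stopper.step(l))
print(stopped_at)
```

This is also why `mode='min'` matters above: with `mode='max'` the same counter would track a metric you want to increase, such as accuracy.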
#### 🛃 Custom Callback - `ImagePredictionLogger`
```
class ImagePredictionLogger(Callback):
def __init__(self, val_samples, num_samples=32):
super().__init__()
self.num_samples = num_samples
self.val_imgs, self.val_labels = val_samples
def on_validation_epoch_end(self, trainer, pl_module):
val_imgs = self.val_imgs.to(device=pl_module.device)
val_labels = self.val_labels.to(device=pl_module.device)
logits = pl_module(val_imgs)
preds = torch.argmax(logits, -1)
trainer.logger.experiment.log({
"examples":[wandb.Image(x, caption=f"Pred:{pred}, Label:{y}")
for x, pred, y in zip(val_imgs[:self.num_samples],
preds[:self.num_samples],
val_labels[:self.num_samples])]
})
```
#### 💾 Model Checkpoint Callback
```
MODEL_CKPT_PATH = 'model/'
MODEL_CKPT = 'model/model-{epoch:02d}-{val_loss:.2f}'
checkpoint_callback = ModelCheckpoint(
monitor='val_loss',
filename=MODEL_CKPT,
save_top_k=3,
mode='min')
```
## 🎺 Define The Model
```
class LitModel(pl.LightningModule):
def __init__(self, input_shape, num_classes, learning_rate=2e-4):
super().__init__()
# log hyperparameters
self.save_hyperparameters()
self.learning_rate = learning_rate
self.dim = input_shape
self.num_classes = num_classes
self.classifier = timm.create_model('nf_resnet50', pretrained=False, num_classes=num_classes)
def forward(self, x):
# x = self._forward_features(x)
# x = x.view(x.size(0), -1)
x = F.log_softmax(self.classifier(x), dim=1)
return x
# logic for a single training step
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# training metrics
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log('train_loss', loss, on_step=True, on_epoch=True, logger=True)
self.log('train_acc', acc, on_step=True, on_epoch=True, logger=True)
return loss
# logic for a single validation step
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# validation metrics
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log('val_loss', loss, prog_bar=True)
self.log('val_acc', acc, prog_bar=True)
return loss
# logic for a single testing step
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# validation metrics
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log('test_loss', loss, prog_bar=True)
self.log('test_acc', acc, prog_bar=True)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
```
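The model above returns `log_softmax` outputs and the steps train with `F.nll_loss`; the two compose into ordinary cross-entropy. A dependency-free check of that identity on one example (the logits are arbitrary illustrative values):

```python
import math

def log_softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return [z - lse for z in logits]

def nll(log_probs, target):
    # negative log-likelihood of the target class
    return -log_probs[target]

logits, target = [2.0, 0.5, -1.0], 0
loss = nll(log_softmax(logits), target)

# direct cross-entropy: -log(softmax(logits)[target])
probs = [math.exp(z) for z in logits]
probs = [p / sum(probs) for p in probs]
direct = -math.log(probs[target])
print(abs(loss - direct) < 1e-9)
```

This is why the `forward` pass applies `log_softmax` rather than `softmax`: `nll_loss` expects log-probabilities, and the combination is equivalent to `F.cross_entropy` on raw logits.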
## ⚡ Train and Evaluate The Model
```
# Init our data pipeline
BATCH_SIZE = 32
dm = Caltech101DataModule(batch_size=BATCH_SIZE)
# To access the x_dataloader we need to call prepare_data and setup.
dm.prepare_data()
dm.setup()
# Samples required by the custom ImagePredictionLogger callback to log image predictions.
val_samples = next(iter(dm.val_dataloader()))
val_imgs, val_labels = val_samples[0], val_samples[1]
val_imgs.shape, val_labels.shape
# Init our model
model = LitModel((3,224,224), 102)
# Initialize wandb logger
wandb_logger = WandbLogger(project='nfnet', job_type='train-nf-resnet')
# Initialize a trainer
trainer = pl.Trainer(max_epochs=50,
progress_bar_refresh_rate=20,
gpus=1,
logger=wandb_logger,
callbacks=[early_stop_callback,
ImagePredictionLogger(val_samples)],
checkpoint_callback=checkpoint_callback)
# Train the model ⚡🚅⚡
trainer.fit(model, dm)
# Evaluate the model on the held out test set ⚡⚡
trainer.test()
# Close wandb run
wandb.finish()
```
```
#!pip install gym-retro
import gym
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np
import evolvepy as ep
import matplotlib.pyplot as plt
from evolvepy.integrations.tf_keras import ProcessTFKerasEvaluator, EvolutionaryModel
import numpy as np
w1 = np.random.rand(240, 24)
b1 = np.random.rand(240)
w2 = np.random.rand(140, 240)
b2 = np.random.rand(140)
w3 = np.random.rand(40, 140)
b3 = np.random.rand(40)
w4 = np.random.rand(4,40)
b4 = np.random.rand(4)
import numba
layers = [[w1, b1], [w2,b2], [w3,b3], [w4,b4]]
for i in range(4):
for j in range(2):
layers[i][j] = layers[i][j].astype(np.float32)
@numba.njit
def func():
    x = np.empty(24, dtype=np.float32)
    result = x
    for i in range(len(layers) - 1):
        layer = layers[i]
        result = (layer[0] @ result) + layer[1]  # affine transform
        result = (np.abs(result) + result) / 2  # ReLU
    layer = layers[-1]
    result = (layer[0] @ result) + layer[1]
    return 1 / (1 + np.exp(-result))  # sigmoid output
func()
%timeit func()
env = gym.make("BipedalWalker-v3")
model = EvolutionaryModel([keras.layers.Dense(240, activation="relu", input_shape=(24,)),
keras.layers.Dense(140, activation="relu"),
keras.layers.Dense(40, activation="relu"),
keras.layers.Dense(4, activation="sigmoid")])
from rl_utils import BipedalWalkerFitnessFunction
evaluator = ProcessTFKerasEvaluator(BipedalWalkerFitnessFunction, model, n_process=1)
multiple_evaluation = ep.evaluator.MultipleEvaluation(evaluator, 5, discard_max=True, discard_min=True)
first = ep.generator.Layer()
combine = ep.generator.CombineLayer(ep.generator.selection.tournament, ep.generator.crossover.one_point)
mutation = ep.generator.mutation.NumericMutationLayer(ep.generator.mutation.sum_mutation, 1.0, 0.5, (-0.5, 0.5))
filter0 = ep.generator.FilterFirsts(47)
sort = ep.generator.Sort()
filter1 = ep.generator.FilterFirsts(3)
concat = ep.generator.Concatenate()
first.next = combine
combine.next = mutation
mutation.next = filter0
filter0.next = concat
first.next = sort
sort.next = filter1
filter1.next = concat
generator = ep.generator.Generator(first_layer=first, last_layer=concat, descriptor=model.descriptor)
from evolvepy.integrations.wandb import WandbLogger
wandb_log = WandbLogger("BipedalWalker", "EvolvePy Example")
evolver = ep.Evolver(generator, multiple_evaluation, 50) #[ wandb_log])
hist, last_pop = evolver.evolve(1)
import matplotlib.pyplot as plt
plt.plot(hist.max(axis=1))
from evolvepy.integrations.tf_keras import transfer_weights
best = last_pop[np.argmax(hist[-1])]
transfer_weights(best, model)
from gym.wrappers import Monitor
env = gym.make("FetchReach-v1")
env = Monitor(env, "./video", force=True)
fitness_function([model])
env.close()
env = gym.make("Humanoid-v2")
obs = env.reset()
env.action_space
obs.shape
img = env.render(mode="rgb_array")
import matplotlib.pyplot as plt
plt.imshow(img)
plt.show()
env.action_space
from evolvepy.integrations.tf_keras import ProcessTFKerasFitnessFunction, EvolutionaryModel
import evolvepy as ep
model = EvolutionaryModel([keras.layers.Dense(1, input_shape=(1,))])
class TestFunction(ProcessTFKerasFitnessFunction):
def evaluate(self, model: keras.Model) -> np.ndarray:
x = np.zeros((1,1))
return model(x)[0][0].numpy()
evaluator = ep.evaluator.ProcessEvaluator(TestFunction, args=model.get_config())
individuals = np.empty(1, model.descriptor.dtype)
for i in [0, 20, 78, 222]:
    best = np.load("wandb\\run-20220106_123149-2o9vemwl\\files\\best_individual" + str(i) + ".npy", allow_pickle=True).item()
    individuals = np.empty(1, model.descriptor.dtype)
    for name in model.descriptor.dtype.fields:
        size = model.descriptor.dtype.fields[name][0].shape[0]
        for j in range(size):  # use j to avoid shadowing the outer loop variable i
            individuals[0][name][j] = best["best_individual/" + name + "/" + str(j)]
    evaluator(individuals)
```
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # for plotting and visualizing data
#our dataset
fruits=pd.read_table('fruit_data_with_colors.txt')
```
We have loaded our dataset; now we will look at its first five rows to see how the data looks and which features it has.
```
#checking first five rows of our dataset
fruits.head()
# create a mapping from fruit label value to fruit name to make results easier to interpret
predct = dict(zip(fruits.fruit_label.unique(), fruits.fruit_name.unique()))
predct
```
The dataset has seven columns containing information about the fruits. In the first five rows only two fruits, i.e. apple and mandarin, are seen. Every fruit is described with four features: 1) mass, 2) width, 3) height, and 4) color score. Now we will check how many unique fruits are present in the data.
```
#checking how many unique fruit names are present in the dataset
fruits['fruit_name'].value_counts()
```
We can see that the dataset contains four unique fruits: apple with 19 entries, orange with 19 entries, lemon with 16 entries, and mandarin with 5 entries.
Now we will store each fruit's rows in its own dataframe.
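`value_counts` is essentially a frequency count over a column; plain Python's `collections.Counter` produces the same tallies. As an illustration, the counts just described can be reproduced on a stand-in list of labels:

```python
from collections import Counter

# stand-in for the fruit_name column, matching the counts reported above
fruit_names = ["apple"] * 19 + ["orange"] * 19 + ["lemon"] * 16 + ["mandarin"] * 5
counts = Counter(fruit_names)
print(counts.most_common())  # (name, count) pairs, most frequent first
```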
```
apple_data=fruits[fruits['fruit_name']=='apple']
orange_data=fruits[fruits['fruit_name']=='orange']
lemon_data=fruits[fruits['fruit_name']=='lemon']
mandarin_data=fruits[fruits['fruit_name']=='mandarin']
apple_data.head()
mandarin_data.head()
orange_data.head()
lemon_data.head()
```
Looking at the data above, every fruit has a fruit_label: 1 for apple, 2 for mandarin, 3 for orange, and 4 for lemon. Now we will visualize this data with plots for further exploration.
```
plt.scatter(fruits['width'],fruits['height'])
plt.scatter(fruits['mass'],fruits['color_score'])
plt.plot(fruits['height'],label='Height')
plt.plot(fruits['width'],label='Width')
plt.legend()
```
Now we will use a K-Nearest Neighbors classifier to predict new records from this data. For this we will split the dataset into train and test sets. First we import what we need from the sklearn library.
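What `train_test_split` does can be sketched in plain Python: shuffle the row indices with a fixed seed, then carve off a hold-out fraction as the test set (sklearn's default test share is 25%; the toy data below is illustrative):

```python
import random

def simple_train_test_split(X, y, test_size=0.25, seed=0):
    """Shuffle indices, then slice off a test_size fraction as the test set."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    n_test = int(round(len(X) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    take = lambda data, ids: [data[i] for i in ids]
    return take(X, train_idx), take(X, test_idx), take(y, train_idx), take(y, test_idx)

X = [[i, i + 1] for i in range(20)]
y = [i % 4 for i in range(20)]
X_tr, X_te, y_tr, y_te = simple_train_test_split(X, y)
print(len(X_tr), len(X_te))
```

Fixing the seed (like `random_state=0` in the notebook) makes the split reproducible across runs.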
```
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
X=fruits[['mass','width','height']]
Y=fruits['fruit_label']
X_train,X_test,y_train,y_test=train_test_split(X,Y,random_state=0)
X_train.describe()
X_test.describe()
knn=KNeighborsClassifier()
knn.fit(X_train,y_train)
```
We can check the accuracy of our classifier
```
knn.score(X_test,y_test)
```
Now we can make predictions with new data as following:
```
#parameters of following function are mass,width and height
#example1
prediction1=knn.predict([[100, 6.3, 8]])
predct[prediction1[0]]
#example2
prediction2=knn.predict([[300, 7, 10]])
predct[prediction2[0]]
```
Yes, our model is running successfully and making accurate predictions.
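To make the mechanics of the classifier concrete, KNN can be sketched from scratch: compute the distance from the query to every training point, keep the k nearest, and take a majority vote among their labels. A pure-Python version on toy 2-D data (the points and k=3 are illustrative, not the fruit data):

```python
import math
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    """Majority vote among the k training points closest (Euclidean) to `query`."""
    dists = sorted((math.dist(x, query), label) for x, label in zip(X_train, y_train))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

X_train = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
y_train = ["small", "small", "small", "big", "big", "big"]
print(knn_predict(X_train, y_train, (2, 2)))
print(knn_predict(X_train, y_train, (8, 7)))
```

Because the vote is distance-based, features on very different scales (like mass versus width here) can dominate the distance, which is why feature scaling often improves KNN.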
# What is RDD?
RDD stands for “Resilient Distributed Dataset”. It is the fundamental data structure of Apache Spark. An RDD in Apache Spark is an immutable collection of objects that is computed on the different nodes of the cluster.
Decomposing the name RDD:
1. Resilient, i.e. fault-tolerant: with the help of the RDD lineage graph (DAG), Spark can recompute missing or damaged partitions after node failures.
2. Distributed, since the data resides on multiple nodes.
3. Dataset, representing the records of the data you work with. The user can load the dataset externally, e.g. from a JSON file, CSV file, text file, or a database via JDBC, with no specific data structure required.
Hence, every RDD is logically partitioned across many servers so that it can be computed on different nodes of the cluster. RDDs are fault tolerant, i.e. they possess self-recovery in the case of failure.
There are three ways to create RDDs in Spark: from data in stable storage, from other RDDs, and by parallelizing an already existing collection in the driver program. One can also operate on Spark RDDs in parallel with a low-level API that offers transformations and actions. We will study these Spark RDD operations later in this section.
Spark RDDs can also be cached and manually partitioned. Caching is beneficial when we use an RDD several times, and manual partitioning is important to balance partitions correctly. Generally, smaller partitions allow distributing RDD data more equally among more executors, while fewer, larger partitions keep scheduling overhead low.
Programmers can also call a persist method to indicate which RDDs they want to reuse in future operations. Spark keeps persistent RDDs in memory by default, but it can spill them to disk if there is not enough RAM. Users can also request other persistence strategies, such as storing the RDD only on disk or replicating it across machines, through flags to persist.
# Contents:
a.Creating RDD
b.Basic Operations:
1. .map(...) applies a function to each element of the RDD and returns the transformed elements
2. .filter(...) selects the elements of the dataset that fit specified criteria
3. .flatMap(...) works similarly to .map(...) but returns a flattened result instead of a list
4. .distinct(...) returns the distinct values in a specified column
5. .sample(...) returns a randomized sample from the dataset
6. .take(n) returns the first n elements of the RDD
7. .collect(...) returns all elements of the RDD
8. .reduce(...) reduces the elements of an RDD using a specified method
9. .count(...) returns the number of elements in the RDD
10. .first(...) returns the first element of the RDD
11. .foreach(...) applies the same function to each element of the RDD in an iterative way
12. .sum() returns the sum of all elements in the RDD
13. .stats() returns summary statistics of the RDD
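Most of the transformations listed above have direct pure-Python analogues, which is a useful mental model before distributing them: `map`/`filter` are the builtins, `flatMap` is map-then-flatten, and `reduce` comes from `functools`. A local sketch over a small list (the data is illustrative; Spark applies the same operations per partition, in parallel):

```python
from functools import reduce

data = [1, 2, 2, 3, 4, 4, 5]

squared   = list(map(lambda x: x * x, data))          # .map(...)
evens     = list(filter(lambda x: x % 2 == 0, data))  # .filter(...)
flat      = [y for x in data for y in (x, x * x)]     # .flatMap(...)
distinct  = sorted(set(data))                         # .distinct()
total     = reduce(lambda x, y: x + y, data)          # .reduce(...)
first_two = data[:2]                                  # .take(2)

print(total, distinct, first_two)
```

The key difference in Spark is laziness: transformations like `map` only build the lineage graph, and nothing executes until an action such as `collect`, `count`, or `reduce` is called.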
# Importing Libraries
```
import pyspark
from pyspark import SparkContext
import numpy as np
import pandas as pd
sc=SparkContext("local[*]")
```
# A. Creating RDD
```
lst=np.random.randint(0,10,20)
print(lst)
```
### What did we just do? Did we create an RDD? What is an RDD?

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a **fault-tolerant collection of elements that can be operated on in parallel**. SparkContext manages the distributed data over the worker nodes through the cluster manager.
There are two ways to create RDDs:
* parallelizing an existing collection in your driver program, or
* referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.
We created a RDD using the former approach
# `A` is a pyspark RDD object, we cannot access the elements directly
```
A=sc.parallelize(lst)
type(A)
A
```
### The opposite of parallelization - `collect` brings all the distributed elements back and returns them to the head node. <br><br>Note - this is a slow process; do not use it often.
```
A.collect()
```
### How were the partitions created? Use `glom` method
```
A.glom().collect()
```
# B. Transformations
### 1. `map` function
```
B=A.map(lambda x:x*x)
B.collect()
```
`map` operation with regular Python function
```
def cube(x):
    return x*x*x
C=A.map(cube)
C.collect()
```
### 2. `filter` function
```
A.filter(lambda x:x%4==0).collect()
```
### 3. `flatmap` function
```
D=A.flatMap(lambda x:(x,x*x))
```
### `flatmap` method returns a new RDD by first applying a function to all elements of this RDD, and then flattening the results
```
D.collect()
```
### 4. `distinct` function
### The method `RDD.distinct()` returns a new dataset that contains the distinct elements of the source dataset.
```
A.distinct().collect()
```
### 5. `sample` function
## Sampling an RDD
* RDDs are often very large.
* **Aggregates, such as averages, can be approximated efficiently by using a sample.** This often comes in handy for operations on extremely large datasets, where a sample can tell a lot about the pattern and descriptive statistics of the data.
* Sampling is done in parallel and requires limited computation.
The method `RDD.sample(withReplacement,p)` generates a sample of the elements of the RDD. where
- `withReplacement` is a boolean flag indicating whether or not an element in the RDD can be sampled more than once.
- `p` is the probability of accepting each element into the sample. Note that as the sampling is performed independently in each partition, the number of elements in the sample changes from sample to sample.
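Without Spark, this per-element Bernoulli sampling is easy to imitate: keep each element independently with probability `p`, so the sample size itself varies from run to run unless the random generator is seeded. A local sketch (the seed and data are illustrative):

```python
import random

def bernoulli_sample(data, p, seed=None):
    """Keep each element independently with probability p (without replacement)."""
    rng = random.Random(seed)
    return [x for x in data if rng.random() < p]

data = list(range(20))
s1 = bernoulli_sample(data, p=5 / 20, seed=42)
s2 = bernoulli_sample(data, p=5 / 20, seed=42)
print(s1 == s2, len(s1))  # same seed -> identical sample; size is only ~p*len(data) on average
```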
```
m=5
n=20
print('sample1=',A.sample(False,m/n).collect())
print('sample2=',A.sample(False,m/n).collect())
```
### 6. `take` function
```
A.take(1)
```
### 7. `collect` function
```
A.collect()
```
### 8. `reduce` function
```
A.reduce(lambda x,y:x+y)
```
### 9. `count` function
```
A.count()
```
### 10. `first` function
```
A.first()
```
### 11. `foreach` function
```
def lm(x):
print(x)
A.foreach(lm)
```
### 12. `sum` function
```
A.sum()
```
### 13. `stats` function
```
A.stats()
# sc.stop()
```
#### 📚 This is a [Geart Fork](https://github.com/geart891/RLabClone)! [Latest version](https://colab.research.google.com/github/geart891/RLabClone/blob/master/RcloneLab.ipynb)! [Backup version](https://colab.research.google.com/github/geart891/RLabClone/blob/master/RcloneLab_bak.ipynb)! [Black Pearl Template](https://colab.research.google.com/github/geart891/RLabClone/blob/master/blackpeal_template.ipynb)!
# <img src='https://geart891.github.io/RLabClone/img/title_rclonelab.svg' height="45" alt="RcloneLab"/>
```
# ============================= FORM ============================= #
# @markdown #### ⬅️ Execute rClone
Mode = "Copy" # @param ["Move", "Copy", "Sync", "Verify", "Dedupe", "Clean Empty Dirs", "Empty Trash"]
Source = "" # @param {type:"string"}
Destination = "" # @param {type:"string"}
Extra_Arguments = "" # @param {type:"string"}
COPY_SHARED_FILES = False # @param{type: "boolean"}
Compare = "Size & Checksum"
TRANSFERS, CHECKERS = 20, 20
THROTTLE_TPS = True
BRIDGE_TRANSFER = False # @param{type: "boolean"}
FAST_LIST = False # @param{type: "boolean"}
OPTIMIZE_GDRIVE = True
SIMPLE_LOG = True
RECORD_LOGFILE = False # @param{type: "boolean"}
SKIP_NEWER_FILE = False
SKIP_EXISTED = False
SKIP_UPDATE_MODTIME = False
ONE_FILE_SYSTEM = False
LOG_LEVEL = "DEBUG"
SYNC_MODE = "Delete after transfering"
SYNC_TRACK_RENAME = True
DEDUPE_MODE = "Largest"
USE_TRASH = True
DRY_RUN = False # @param{type: "boolean"}
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
from datetime import datetime as _dt
from rlab_utils import (
displayOutput,
checkAvailable,
runSh,
prepareSession,
PATH_RClone_Config,
accessSettingFile,
memGiB,
)
def populateActionArg():
if Mode == "Copy":
actionArg = "copy"
elif Mode == "Sync":
actionArg = "sync"
elif Mode == "Verify":
actionArg = "check"
elif Mode == "Dedupe":
actionArg = "dedupe largest"
elif Mode == "Clean Empty Dirs":
actionArg = "rmdirs"
elif Mode == "Empty Trash":
actionArg = "delete"
else:
actionArg = "move"
return actionArg
def populateCompareArg():
if Compare == "Mod-Time":
compareArg = "--ignore-size"
elif Compare == "Size":
compareArg = "--size-only"
elif Compare == "Checksum":
compareArg = "-c --ignore-size"
else:
compareArg = "-c"
return compareArg
def populateOptimizeGDriveArg():
return (
"--buffer-size 256M \
--drive-chunk-size 256M \
--drive-upload-cutoff 256M \
--drive-acknowledge-abuse \
--drive-keep-revision-forever"
if OPTIMIZE_GDRIVE
else "--buffer-size 128M"
)
def populateGDriveCopyArg():
if BRIDGE_TRANSFER and memGiB() < 13:
global TRANSFERS, CHECKERS
TRANSFERS, CHECKERS = 10, 80
else:
pass
return "--disable copy" if BRIDGE_TRANSFER else "--drive-server-side-across-configs"
def populateStatsArg():
statsArg = "--stats-one-line --stats=5s" if SIMPLE_LOG else "--stats=5s -P"
if LOG_LEVEL == "INFO":
statsArg += " --log-level INFO"
elif LOG_LEVEL == "ERROR":
statsArg += " --log-level ERROR"
elif LOG_LEVEL == "DEBUG":
statsArg += " --log-level DEBUG"
elif LOG_LEVEL != "OFF":
statsArg += " -v" if SIMPLE_LOG else " -vv"
return statsArg
def populateSyncModeArg():
if Mode != "Sync":
return ""
elif SYNC_MODE == "Delete before transfering":
syncModeArg = "--delete-before"
elif SYNC_MODE == "Delete after transfering":
syncModeArg = "--delete-after"
else:
syncModeArg = "--delete-during"
if SYNC_TRACK_RENAME:
syncModeArg += " --track-renames"
return syncModeArg
def populateDedupeModeArg():
if DEDUPE_MODE == "Interactive":
dedupeModeArg = "--dedupe-mode interactive"
elif DEDUPE_MODE == "Skip":
dedupeModeArg = "--dedupe-mode skip"
elif DEDUPE_MODE == "First":
dedupeModeArg = "--dedupe-mode first"
elif DEDUPE_MODE == "Newest":
dedupeModeArg = "--dedupe-mode newest"
elif DEDUPE_MODE == "Oldest":
dedupeModeArg = "--dedupe-mode oldest"
elif DEDUPE_MODE == "Rename":
dedupeModeArg = "--dedupe-mode rename"
else:
dedupeModeArg = "--dedupe-mode largest"
return dedupeModeArg
def generateCmd():
sharedFilesArgs = (
"--drive-shared-with-me --files-from /content/upload.txt --no-traverse"
if COPY_SHARED_FILES
else ""
)
logFileArg = "--log-file /content/rclone_log.txt -vv -P"
args = [
"rclone",
f"--config {PATH_RClone_Config}/rclone.conf",
'--user-agent "Mozilla"',
populateActionArg(),
f'"{Source}"',
f'"{Destination}"' if Mode in ("Move", "Copy", "Sync") else "",
f"--transfers {str(TRANSFERS)}",
f"--checkers {str(CHECKERS)}",
]
if Mode == "Verify":
args.append("--one-way")
elif Mode == "Empty Trash":
args.append("--drive-trashed-only --drive-use-trash=false")
else:
args.extend(
[
populateGDriveCopyArg(),
populateSyncModeArg(),
populateCompareArg(),
populateOptimizeGDriveArg(),
"-u" if SKIP_NEWER_FILE else "",
"--ignore-existing" if SKIP_EXISTED else "",
"--no-update-modtime" if SKIP_UPDATE_MODTIME else "",
"--one-file-system" if ONE_FILE_SYSTEM else "",
"--tpslimit 95 --tpslimit-burst 40" if THROTTLE_TPS else "",
"--fast-list" if FAST_LIST else "",
"--delete-empty-src-dirs" if Mode == "Move" else "",
]
)
args.extend(
[
"-n" if DRY_RUN else "",
populateStatsArg() if not RECORD_LOGFILE else logFileArg,
sharedFilesArgs,
Extra_Arguments,
]
)
return args
def executeRclone():
prepareSession()
if Source.strip() == "":
displayOutput("❌ The Source field is empty.")
return
if checkAvailable("/content/rclone_log.txt"):
if not checkAvailable("/content/logfiles"):
runSh("mkdir -p -m 666 /content/logfiles")
job = accessSettingFile("job.txt")
runSh(
f'mv /content/rclone_log.txt /content/logfiles/{job["title"]}_{job["status"]}_logfile.txt'
)
onGoingJob = {
"title": f'{Mode}_{Source}_{Destination}_{_dt.now().strftime("%a-%H-%M-%S")}',
"status": "ongoing",
}
accessSettingFile("job.txt", onGoingJob)
cmd = " ".join(generateCmd())
runSh(cmd, output=True) # nosec
displayOutput(Mode, "success")
onGoingJob["status"] = "finished"
accessSettingFile("job.txt", onGoingJob)
executeRclone()
# ============================= FORM ============================= #
# @markdown #### ⬅️ Extract Files
MODE = "7Z" # @param ["UNZIP", "UNTAR", "UNRAR", "7Z"]
PATH_TO_FILE = "" # @param {type:"string"}
ARCHIVE_PASSWORD = "" # @param {type:"string"} #nosec
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
from rlab_utils import checkAvailable, runSh
def extractFiles():
extractPath = "/content/extract"
if not checkAvailable("/content/extract"):
runSh("mkdir -p -m 777 /content/extract")
if MODE == "UNZIP":
runSh(f'unzip -P "{ARCHIVE_PASSWORD}" "{PATH_TO_FILE}" -d "{extractPath}"', output=True)  # unzip takes the password via -P; -p extracts to stdout
elif MODE == "UNRAR":
runSh(f'unrar x -p"{ARCHIVE_PASSWORD}" -o+ "{PATH_TO_FILE}" "{extractPath}/"', output=True)  # unrar expects the password with no space after -p
elif MODE == "UNTAR":
runSh(f'tar -C "{extractPath}" -xvf "{PATH_TO_FILE}"', output=True)
else:
runSh(f'7z x "{PATH_TO_FILE}" -o{extractPath} -p{ARCHIVE_PASSWORD}', output=True)
extractFiles()
# ============================= FORM ============================= #
# @markdown #### ⬅️ Execute Upload Local File
MODE = "RCONFIG" # @param ['UTILS', 'RCONFIG', "GENERATELIST"]
REMOTE = "mnc" # @param {type:"string"}
QUERY_PATTERN = "" # @param {type:"string"}
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
import importlib, rlab_utils
from google.colab import files # pylint: disable=import-error #nosec
from rlab_utils import checkAvailable, runSh, PATH_RClone_Config, prepareSession
def generateUploadList():
prepareSession()
if checkAvailable("/content/upload.txt"):
runSh("rm -f upload.txt")
runSh(
f"rclone --config {PATH_RClone_Config}/rclone.conf lsf {REMOTE}: --include '{QUERY_PATTERN}' --drive-shared-with-me --files-only --max-depth 1 > /content/upload.txt",
shell=True, # nosec
)
def uploadLocalFiles():
if MODE == "UTILS":
filePath = "/root/.ipython/rlab_utils.py"
elif MODE == "RCONFIG":
filePath = f"{PATH_RClone_Config}/rclone.conf"
else:
pass
try:
if checkAvailable(filePath):
runSh(f"rm -f {filePath}")
print("Select file from your computer.\n")
uploadedFile = files.upload()
fileNameDictKeys = uploadedFile.keys()
fileNo = len(fileNameDictKeys)
if fileNo > 1:
for fn in fileNameDictKeys:
runSh(f'rm -f "/content/{fn}"')
return print("\nPlease only upload a single config file.")
elif fileNo == 0:
return print("\nFile upload cancelled.")
elif fileNo == 1:
for fn in fileNameDictKeys:
if checkAvailable(f"/content/{fn}"):
runSh(f'mv -f "/content/{fn}" {filePath}')
runSh(f"chmod 666 {filePath}")
runSh(f'rm -f "/content/{fn}"')
importlib.reload(rlab_utils)
print("\nUpload completed.")
return
else:
print("\nNo file")
return
except:
return print("\nUpload process error.")
if MODE == "GENERATELIST":
generateUploadList()
else:
uploadLocalFiles()
# ============================= FORM ============================= #
# @markdown #### ⬅️ SYNC Backups
FAST_LIST = True
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
from rlab_utils import (
runSh,
prepareSession,
PATH_RClone_Config,
)
def generateCmd(src, dst):
block=f"{'':=<117}"
title=f"""+{f'Now Synchronizing... "{src}" > "{dst}" Fast List : {"ON" if FAST_LIST else "OFF"}':^{len(block)-2}}+"""
print(f"{block}\n{title}\n{block}")
cmd = f'rclone sync "{src}" "{dst}" --config {PATH_RClone_Config}/rclone.conf {"--fast-list" if FAST_LIST else ""} --user-agent "Mozilla" --transfers 20 --checkers 20 --drive-server-side-across-configs -c --buffer-size 256M --drive-chunk-size 256M --drive-upload-cutoff 256M --drive-acknowledge-abuse --drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 --stats-one-line --stats=5s -v --delete-after --track-renames'
return cmd
def executeSync():
prepareSession()
runSh(generateCmd("tdTdnMov:Movies","tdMovRa4:"), output=True)
runSh(generateCmd("tdTdnTvs:TV Shows","tdTvsRa5:"), output=True)
runSh(generateCmd("tdTdnRa6:Games","tdGamRa7:"), output=True)
runSh(generateCmd("tdTdnRa8:Software","tdSofRa9:"), output=True)
runSh(generateCmd("tdTdnR11:Tutorials","tdTutR12:"), output=True)
runSh(generateCmd("tdTdnR13:Anime","tdAniR14:"), output=True)
# runSh(generateCmd("tdTdn14:Music","tdMusR15:"), output=True)
executeSync()
# ============================= FORM ============================= #
# @markdown #### ⬅️ Check VM's Status
Check_IP = True
Loop_Check = True #@param {type:"boolean"}
Loop_Interval = 720
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
import time, requests
from IPython.display import clear_output
from rlab_utils import runSh
sessionIP = f"\nYour Public IP: {requests.get('http://ip.42.pl/raw').text}:12121"
LOOP_Range = Loop_Interval if Loop_Check else 1
def monitorStatus():
for i in range(LOOP_Range):
clear_output(wait=True)
!top -bcn1 -w512
if Check_IP:
print(sessionIP)
print("====")
runSh("du -sh /content /root/.qBittorrent_temp/", output=True)
print("====")
time.sleep(5)
monitorStatus()
```
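The `generateCmd()` helper in the cell above leans on one pattern throughout: every optional rclone switch evaluates to either a flag string or `""`, and the final command line is the joined list. A minimal standalone sketch of that pattern, with made-up toggle values rather than the notebook's real config:

```
# Conditional-flag pattern: disabled options collapse to "", and filtering
# empties before joining keeps the command free of stray double spaces.
DRY_RUN, FAST_LIST = True, False  # hypothetical toggles
args = [
    "rclone", "copy", '"src:"', '"dst:"',
    "-n" if DRY_RUN else "",
    "--fast-list" if FAST_LIST else "",
]
cmd = " ".join(a for a in args if a)
print(cmd)  # rclone copy "src:" "dst:" -n
```

The notebook's own `generateCmd()` joins without filtering, which still works because the shell ignores extra whitespace; filtering just keeps the logged command readable.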
# <img src='https://geart891.github.io/RLabClone/img/title_qbittorrent.png' height="45" alt="qBittorrent"/>
```
# ============================= FORM ============================= #
# @markdown #### ⬅️ Execute qBittorrent
USR_Api = "mnc" # @param {type:"string"}
USE_Ngrok = False # @param {type:"boolean"}
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
import time, requests, json # pylint: disable=import-error #nosec
from rlab_utils import (
displayUrl,
checkAvailable,
runSh,
findProcess,
tokens,
installQBittorrent,
accessSettingFile,
exx,
generateRandomStr,
installNgrok,
installAutoSSH,
prepareSession,
checkServer,
QB_Port,
)
def selectApi(api):
try:
return tokens[api]
except:
return "Invalid Token"
def startQBService():
prepareSession()
installQBittorrent()
if not findProcess("qbittorrent-nox"):
runSh(f"qbittorrent-nox -d --webui-port={QB_Port}")
time.sleep(1)
def startWebUiQB(name):
if name == "serveo":
installAutoSSH()
hostName = f"http://{RAND_QB_Name}.serveo.net"
cmd = f"{RAND_QB_Name}:80:localhost:{QB_Port}"
shellCmd = f"autossh -M 0 -fNT -o \
'StrictHostKeyChecking=no' -o \
'ServerAliveInterval 300' -o \
'ServerAliveCountMax 30' -R \
{cmd} serveo.net &"
runSh(shellCmd, shell=True) # nosec
data = {
"url": hostName,
"port": QB_Port,
"cmd": cmd,
"shellCmd": shellCmd,
"pid": findProcess("autossh", f"{cmd}", True),
}
elif name == "ngrok":
if selectApi(USR_Api) == "Invalid Token":
print(selectApi(USR_Api))
exx()
ngrokToken = selectApi(USR_Api)
installNgrok()
shellCmd = f"ngrok authtoken {ngrokToken} \
&& ngrok http -inspect=false {QB_Port} &"
runSh(shellCmd, shell=True) # nosec
time.sleep(7)
pid = findProcess("ngrok", str(QB_Port), True)
try:
host = json.loads(requests.get("http://localhost:4040/api/tunnels").text)[
"tunnels"
][0]["public_url"][8:]
except:
print("ngrok Token is already in use!")
exx()
data = {
"url": f"http://{host}",
"port": QB_Port,
"token": ngrokToken,
"pid": pid,
}
else:
pass
displayButtons(data)
accessSettingFile(f"{name}QBUrl.txt", data)
def resetService(name):
data = accessSettingFile(f"{name}QBUrl.txt")
while findProcess(data["pid"]):
runSh(f"kill {data['pid']}")
time.sleep(1)
startWebUiQB(name)
def killProcess(name):
if not findProcess(
name,
selectApi(USR_Api)
if name == "ngrok"
else f"{RAND_QB_Name}:80:localhost:{QB_Port}",
):
return
if checkAvailable(f"{name}QBUrl.txt", True):
pid = accessSettingFile(f"{name}QBUrl.txt")["pid"]
while findProcess(pid):
runSh(f"kill {pid}")
time.sleep(1)
else:
runSh(f"pkill -f {name}")
def displayButtons(data):
def resetNgrokService(a):
resetService("ngrok")
def startBackupRemoteService(a):
installAutoSSH()
!autossh -l {RAND_QB_Name} -M 0 -o \
'StrictHostKeyChecking=no' -o \
'ServerAliveInterval 300' -o \
'ServerAliveCountMax 30' -R \
80:localhost:{QB_Port} ssh.localhost.run
displayUrl(data, startBackupRemoteService, resetNgrokService)
def executeQBWebUi():
startQBService()
if not USE_Ngrok and checkServer("serveo.net"):
killProcess("ngrok")
if checkAvailable("serveoQBUrl.txt", True):
data = accessSettingFile("serveoQBUrl.txt")
displayButtons(data)
else:
startWebUiQB("serveo")
else:
killProcess("serveo")
if checkAvailable("ngrokQBUrl.txt", True):
data = accessSettingFile("ngrokQBUrl.txt")
displayButtons(data)
else:
startWebUiQB("ngrok")
RAND_QB_Name = f"{str(USR_Api)}{generateRandomStr()}"
executeQBWebUi()
# ============================= FORM ============================= #
# @markdown #### ⬅️ Recheck Download
SRC_PATH = "" # @param {type:"string"}
DIR_NAME = "" # @param {type:"string"}
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
from rlab_utils import (
checkAvailable,
runSh,
prepareSession,
PATH_RClone_Config,
)
def generateCmd(src, dst):
block=f"{'':=<117}"
title=f"""+{f'Now Copying... "{src}" > "{dst}"':^{len(block)-2}}+"""
print(f"{block}\n{title}\n{block}")
cmd = f'rclone copy "{src}" "/root/.qBittorrent_temp/{dst}" --config {PATH_RClone_Config}/rclone.conf --user-agent "Mozilla" --transfers 20 --checkers 20 --drive-server-side-across-configs -c --buffer-size 256M --drive-chunk-size 256M --drive-upload-cutoff 256M --drive-acknowledge-abuse --drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 --stats-one-line --stats=5s -v'
return cmd
def executeRecheck():
prepareSession()
if not checkAvailable(f"/root/.qBittorrent_temp/{DIR_NAME}"):
runSh(f"mkdir -p -m 666 /root/.qBittorrent_temp/{DIR_NAME}")
runSh(generateCmd(SRC_PATH,DIR_NAME), output=True)
executeRecheck()
```
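The qBittorrent cell above discovers its public URL by querying ngrok's local inspection API (`http://localhost:4040/api/tunnels`) and slicing the `https://` scheme off the first tunnel's `public_url`. A sketch of that parsing step against a canned payload, so no live tunnel is needed:

```
import json

# Sample of the JSON shape the local ngrok API returns; the cell above
# fetches the real thing with requests.get("http://localhost:4040/api/tunnels").
sample = '{"tunnels": [{"public_url": "https://abc123.ngrok.io"}]}'
host = json.loads(sample)["tunnels"][0]["public_url"][8:]  # drop "https://"
print(host)  # abc123.ngrok.io
```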
# FileBot
```
# ============================= FORM ============================= #
# @markdown #### ⬅️ Execute FileBot <a href="https://www.filebot.net/cli.html" target="_blank">CLI</a> <a href="https://www.filebot.net/naming.html" target="_blank">FORMAT</a>
MODE = "MOVIE" # @param ["SERIES", "MOVIE"]
PATH = "" # @param {type:"string"}
TEST_RUN = True # @param{type: "boolean"}
RESTRUCTURE_DIR = False # @param{type: "boolean"}
QUERRY_HELPER = "" # @param {type:"string"}
RELEASE_GROUP = "" # @param {type:"string"}
RELEASE_NAME = "" #
FORCE_FORMAT = "" #
OLD_FILEBOT = False
MEDIA_SOURCE = "" # @param ["", "BluRay", "WEB-DL", "WEBCap"]
RIP_SITE = "" # @param ["", "OPEN.MATTE", "NF", "AMZN", "DSNP"]
SEARCH_DATABASE = "TheMovieDB" # @param ["TheMovieDB", "OMDb", "TheMovieDB::TV", "TheTVDB", "AniDB", "AcoustID", "ID3"]
EXTRA_ARGS = "-non-strict -r" # @param {type:"string"}
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
from rlab_utils import runSh, checkAvailable, installFilebot, prepareSession
def makeSortableName(name):
tmp = name.strip().title().replace(" ", ".")
illegal = ['NUL', '\\', '/', ':', '*', '"', '<', '>', '|']
quotes = ['`', '\xB4', '\u2018', '\u2019', '\u02BB']
for i in illegal:
tmp = tmp.replace(i, '')
for i in quotes:
tmp = tmp.replace(i, "'")
tmpspl = tmp.split(".", 1)
if tmpspl[0].lower() in ["a", "an", "the"]:
return f"{tmpspl[1]},.{tmpspl[0]}"
else:
return tmp
def executeFilebot():
prepareSession()
installFilebot(OLD_FILEBOT)
if not checkAvailable("/content/drive/Shared drives"):
from google.colab import drive # pylint: disable=import-error
drive.mount("/content/drive")
# HELPERS =============================================
titleHelper = ".removeIllegalCharacters().replaceTrailingBrackets().space('.').upperInitial().replaceAll(/[`\xB4\u2018\u2019\u02BB]/, \"'\")"
# PROCESS ARGUMENTS ===================================
# searchDB
if MODE == "MOVIE" and SEARCH_DATABASE not in ["TheMovieDB", "OMDb"]:
searchDB = "TheMovieDB"
elif MODE == "SERIES" and SEARCH_DATABASE not in [
"TheMovieDB::TV", "TheTVDB", "AniDB"
]:
searchDB = "TheMovieDB::TV"
else:
searchDB = SEARCH_DATABASE
releaseName = RELEASE_NAME if RELEASE_NAME else "{n" + titleHelper + "}"
releaseYear = "{'.'+y}"
videoTags = "{any{'.'+tags[0].upper().space('.').replace('I','i')}{''}}" + (
f".{RIP_SITE}" if RIP_SITE else ""
)
mediaSource = "{'.'+hd}" + (
f".{MEDIA_SOURCE}" if MEDIA_SOURCE else
"{'.'+(self.source==null||self.source=='BD'||self.source=='Blu-ray'?'BluRay':source)}{fn.lower().match('.remux').upper()}"
)
videoFrame = ".{vf}.{bitdepth+'bit'}{any{'.'+hdr}{''}}"
videoCODEC = ".{(vc=='x265'||vc=='HEVC')?'x265.HEVC':vc=='Microsoft'?'VC-1':vc}"
audioCODEC = ".{ac=='EAC3'?'DDP':ac}{ac=='DTS'?aco=='MA'?'-HD.MA':'-X':ac=='AAC'?aco=='LC'?'-LC':'':''}"
audioChannels = ".{af=='8ch'?'7.1':af=='7ch'?'6.1':af=='6ch'?'5.1':af=='5ch'?'4.1':af=='4ch'?'3.1':af=='3ch'?'2.1':af=='2ch'?'Stereo':af=='1ch'?'Mono':af}{'.'+aco.match('Atmos')}"
releaseGroup = RELEASE_GROUP if RELEASE_GROUP else "{any{group.lower()=='remux-framestor'?'FraMeSToR':group}{'MONARCH'}}"
if "Anime" in PATH:
releaseGroup = f"[{releaseGroup}]"
# animeAltName
if searchDB == "AniDB":
animeAltName = "{n==primaryTitle?'':'.('+primaryTitle" + titleHelper + "+')'}"
elif "Anime" in PATH and searchDB in ["TheMovieDB::TV", "TheTVDB"]:
nameAls = f"alias[{0 if searchDB == 'TheTVDB' else 2}]"
animeAltName = "{n==" + nameAls + "?'':" + f"{nameAls}.matches('\\\\w+')?'.('+{nameAls}{titleHelper}+')':'.('+primaryTitle{titleHelper}+')'" + "}"
else:
animeAltName = ""
# animeTitle
animeTitle = f"{{'.'+t{titleHelper}}}" if "Anime" in PATH else ""
# =====================================================
mediaNfo = f"{videoTags}{mediaSource}{videoFrame}{videoCODEC}{audioCODEC}{audioChannels}-{releaseGroup}"
fullTitle = f"{releaseName}{animeAltName}{releaseYear}{animeTitle}{mediaNfo}"
if RESTRUCTURE_DIR:
sortableTitle = makeSortableName(
RELEASE_NAME
) if RELEASE_NAME else f"{{n.sortName('$2, $1'){titleHelper}}}{animeAltName}{releaseYear}{animeTitle}{mediaNfo}"
defFormat = f"{PATH.rsplit('/', 1)[0]}/{sortableTitle}/"
if MODE == "SERIES":
seasonTitle = f"{{self.s==null?self.s00e00!=null?'Special':'':'Season.'+s.pad(2)}}{{'.('+sy+')'}}{mediaNfo}/"
episodeNo = '{self.s==null?{".SP.E${any{e.pad(2)}{special.pad(2)}}"}:"."+s00e00}{".AE"+absolute}'
mediaTitle = f"{{'.'+t{titleHelper}}}{{'.'+d.format('dd.MM.yyyy')}}"
defFormat += f"{seasonTitle}{releaseName}{episodeNo}{mediaTitle}{{(hd=='SD'||vf=='720p')?'.'+hd+'.'+vf:''}}"
else:
defFormat += fullTitle
else:
defFormat = fullTitle
if FORCE_FORMAT:
nameFormat = FORCE_FORMAT
else:
nameFormat = defFormat.replace('\\\\', '\\\\\\\\').replace('"', '\\"')
cmd = f"""filebot -rename "{PATH}" --db '{searchDB}' --q "{QUERRY_HELPER}" --format "{nameFormat}" {'--action test' if TEST_RUN else ''} {EXTRA_ARGS}"""
# LOGGING =============================================
printCheck = f"DB [{searchDB}]"
printCheck += f', Query: [{QUERRY_HELPER}]' if QUERRY_HELPER else ''
printCheck += f', ARGS: [{EXTRA_ARGS}]' if EXTRA_ARGS else ''
printCheck += f', Force Source: [{MEDIA_SOURCE}]' if MEDIA_SOURCE else ''
printCheck += f', Force Group: [{RELEASE_GROUP}]' if RELEASE_GROUP else ''
printCheck += ', ✓ Restructure' if RESTRUCTURE_DIR else ''
printCheck += ', v[4.8.5r6120].' if OLD_FILEBOT else ', v[Latest].'
print(f"{'# LOGGING INFOS ':=<117}\n ")
print(f"+ {'[TEST]' if TEST_RUN else '[EXEC]'} Renaming [{MODE}]")
print(f"+ {printCheck}\n")
print(f"{'# LOGGING COMMANDS ':=<117}\n ")
print("+ Title: " + fullTitle)
print("+ Format: " + nameFormat.replace("\\",""))
print("+ CMD: " + cmd)
print(f"\n{'':=<117}\n")
# =====================================================
runSh(cmd, output=True)
if RESTRUCTURE_DIR:
runSh(f"find \"{PATH}\" -type d -empty -delete")
runSh(f"rmdir -p \"{PATH}\"")
executeFilebot()
```
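`makeSortableName()` in the FileBot cell rotates a leading English article to the end of the title so directories file under their first meaningful word. A simplified standalone version of the same idea (not the notebook's exact function, which also strips illegal filename characters):

```
def sortable(name):
    # Title-case, dot-separate, then rotate a leading article
    # ("A", "An", "The") behind the rest of the title.
    tmp = name.strip().title().replace(" ", ".")
    head, _, rest = tmp.partition(".")
    return f"{rest},.{head}" if head.lower() in ("a", "an", "the") else tmp

print(sortable("The Matrix"))    # Matrix,.The
print(sortable("Blade Runner"))  # Blade.Runner
```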
# <img src='https://geart891.github.io/RLabClone/img/title_jdownloader.png' height="45" alt="JDownloader"/>
```
# ============================= FORM ============================= #
# @markdown #### ⬅️ Execute JDownloader
NEW_Account = False # @param {type:"boolean"}
# ================================================================ #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
from rlab_utils import handleJDLogin
handleJDLogin(NEW_Account)
```
# Plowshare
```
# ============================= FORM ============================= #
# @markdown #### ⬅️ Execute Plowshare
MODE = "ZIPPY" # @param ["ZIPPY"]
# ================================================================ #
SOURCE_HOST = MODE.lower()
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
from rlab_utils import runSh, checkAvailable, prepareSession, accessSettingFile
def installPlowshare():
if checkAvailable("plowshare.txt", True):
return
runSh("wget -qq -c -nc https://github.com/mcrapet/plowshare/archive/v2.1.7.zip")
runSh("unzip -qq -n v2.1.7.zip")
runSh('make -C "/content/plowshare-2.1.7" install')
runSh("plowmod --install")
runSh("apt-get install nodejs")
# Clean download files
runSh("rm -rf /content/plowshare-2.1.7")
runSh("rm -f /content/v2.1.7.zip")
data = {"plowshare": "installed"}
accessSettingFile("plowshare.txt", data)
def prepareDownload():
installPlowshare()
if not checkAvailable(f"/content/{SOURCE_HOST}/wk/download.txt"):
runSh(f"mkdir -p -m 666 /content/{SOURCE_HOST}/wk")
runSh(f"mkdir -p -m 666 /content/{SOURCE_HOST}/extract")
runSh(f"touch /content/{SOURCE_HOST}/wk/download.txt")
else:
pass
def executePlowshare():
prepareSession()
prepareDownload()
runSh(
f"plowdown /content/{SOURCE_HOST}/wk/download.txt -m -o /content/{SOURCE_HOST}/wk",
output=True,
)
executePlowshare()
```
# <img src='https://geart891.github.io/RLabClone/img/title_youtube-dl.png' height="45" alt="Youtube-DL"/>
```
# ============================= FORM ============================= #
# @markdown #### ⬅️ Execute YouTube-DL
# @markdown 📝 Note: if you want to change the archive file, just run this cell again.
Archive = False #@param {type:"boolean"}
# ================================================================ #
import os, uuid, urllib.parse
import ipywidgets as widgets
from glob import glob
from urllib.parse import urlparse, parse_qs
from IPython.display import HTML, clear_output, YouTubeVideo
from IPython.utils.io import ask_yes_no
from google.colab import output, files
Links = widgets.Textarea(placeholder='''Video/Playlist Link
(one link per line)''')
VideoQ = widgets.Dropdown(options=["Best Quality (VP9 upto 4K)", "Best Compatibility (H.264 upto 1080p)"])
AudioQ = widgets.Dropdown(options=["Best Quality (Opus)", "Best Compatibility (M4A)"])
Subtitle = widgets.ToggleButton(value=True, description="Subtitle", button_style="info", tooltip="Subtitle")
SavePathYT = widgets.Dropdown(options=["/content", "/content/Downloads"])
AudioOnly = widgets.ToggleButton(value=False, description="Audio Only", button_style="", tooltip="Audio Only")
Resolution = widgets.Select(options=["Highest", "4K", "1440p", "1080p", "720p", "480p", "360p", "240p", "144p"], value="Highest")
Extension = widgets.Select(options=["mkv", "webm"], value="mkv")
UsernameYT = widgets.Text(placeholder="Username")
PasswordYT = widgets.Text(placeholder="Password")
SecAuth = widgets.Text(placeholder="2nd Factor Authentication")
VideoPW = widgets.Text(placeholder="Video Password")
GEOBypass = widgets.Dropdown(options=["Disable", "Hide", "AD", "AE", "AF", "AG", "AI", "AL", "AM", "AO", "AQ", "AR", "AS", "AT", "AU", "AW", "AX", "AZ", "BA", "BB", "BD", "BE", "BF", "BG", "BH", "BI", "BJ", "BL", "BM", "BN", "BO", "BQ", "BR", "BS", "BT", "BV", "BW", "BY", "BZ", "CA", "CC", "CD", "CF", "CG", "CH", "CI", "CK", "CL", "CM", "CN", "CO", "CR", "CU", "CV", "CW", "CX", "CY", "CZ", "DE", "DJ", "DK", "DM", "DO", "DZ", "EC", "EE", "EG", "EH", "ER", "ES", "ET", "FI", "FJ", "FK", "FM", "FO", "FR", "GA", "GB", "GD", "GE", "GF", "GG", "GH", "GI", "GL", "GM", "GN", "GP", "GQ", "GR", "GS", "GT", "GU", "GW", "GY", "HK", "HM", "HN", "HR", "HT", "HU", "ID", "IE", "IL", "IM", "IN", "IO", "IQ", "IR", "IS", "IT", "JE", "JM", "JO", "JP", "KE", "KG", "KH", "KI", "KM", "KN", "KP", "KR", "KW", "KY", "KZ", "LA", "LB", "LC", "LI", "LK", "LR", "LS", "LT", "LU", "LV", "LY", "MA", "MC", "MD", "ME", "MF", "MG", "MH", "MK", "ML", "MM", "MN", "MO", "MP", "MQ", "MR", "MS", "MT", "MU", "MV", "MW", "MX", "MY", "MZ", "NA", "NC", "NE", "NF", "NG", "NI", "NL", "NO", "NP", "NR", "NU", "NZ", "OM", "PA", "PE", "PF", "PG", "PH", "PK", "PL", "PM", "PN", "PR", "PS", "PT", "PW", "PY", "QA", "RE", "RO", "RS", "RU", "RW", "SA", "SB", "SC", "SD", "SE", "SG", "SH", "SI", "SJ", "SK", "SL", "SM", "SN", "SO", "SR", "SS", "ST", "SV", "SX", "SY", "SZ", "TC", "TD", "TF", "TG", "TH", "TJ", "TK", "TL", "TM", "TN", "TO", "TR", "TT", "TV", "TW", "TZ", "UA", "UG", "UM", "US", "UY", "UZ", "VA", "VC", "VE", "VG", "VI", "VN", "VU", "WF", "WS", "YE", "YT", "ZA", "ZM", "ZW"])
ProxyYT = widgets.Text(placeholder="Proxy URL")
MinSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description="Min:")
MaxSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description="Max:")
ExtraArg = widgets.Text(placeholder="Extra Arguments")
class MakeButton(object):
def __init__(self, title, callback, style):
self._title = title
self._callback = callback
self._style = style
def _repr_html_(self):
callback_id = 'button-' + str(uuid.uuid4())
output.register_callback(callback_id, self._callback)
if self._style != "":
style_html = "p-Widget jupyter-widgets jupyter-button widget-button mod-" + self._style
else:
style_html = "p-Widget jupyter-widgets jupyter-button widget-button"
template = """<button class="{style_html}" id="{callback_id}">{title}</button>
<script>
document.querySelector("#{callback_id}").onclick = (e) => {{
google.colab.kernel.invokeFunction('{callback_id}', [], {{}})
e.preventDefault();
}};
</script>"""
html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)
return html
def MakeLabel(description, button_style):
return widgets.Button(description=description, disabled=True, button_style=button_style)
def upload_archive():
if ask_yes_no("Do you already have an archive file? (y/n)", default="", interrupt=""):
try:
display(HTML("<h2 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">Please upload an archive from your computer.</h2><br>"))
UploadConfig = files.upload().keys()
clear_output(wait=True)
if len(UploadConfig) == 0:
return display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">File upload was cancelled.</h2><br></center>"))
elif len(UploadConfig) == 1:
for fn in UploadConfig:
if os.path.isfile("/content/" + fn):
get_ipython().system_raw("mv -f " + "\"" + fn + "\" /root/.youtube-dl.txt && chmod 666 /root/.youtube-dl.txt")
AudioOnly.observe(AudioOnlyChange)
Subtitle.observe(SubtitleChange)
AudioQ.observe(AudioQChange)
ShowYT()
else:
return display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">File upload failed.</h2><br></center>"))
else:
for fn in UploadConfig:
get_ipython().system_raw("rm -f " + "\"" + fn + "\"")
return display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">Please upload only one file at a time.</h2><br></center>"))
except:
clear_output(wait=True)
return display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">An error occurred during the file upload.</h2><br></center>"))
else:
get_ipython().system_raw("touch '/root/.youtube-dl.txt'")
AudioOnly.observe(AudioOnlyChange)
Subtitle.observe(SubtitleChange)
AudioQ.observe(AudioQChange)
ShowYT()
def RefreshPathYT():
if os.path.exists("/content/drive/"):
if os.path.exists("/content/drive/Shared drives/"):
SavePathYT.options = ["/content", "/content/Downloads", "/content/drive/My Drive"] + glob("/content/drive/My Drive/*/") + glob("/content/drive/Shared drives/*/")
else:
SavePathYT.options = ["/content", "/content/Downloads", "/content/drive/My Drive"] + glob("/content/drive/My Drive/*/")
else:
SavePathYT.options = ["/content", "/content/Downloads"]
def AudioOnlyChange(change):
if change["type"] == "change" and change["new"]:
VideoQ.disabled = True
Subtitle.disabled = True
if Subtitle.value:
Subtitle.button_style = "info"
else:
Subtitle.button_style = ""
Resolution.disabled = True
Extension.options = ["best", "aac", "flac", "mp3", "m4a", "opus", "vorbis", "wav"]
Extension.value = "best"
AudioOnly.button_style = "info"
elif change["type"] == "change" and change["new"] == False:
VideoQ.disabled = False
Subtitle.disabled = False
if Subtitle.value:
Subtitle.button_style = "info"
else:
Subtitle.button_style = ""
Resolution.disabled = False
if AudioQ.value == "Best Quality (Opus)":
Extension.options = ["mkv", "webm"]
else:
Extension.options = ["mkv", "mp4", "webm"]
Extension.value = "mkv"
AudioOnly.button_style = ""
def SubtitleChange(change):
if change["type"] == "change" and change["new"]:
Subtitle.button_style = "info"
elif change["type"] == "change" and change["new"] == False:
Subtitle.button_style = ""
def AudioQChange(change):
if change["type"] == "change" and change["new"] == "Best Quality (Opus)":
Extension.options = ["mkv", "webm"]
Extension.value = "mkv"
elif change["type"] == "change" and change["new"] == "Best Compatibility (M4A)":
Extension.options = ["mkv", "mp4", "webm"]
Extension.value = "mkv"
def ShowYT():
clear_output(wait=True)
RefreshPathYT()
display(widgets.HBox([widgets.VBox([widgets.HTML("<b style=\"color:#888888;\">Link:</b>"), Links,
widgets.HTML("<b style=\"color:#888888;\">For website that require an account:</b>"), UsernameYT, PasswordYT, SecAuth, VideoPW,
widgets.HTML("<b><a href=\"https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Officially_assigned_code_elements\" target=\"_blank\">GEO Bypass Country:</a></b>"), GEOBypass,
widgets.HTML("<b style=\"color:#888888;\">Proxy:</b>"), ProxyYT,
widgets.HTML("<b style=\"color:#888888;\">Sleep Interval (second):</b>"), MinSleep, MaxSleep]),
widgets.VBox([widgets.HTML("<b style=\"color:#888888;\">Video Quality:</b>"), VideoQ, widgets.HTML("<b style=\"color:#888888;\">Resolution:</b>"), Resolution,
widgets.HTML("<b style=\"color:#888888;\">Audio Quality:</b>"), AudioQ, widgets.HTML("<b style=\"color:#888888;\">Extension:</b>"), Extension,
widgets.HTML("<b style=\"color:#888888;\">Extra Options:</b>"), widgets.HBox([Subtitle, AudioOnly]),
widgets.HTML("<b style=\"color:#888888;\">Extra Arguments:</b>"), ExtraArg])]), HTML("<h4 style=\"color:#888888;\">Save Location:</h4>"),
SavePathYT, MakeButton("Refresh", RefreshPathYT, ""))
if not os.path.exists("/content/drive/"):
display(HTML("*If you want to save in Google Drive please run the cell below."))
display(HTML("<br>"), MakeButton("Download", DownloadYT, "info"))
def DownloadYT():
if Links.value.strip():
Count = 0
Total = str(len(Links.value.splitlines()))
# Account Check
if UsernameYT.value.strip() and PasswordYT.value.strip():
accountC = "--username \"" + UsernameYT.value + "\" --password \"" + PasswordYT.value + "\""
else:
accountC = ""
if SecAuth.value.strip():
secauthC = "-2 " + SecAuth.value
else:
secauthC = ""
if VideoPW.value.strip():
videopwC = "--video-password " + VideoPW.value
else:
videopwC = ""
# Proxy
if ProxyYT.value.strip():
proxyytC = "--proxy " + ProxyYT.value
else:
proxyytC = ""
# GEO Bypass
if GEOBypass.value == "Disable":
geobypass = ""
elif GEOBypass.value == "Hide":
geobypass = "--geo-bypass"
else:
geobypass = "--geo-bypass-country " + GEOBypass.value
# Video Quality
if VideoQ.value == "Best Quality (VP9 upto 4K)":
videoqC = "webm"
else:
videoqC = "mp4"
# Audio Quality
if AudioQ.value == "Best Quality (Opus)":
audioqC = "webm"
else:
audioqC = "m4a"
# Audio Only Check
if AudioOnly.value:
subtitleC = ""
thumbnailC = ""
extC = "-x --audio-quality 0 --audio-format " + Extension.value
codecC = "bestaudio[ext=" + audioqC + "]/bestaudio/best"
else:
if Subtitle.value:
subtitleC = "--all-subs --convert-subs srt --embed-subs"
else:
subtitleC = ""
if Extension.value == "mp4":
thumbnailC = "--embed-thumbnail"
else:
thumbnailC = ""
extC = "--merge-output-format " + Extension.value
if Resolution.value == "Highest":
codecC = "bestvideo[ext=" + videoqC + "]+bestaudio[ext=" + audioqC + "]/bestvideo+bestaudio/best"
else:
codecC = "bestvideo[ext=" + videoqC + ",height<=" + Resolution.value.replace("4K", "2160").replace("p", "") + "]+bestaudio[ext=" + audioqC + "]/bestvideo[height<=" + Resolution.value.replace("4K", "2160").replace("p", "") + "]+bestaudio/bestvideo+bestaudio/best"
# Archive
if os.path.isfile("/root/.youtube-dl.txt"):
archiveC = "--download-archive \"/root/.youtube-dl.txt\""
else:
archiveC = ""
# Sleep Interval
if MinSleep.value > 0 and MaxSleep.value > 0:
      minsleepC = "--min-sleep-interval " + str(MinSleep.value)
      maxsleepC = "--max-sleep-interval " + str(MaxSleep.value)
else:
minsleepC = ""
maxsleepC = ""
# Extra Arguments
extraargC = ExtraArg.value
for Link in Links.value.splitlines():
clear_output(wait=True)
Count += 1
display(HTML("<h3 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">Processing link " + str(Count) + " out of " + Total + "</h3>"))
if "youtube.com" in Link or "youtu.be" in Link:
display(HTML("<h3 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">Currently downloading...</h3><br>"), YouTubeVideo(Link, width=640, height=360), HTML("<br>"))
else:
display(HTML("<h3 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">Currently downloading <a href=\"" + Link + "\">" + Link + "</a></h3><br>"))
if ("youtube.com" in Link or "youtu.be" in Link) and "list=" in Link:
!youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f "$codecC" -o "/root/.YouTube-DL/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s" "$Link"
else:
!youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f "$codecC" -o "/root/.YouTube-DL/%(title)s.%(ext)s" "$Link"
if not os.path.exists(SavePathYT.value):
get_ipython().system_raw("mkdir -p -m 666 " + SavePathYT.value)
get_ipython().system_raw("mv /root/.YouTube-DL/* '" + SavePathYT.value + "/'")
# Archive Download
if os.path.isfile("/root/.youtube-dl.txt"):
files.download("/root/.youtube-dl.txt")
ShowYT()
if not os.path.isfile("/usr/local/bin/youtube-dl"):
get_ipython().system_raw("rm -rf /content/sample_data/ && mkdir -p -m 666 /root/.YouTube-DL/ && apt-get install atomicparsley && curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl && chmod a+rx /usr/local/bin/youtube-dl")
if Archive:
upload_archive()
else:
AudioOnly.observe(AudioOnlyChange)
Subtitle.observe(SubtitleChange)
AudioQ.observe(AudioQChange)
ShowYT()
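The youtube-dl command above is assembled by plain string concatenation of widget values, so a username or password containing spaces or quotes would break the command line. A minimal sketch of how `shlex.quote` can harden that kind of concatenation (`--username`/`--password` are real youtube-dl flags; the `build_account_args` helper name is my own, not part of this notebook):

```python
import shlex

def build_account_args(username, password):
    # Quote each user-supplied value so spaces and quotes survive the shell.
    if username.strip() and password.strip():
        return ("--username " + shlex.quote(username) +
                " --password " + shlex.quote(password))
    return ""

print(build_account_args("alice", "p4ss word"))  # --username alice --password 'p4ss word'
```

The same pattern applies to every other concatenated option in `DownloadYT` (proxy, video password, extra arguments).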
```
# <img src='https://geart891.github.io/RLabClone/img/title_utility.png' height="45" alt="Utility"/>
```
# ============================= FORM ============================= #
#@markdown <h3>⬅️ Install netdata (Real-time Server Monitoring)</h3>
#@markdown <br><center><img src='https://geart891.github.io/RLabClone/img/title_netdata.png' height="60" alt="netdata"/></center>
# ================================================================ #
import os, psutil, IPython, uuid, time
import ipywidgets as widgets
from IPython.display import HTML, clear_output
from google.colab import output
class MakeButton(object):
def __init__(self, title, callback):
self._title = title
self._callback = callback
def _repr_html_(self):
callback_id = 'button-' + str(uuid.uuid4())
output.register_callback(callback_id, self._callback)
template = """<button class="p-Widget jupyter-widgets jupyter-button widget-button mod-info" id="{callback_id}">{title}</button>
<script>
document.querySelector("#{callback_id}").onclick = (e) => {{
google.colab.kernel.invokeFunction('{callback_id}', [], {{}})
e.preventDefault();
}};
</script>"""
html = template.format(title=self._title, callback_id=callback_id)
return html
def MakeLabel(description, button_style):
return widgets.Button(description=description, disabled=True, button_style=button_style)
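`MakeButton` works by registering a Python callback with `google.colab.output` under a unique id, then emitting an HTML button whose `onclick` invokes that callback through `google.colab.kernel.invokeFunction`. Outside Colab only the templating half can be exercised; a sketch of that part in isolation (the template below is a simplified stand-in for the one above, and `render_button` is a hypothetical helper):

```python
import uuid

BUTTON_TEMPLATE = """<button id="{callback_id}">{title}</button>
<script>
document.querySelector("#{callback_id}").onclick = (e) => {{
  google.colab.kernel.invokeFunction('{callback_id}', [], {{}});
  e.preventDefault();
}};
</script>"""

def render_button(title):
    # Each button gets a unique DOM id that doubles as the callback name,
    # so several buttons can coexist in one output cell without clashing.
    callback_id = "button-" + str(uuid.uuid4())
    return callback_id, BUTTON_TEMPLATE.format(callback_id=callback_id, title=title)

cid, html = render_button("Refresh")
```

In Colab the real class additionally calls `output.register_callback(callback_id, self._callback)` so the browser-side `invokeFunction` reaches the Python function.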
def RandomGenerator():
return time.strftime("%S") + str(time.time()).split(".")[-1]
def CheckProcess(process, command):
for pid in psutil.pids():
try:
p = psutil.Process(pid)
if process in p.name():
for arg in p.cmdline():
if command in str(arg):
return True
else:
pass
else:
pass
except:
continue
def AutoSSH(name,port):
get_ipython().system_raw("autossh -M 0 -fNT -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R " + name + ":80:localhost:" + port + " serveo.net &")
def Start_AutoSSH_ND():
if CheckProcess("netdata", "") != True:
get_ipython().system_raw("/usr/sbin/netdata")
if CheckProcess("autossh", Random_URL_ND) != True:
AutoSSH(Random_URL_ND, Port_ND)
def Start_Localhost_ND():
try:
clear_output(wait=True)
!autossh -l $Random_URL_ND -M 0 -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R 80:localhost:$Port_ND ssh.localhost.run
except:
    Control_Panel_ND()
  Control_Panel_ND()
def Control_Panel_ND():
clear_output(wait=True)
display(MakeLabel("✔ Successfully", "success"), MakeButton("Recheck", Start_AutoSSH_ND), MakeButton("Backup Website", Start_Localhost_ND), HTML("<h4 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">" \
"<a style=\"font-family:Trebuchet MS;color:#356ebf;\" href=\"https://" + Random_URL_ND + ".serveo.net/#theme=slate;update_always=true\" target=\"_blank\">Website</a></h4>"))
try:
try:
Random_URL_ND
except NameError:
Random_URL_ND = "nd" + RandomGenerator()
Port_ND = "19999"
display(MakeLabel("Installing in Progress", "warning"))
if os.path.isfile("/usr/bin/autossh") == False:
get_ipython().system_raw("apt update -qq -y && apt install autossh -qq -y")
if os.path.isfile("/usr/sbin/netdata") == False:
get_ipython().system_raw("bash <(curl -Ss https://my-netdata.io/kickstart.sh) --dont-wait --dont-start-it")
Start_AutoSSH_ND()
Control_Panel_ND()
except:
clear_output(wait=True)
display(MakeLabel("✘ Unsuccessfully", "danger"))
# ============================= FORM ============================= #
#@markdown <h3>⬅️ Install Cloud Commander (file manager)</h3>
#@markdown <br><center><img src='https://geart891.github.io/RLabClone/img/title_cloud_commander.png' height="60" alt="netdata"/></center>
# ================================================================ #
import os, psutil, IPython, uuid, time
import ipywidgets as widgets
from IPython.display import HTML, clear_output
from google.colab import output
class MakeButton(object):
def __init__(self, title, callback):
self._title = title
self._callback = callback
def _repr_html_(self):
callback_id = 'button-' + str(uuid.uuid4())
output.register_callback(callback_id, self._callback)
template = """<button class="p-Widget jupyter-widgets jupyter-button widget-button mod-info" id="{callback_id}">{title}</button>
<script>
document.querySelector("#{callback_id}").onclick = (e) => {{
google.colab.kernel.invokeFunction('{callback_id}', [], {{}})
e.preventDefault();
}};
</script>"""
html = template.format(title=self._title, callback_id=callback_id)
return html
def MakeLabel(description, button_style):
return widgets.Button(description=description, disabled=True, button_style=button_style)
def RandomGenerator():
return time.strftime("%S") + str(time.time()).split(".")[-1]
def CheckProcess(process, command):
for pid in psutil.pids():
try:
p = psutil.Process(pid)
if process in p.name():
for arg in p.cmdline():
if command in str(arg):
return True
else:
pass
else:
pass
except:
continue
def AutoSSH(name,port):
get_ipython().system_raw("autossh -M 0 -fNT -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R " + name + ":80:localhost:" + port + " serveo.net &")
def Start_AutoSSH_CC():
if CheckProcess("cloudcmd", "") != True:
get_ipython().system_raw("cloudcmd --online --no-auth --show-config --show-file-name --editor 'deepword' --packer 'tar' --port $Port_CC --progress --no-confirm-copy --confirm-move --name 'RcloneLab File Manager' --keys-panel --no-contact --console --sync-console-path --no-terminal --no-vim --columns 'name-size-date' --no-log &")
if CheckProcess("autossh", Random_URL_CC) != True:
AutoSSH(Random_URL_CC, Port_CC)
def Start_Localhost_CC():
try:
clear_output(wait=True)
!pkill autossh
!autossh -l $Random_URL_CC -M 0 -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R 80:localhost:$Port_CC ssh.localhost.run
except:
Control_Panel_CC()
Control_Panel_CC()
def Control_Panel_CC():
clear_output(wait=True)
display(MakeLabel("✔ Successfully", "success"), MakeButton("Recheck", Start_AutoSSH_CC), MakeButton("Backup Website", Start_Localhost_CC), HTML("<h2 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">File Manager</h2><h4 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">" \
  "<a style=\"font-family:Trebuchet MS;color:#356ebf;\" href=\"https://" + Random_URL_CC + ".serveo.net/fs/content/\" target=\"_blank\">Website</a></h4>"))
try:
try:
Random_URL_CC
except NameError:
Random_URL_CC = "cc" + RandomGenerator()
Port_CC = "7007"
display(MakeLabel('Installing in Progress', 'warning'))
if os.path.isfile("/tools/node/bin/cloudcmd") == False:
get_ipython().system_raw("rm -rf /content/sample_data/ && npm i -g npm && npm i cloudcmd -g")
if os.path.isfile("/usr/bin/autossh") == False:
get_ipython().system_raw("apt update -qq -y && apt install autossh -qq -y")
Start_AutoSSH_CC()
Control_Panel_CC()
except:
clear_output(wait=True)
display(MakeLabel("✘ Unsuccessfully", "danger"))
# ============================= FORM ============================= #
#@markdown <h3>⬅️ Check VM's Status</h3>
Check_IP = True #@param {type:"boolean"}
Loop_Check = False #@param {type:"boolean"}
Loop_Interval = 15 #@param {type:"slider", min:1, max:15, step:1}
# ================================================================ #
import time, requests
from IPython.display import clear_output
Loop = True
try:
while Loop == True:
clear_output(wait=True)
if Check_IP: print("\nYour Public IP: " + requests.get('http://ip.42.pl/raw').text)
print("====")
!du -sh /content /root/.qBittorrent_temp/
print("====")
!top -bcn1 -w512
if Loop_Check == False:
Loop = False
else:
time.sleep(Loop_Interval)
except:
clear_output()
# ============================= FORM ============================= #
#@markdown <h3>⬅️ Get VM's Specification</h3>
Output_Format = "TEXT" #@param ["TEXT", "HTML", "XML", "JSON"]
Short_Output = True #@param {type:"boolean"}
# ================================================================ #
import os
from google.colab import files
from IPython.display import HTML, clear_output
try:
Output_Format_Ext
except NameError:
get_ipython().system_raw("apt install lshw -qq -y")
if Short_Output:
Output_Format = "txt"
Output_Format2 = "-short"
Output_Format_Ext = "txt"
elif Output_Format == "TEXT":
Output_Format = "txt"
Output_Format2 = ""
Output_Format_Ext = "txt"
else:
Output_Format = Output_Format.lower()
Output_Format2 = "-"+Output_Format.lower()
Output_Format_Ext = Output_Format.lower()
get_ipython().system_raw("lshw " + Output_Format2 + " > Specification." + Output_Format)
files.download("/content/Specification." + Output_Format_Ext)
  get_ipython().system_raw("rm -f /content/Specification." + Output_Format_Ext)
display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#4f8bd6;\">Sending log to your browser...</h2><br></center>"))
# ============================= FORM ============================= #
# @markdown <h3>⬅️ MKV Toolnix <a href="https://mkvtoolnix.download/docs.html" target="_blank">See Docs</a></h3>
MODE = "MEDIAINFO" # @param ['MEDIAINFO', 'MKVPROPEDIT', 'MKVMERGE', 'MKVINFO']
FILE_PATH = "" # @param {type:"string"}
OPTION = "" # @param {type:"string"}
OUTPUT_FILE_PATH = "" # @param {type:"string"}
# ============================= FORM ============================= #
from os import path as _p
if not _p.exists("/root/.ipython/rlab_utils.py"):
from shlex import split as _spl
from subprocess import run # nosec
shellCmd = "wget -qq https://geart891.github.io/RLabClone/res/rlab_utils.py \
-O /root/.ipython/rlab_utils.py"
run(_spl(shellCmd)) # nosec
def checkAvailable(path_="", userPath=False):
from os import path as _p
if path_ == "":
return False
else:
return (
_p.exists(path_)
if not userPath
else _p.exists(f"/usr/local/sessionSettings/{path_}")
)
def runSh(args, *, output=False, shell=False):
import subprocess, shlex # nosec
if not shell:
if output:
proc = subprocess.Popen( # nosec
shlex.split(args), stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
while True:
output = proc.stdout.readline()
if output == b"" and proc.poll() is not None:
return
if output:
print(output.decode("utf-8").strip())
return subprocess.run(shlex.split(args)).returncode # nosec
else:
if output:
return (
subprocess.run(
args,
shell=True, # nosec
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
.stdout.decode("utf-8")
.strip()
)
return subprocess.run(args, shell=True).returncode # nosec
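`runSh(..., output=True)` streams the child process's stdout line by line via `Popen`, merging stderr into stdout so everything appears in order. A standalone check of that streaming pattern, using `sys.executable` as a portable command (everything here is stdlib; the exact command is only for illustration):

```python
import shlex
import subprocess
import sys

def stream_lines(cmd):
    # Mirror runSh's output=True branch: merged stdout/stderr, read line by line.
    proc = subprocess.Popen(shlex.split(cmd),
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    lines = []
    while True:
        line = proc.stdout.readline()
        if line == b"" and proc.poll() is not None:
            return lines, proc.returncode
        if line:
            lines.append(line.decode("utf-8").strip())

cmd = "%s -c \"print('one'); print('two')\"" % sys.executable
lines, code = stream_lines(cmd)
```

Reading line by line rather than waiting on `communicate()` is what lets the notebook show long-running tool output (mkvmerge, mediainfo) as it is produced.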
def installMkvTools():
if checkAvailable("/etc/apt/sources.list.d/mkvtoolnix.download.list"):
return
if checkAvailable("/content/sample_data"):
runSh("rm -rf /content/sample_data")
with open("/etc/apt/sources.list.d/mkvtoolnix.download.list", "w+") as outFile:
outFile.write("deb https://mkvtoolnix.download/ubuntu/ bionic main")
runSh(
"wget -q -O - https://mkvtoolnix.download/gpg-pub-moritzbunkus.txt | sudo apt-key add - && sudo apt-get install mkvtoolnix mkvtoolnix-gui",
shell=True, # nosec
)
if not checkAvailable("/usr/bin/mediainfo"):
runSh("apt-get install mediainfo")
if not checkAvailable("/content/drive/Shared drives"):
from google.colab import drive # pylint: disable=import-error
from IPython.display import clear_output
try:
drive.mount("/content/drive")
clear_output(wait=True)
except:
clear_output()
def displayOutput(operationName="", color="#ce2121"):
from IPython.display import HTML, display # pylint: disable=import-error
if color == "success":
hColor = "#28a745"
displayTxt = f"👍 Operation {operationName} has been successfully completed."
elif color == "danger":
hColor = "#dc3545"
        displayTxt = f"❌ Operation {operationName} failed."
elif color == "info":
hColor = "#17a2b8"
displayTxt = f"👋 Operation {operationName} has some info."
elif color == "warning":
hColor = "#ffc107"
        displayTxt = f"⚠ Operation {operationName} finished with a warning."
else:
hColor = "#ffc107"
displayTxt = f"{operationName} works."
display(
HTML(
f"""
<center>
<h2 style="font-family:monospace;color:{hColor};">
{displayTxt}
</h2>
<br>
</center>
"""
)
)
# ================================================================ #
def generateMediainfo():
installMkvTools()
if not FILE_PATH:
        print("Please enter the path to a media file on the mounted drive.")
return
runSh("mkdir -m 777 -p /content/mediainfo")
logFilePath = f"/content/mediainfo/{_p.basename(FILE_PATH).rpartition('.')[0]}.txt"
if checkAvailable(logFilePath):
runSh(f"rm -f {logFilePath}")
runSh(f'mediainfo -f --LogFile="{logFilePath}" "{FILE_PATH}"')
runSh(f"mkvmerge -i '{FILE_PATH}' >> '{logFilePath}'", shell=True) # nosec
with open(logFilePath, "r") as file:
media = file.read()
media = media.replace(f"{_p.dirname(FILE_PATH)}/", "")
print(media)
def executeMKVTool():
installMkvTools()
if "/content/drive/Shared drives/" in FILE_PATH:
print("File must be downloaded to local disk.")
return
if MODE == "MKVMERGE":
if not OUTPUT_FILE_PATH:
print("Must specify Output Path.")
return
runSh(
f'mkvmerge -o "{OUTPUT_FILE_PATH}" {OPTION} "{FILE_PATH}" -v', output=True
)
elif MODE == "MKVPROPEDIT":
runSh(f'mkvpropedit "{FILE_PATH}" {OPTION}', output=True)
else:
runSh(f'mkvinfo "{FILE_PATH}"', output=True)
displayOutput(MODE, "success")
if MODE == "MEDIAINFO":
generateMediainfo()
else:
executeMKVTool()
# ============================= FORM ============================= #
#@markdown <h3>⬅️ Speedtest</h3>
#@markdown <h4>📝 Note: Rerun this cell to test.</h4>
# ================================================================ #
import os
import re
import csv
import sys
import math
import errno
import signal
import socket
import timeit
import datetime
import platform
import threading
import xml.parsers.expat
try:
import gzip
GZIP_BASE = gzip.GzipFile
except ImportError:
gzip = None
GZIP_BASE = object
__version__ = '2.1.1'
class FakeShutdownEvent(object):
"""Class to fake a threading.Event.isSet so that users of this module
are not required to register their own threading.Event()
"""
@staticmethod
def isSet():
        """Dummy method to always return false"""
return False
# Some global variables we use
DEBUG = False
_GLOBAL_DEFAULT_TIMEOUT = object()
# Begin import game to handle Python 2 and Python 3
try:
import json
except ImportError:
try:
import simplejson as json
except ImportError:
json = None
try:
import xml.etree.cElementTree as ET
except ImportError:
try:
import xml.etree.ElementTree as ET
except ImportError:
from xml.dom import minidom as DOM
from xml.parsers.expat import ExpatError
ET = None
try:
from urllib2 import (urlopen, Request, HTTPError, URLError,
AbstractHTTPHandler, ProxyHandler,
HTTPDefaultErrorHandler, HTTPRedirectHandler,
HTTPErrorProcessor, OpenerDirector)
except ImportError:
from urllib.request import (urlopen, Request, HTTPError, URLError,
AbstractHTTPHandler, ProxyHandler,
HTTPDefaultErrorHandler, HTTPRedirectHandler,
HTTPErrorProcessor, OpenerDirector)
try:
from httplib import HTTPConnection, BadStatusLine
except ImportError:
from http.client import HTTPConnection, BadStatusLine
try:
from httplib import HTTPSConnection
except ImportError:
try:
from http.client import HTTPSConnection
except ImportError:
HTTPSConnection = None
try:
from httplib import FakeSocket
except ImportError:
FakeSocket = None
try:
from Queue import Queue
except ImportError:
from queue import Queue
try:
from urlparse import urlparse
except ImportError:
from urllib.parse import urlparse
try:
from urlparse import parse_qs
except ImportError:
try:
from urllib.parse import parse_qs
except ImportError:
from cgi import parse_qs
try:
from hashlib import md5
except ImportError:
from md5 import md5
try:
from argparse import ArgumentParser as ArgParser
from argparse import SUPPRESS as ARG_SUPPRESS
PARSER_TYPE_INT = int
PARSER_TYPE_STR = str
PARSER_TYPE_FLOAT = float
except ImportError:
from optparse import OptionParser as ArgParser
from optparse import SUPPRESS_HELP as ARG_SUPPRESS
PARSER_TYPE_INT = 'int'
PARSER_TYPE_STR = 'string'
PARSER_TYPE_FLOAT = 'float'
try:
from cStringIO import StringIO
BytesIO = None
except ImportError:
try:
from StringIO import StringIO
BytesIO = None
except ImportError:
from io import StringIO, BytesIO
try:
import __builtin__
except ImportError:
import builtins
from io import TextIOWrapper, FileIO
class _Py3Utf8Output(TextIOWrapper):
"""UTF-8 encoded wrapper around stdout for py3, to override
ASCII stdout
"""
def __init__(self, f, **kwargs):
buf = FileIO(f.fileno(), 'w')
super(_Py3Utf8Output, self).__init__(
buf,
encoding='utf8',
errors='strict'
)
def write(self, s):
super(_Py3Utf8Output, self).write(s)
self.flush()
_py3_print = getattr(builtins, 'print')
try:
_py3_utf8_stdout = _Py3Utf8Output(sys.stdout)
_py3_utf8_stderr = _Py3Utf8Output(sys.stderr)
except OSError:
# sys.stdout/sys.stderr is not a compatible stdout/stderr object
# just use it and hope things go ok
_py3_utf8_stdout = sys.stdout
_py3_utf8_stderr = sys.stderr
def to_utf8(v):
"""No-op encode to utf-8 for py3"""
return v
def print_(*args, **kwargs):
"""Wrapper function for py3 to print, with a utf-8 encoded stdout"""
if kwargs.get('file') == sys.stderr:
kwargs['file'] = _py3_utf8_stderr
else:
kwargs['file'] = kwargs.get('file', _py3_utf8_stdout)
_py3_print(*args, **kwargs)
else:
del __builtin__
def to_utf8(v):
"""Encode value to utf-8 if possible for py2"""
try:
return v.encode('utf8', 'strict')
except AttributeError:
return v
def print_(*args, **kwargs):
"""The new-style print function for Python 2.4 and 2.5.
Taken from https://pypi.python.org/pypi/six/
Modified to set encoding to UTF-8 always, and to flush after write
"""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
# If the file has an encoding, encode unicode with it.
encoding = 'utf8' # Always trust UTF-8 for output
if (isinstance(fp, file) and
isinstance(data, unicode) and
encoding is not None):
errors = getattr(fp, "errors", None)
if errors is None:
errors = "strict"
data = data.encode(encoding, errors)
fp.write(data)
fp.flush()
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
# Exception "constants" to support Python 2 through Python 3
try:
import ssl
try:
CERT_ERROR = (ssl.CertificateError,)
except AttributeError:
CERT_ERROR = tuple()
HTTP_ERRORS = (
(HTTPError, URLError, socket.error, ssl.SSLError, BadStatusLine) +
CERT_ERROR
)
except ImportError:
ssl = None
HTTP_ERRORS = (HTTPError, URLError, socket.error, BadStatusLine)
class SpeedtestException(Exception):
"""Base exception for this module"""
class SpeedtestCLIError(SpeedtestException):
"""Generic exception for raising errors during CLI operation"""
class SpeedtestHTTPError(SpeedtestException):
"""Base HTTP exception for this module"""
class SpeedtestConfigError(SpeedtestException):
"""Configuration XML is invalid"""
class SpeedtestServersError(SpeedtestException):
"""Servers XML is invalid"""
class ConfigRetrievalError(SpeedtestHTTPError):
"""Could not retrieve config.php"""
class ServersRetrievalError(SpeedtestHTTPError):
"""Could not retrieve speedtest-servers.php"""
class InvalidServerIDType(SpeedtestException):
"""Server ID used for filtering was not an integer"""
class NoMatchedServers(SpeedtestException):
"""No servers matched when filtering"""
class SpeedtestMiniConnectFailure(SpeedtestException):
"""Could not connect to the provided speedtest mini server"""
class InvalidSpeedtestMiniServer(SpeedtestException):
"""Server provided as a speedtest mini server does not actually appear
to be a speedtest mini server
"""
class ShareResultsConnectFailure(SpeedtestException):
"""Could not connect to speedtest.net API to POST results"""
class ShareResultsSubmitFailure(SpeedtestException):
"""Unable to successfully POST results to speedtest.net API after
connection
"""
class SpeedtestUploadTimeout(SpeedtestException):
"""testlength configuration reached during upload
Used to ensure the upload halts when no additional data should be sent
"""
class SpeedtestBestServerFailure(SpeedtestException):
"""Unable to determine best server"""
class SpeedtestMissingBestServer(SpeedtestException):
"""get_best_server not called or not able to determine best server"""
def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT,
source_address=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
    A host of '' or port 0 tells the OS to use the default.
Largely vendored from Python 2.7, modified to work with Python 2.4
"""
host, port = address
err = None
for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
if timeout is not _GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(float(timeout))
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error:
err = get_exception()
if sock is not None:
sock.close()
if err is not None:
raise err
else:
raise socket.error("getaddrinfo returns an empty list")
class SpeedtestHTTPConnection(HTTPConnection):
"""Custom HTTPConnection to support source_address across
Python 2.4 - Python 3
"""
def __init__(self, *args, **kwargs):
source_address = kwargs.pop('source_address', None)
timeout = kwargs.pop('timeout', 10)
HTTPConnection.__init__(self, *args, **kwargs)
self.source_address = source_address
self.timeout = timeout
def connect(self):
"""Connect to the host and port specified in __init__."""
try:
self.sock = socket.create_connection(
(self.host, self.port),
self.timeout,
self.source_address
)
except (AttributeError, TypeError):
self.sock = create_connection(
(self.host, self.port),
self.timeout,
self.source_address
)
if HTTPSConnection:
class SpeedtestHTTPSConnection(HTTPSConnection,
SpeedtestHTTPConnection):
"""Custom HTTPSConnection to support source_address across
Python 2.4 - Python 3
"""
def __init__(self, *args, **kwargs):
source_address = kwargs.pop('source_address', None)
timeout = kwargs.pop('timeout', 10)
HTTPSConnection.__init__(self, *args, **kwargs)
self.timeout = timeout
self.source_address = source_address
def connect(self):
"Connect to a host on a given (SSL) port."
SpeedtestHTTPConnection.connect(self)
if ssl:
try:
kwargs = {}
if hasattr(ssl, 'SSLContext'):
kwargs['server_hostname'] = self.host
self.sock = self._context.wrap_socket(self.sock, **kwargs)
except AttributeError:
self.sock = ssl.wrap_socket(self.sock)
try:
self.sock.server_hostname = self.host
except AttributeError:
pass
elif FakeSocket:
# Python 2.4/2.5 support
try:
self.sock = FakeSocket(self.sock, socket.ssl(self.sock))
except AttributeError:
raise SpeedtestException(
'This version of Python does not support HTTPS/SSL '
'functionality'
)
else:
raise SpeedtestException(
'This version of Python does not support HTTPS/SSL '
'functionality'
)
def _build_connection(connection, source_address, timeout, context=None):
"""Cross Python 2.4 - Python 3 callable to build an ``HTTPConnection`` or
``HTTPSConnection`` with the args we need
Called from ``http(s)_open`` methods of ``SpeedtestHTTPHandler`` or
``SpeedtestHTTPSHandler``
"""
def inner(host, **kwargs):
kwargs.update({
'source_address': source_address,
'timeout': timeout
})
if context:
kwargs['context'] = context
return connection(host, **kwargs)
return inner
class SpeedtestHTTPHandler(AbstractHTTPHandler):
"""Custom ``HTTPHandler`` that can build a ``HTTPConnection`` with the
args we need for ``source_address`` and ``timeout``
"""
def __init__(self, debuglevel=0, source_address=None, timeout=10):
AbstractHTTPHandler.__init__(self, debuglevel)
self.source_address = source_address
self.timeout = timeout
def http_open(self, req):
return self.do_open(
_build_connection(
SpeedtestHTTPConnection,
self.source_address,
self.timeout
),
req
)
http_request = AbstractHTTPHandler.do_request_
class SpeedtestHTTPSHandler(AbstractHTTPHandler):
"""Custom ``HTTPSHandler`` that can build a ``HTTPSConnection`` with the
args we need for ``source_address`` and ``timeout``
"""
def __init__(self, debuglevel=0, context=None, source_address=None,
timeout=10):
AbstractHTTPHandler.__init__(self, debuglevel)
self._context = context
self.source_address = source_address
self.timeout = timeout
def https_open(self, req):
return self.do_open(
_build_connection(
SpeedtestHTTPSConnection,
self.source_address,
self.timeout,
context=self._context,
),
req
)
https_request = AbstractHTTPHandler.do_request_
def build_opener(source_address=None, timeout=10):
"""Function similar to ``urllib2.build_opener`` that will build
an ``OpenerDirector`` with the explicit handlers we want,
``source_address`` for binding, ``timeout`` and our custom
`User-Agent`
"""
printer('Timeout set to %d' % timeout, debug=True)
if source_address:
source_address_tuple = (source_address, 0)
printer('Binding to source address: %r' % (source_address_tuple,),
debug=True)
else:
source_address_tuple = None
handlers = [
ProxyHandler(),
SpeedtestHTTPHandler(source_address=source_address_tuple,
timeout=timeout),
SpeedtestHTTPSHandler(source_address=source_address_tuple,
timeout=timeout),
HTTPDefaultErrorHandler(),
HTTPRedirectHandler(),
HTTPErrorProcessor()
]
opener = OpenerDirector()
opener.addheaders = [('User-agent', build_user_agent())]
for handler in handlers:
opener.add_handler(handler)
return opener
class GzipDecodedResponse(GZIP_BASE):
"""A file-like object to decode a response encoded with the gzip
method, as described in RFC 1952.
Largely copied from ``xmlrpclib``/``xmlrpc.client`` and modified
to work for py2.4-py3
"""
def __init__(self, response):
# response doesn't support tell() and read(), required by
# GzipFile
if not gzip:
raise SpeedtestHTTPError('HTTP response body is gzip encoded, '
'but gzip support is not available')
IO = BytesIO or StringIO
self.io = IO()
while 1:
chunk = response.read(1024)
if len(chunk) == 0:
break
self.io.write(chunk)
self.io.seek(0)
gzip.GzipFile.__init__(self, mode='rb', fileobj=self.io)
def close(self):
try:
gzip.GzipFile.close(self)
finally:
self.io.close()
def get_exception():
"""Helper function to work with py2.4-py3 for getting the current
exception in a try/except block
"""
return sys.exc_info()[1]
def distance(origin, destination):
"""Determine distance between 2 sets of [lat,lon] in km"""
lat1, lon1 = origin
lat2, lon2 = destination
radius = 6371 # km
dlat = math.radians(lat2 - lat1)
dlon = math.radians(lon2 - lon1)
a = (math.sin(dlat / 2) * math.sin(dlat / 2) +
math.cos(math.radians(lat1)) *
math.cos(math.radians(lat2)) * math.sin(dlon / 2) *
math.sin(dlon / 2))
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
d = radius * c
return d
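`distance` above is the haversine great-circle formula with an Earth radius of 6371 km. A quick sanity check, restated standalone so it can run without the rest of the vendored module (the coordinate pair is just an illustrative example):

```python
import math

def haversine_km(origin, destination):
    # Same formula as distance() above: haversine with R = 6371 km.
    lat1, lon1 = origin
    lat2, lon2 = destination
    radius = 6371  # km
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dlon / 2) ** 2)
    return radius * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

# One degree of longitude on the equator is roughly 111.2 km.
d = haversine_km((0.0, 0.0), (0.0, 1.0))
```

speedtest-cli uses this to rank candidate servers by physical proximity before latency-testing the closest ones.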
def build_user_agent():
"""Build a Mozilla/5.0 compatible User-Agent string"""
ua_tuple = (
'Mozilla/5.0',
'(%s; U; %s; en-us)' % (platform.platform(),
platform.architecture()[0]),
'Python/%s' % platform.python_version(),
'(KHTML, like Gecko)',
'speedtest-cli/%s' % __version__
)
user_agent = ' '.join(ua_tuple)
printer('User-Agent: %s' % user_agent, debug=True)
return user_agent
def build_request(url, data=None, headers=None, bump='0', secure=False):
"""Build a urllib2 request object
This function automatically adds a User-Agent header to all requests
"""
if not headers:
headers = {}
if url[0] == ':':
scheme = ('http', 'https')[bool(secure)]
schemed_url = '%s%s' % (scheme, url)
else:
schemed_url = url
if '?' in url:
delim = '&'
else:
delim = '?'
# WHO YOU GONNA CALL? CACHE BUSTERS!
final_url = '%s%sx=%s.%s' % (schemed_url, delim,
int(timeit.time.time() * 1000),
bump)
headers.update({
'Cache-Control': 'no-cache',
})
printer('%s %s' % (('GET', 'POST')[bool(data)], final_url),
debug=True)
return Request(final_url, data=data, headers=headers)
def catch_request(request, opener=None):
"""Helper function to catch common exceptions encountered when
establishing a connection with an HTTP/HTTPS request
"""
if opener:
_open = opener.open
else:
_open = urlopen
try:
uh = _open(request)
if request.get_full_url() != uh.geturl():
printer('Redirected to %s' % uh.geturl(), debug=True)
return uh, False
except HTTP_ERRORS:
e = get_exception()
return None, e
def get_response_stream(response):
"""Helper function to return either a Gzip reader if
``Content-Encoding`` is ``gzip`` otherwise the response itself
"""
try:
getheader = response.headers.getheader
except AttributeError:
getheader = response.getheader
if getheader('content-encoding') == 'gzip':
return GzipDecodedResponse(response)
return response
def get_attributes_by_tag_name(dom, tag_name):
"""Retrieve an attribute from an XML document and return it in a
consistent format
Only used with xml.dom.minidom, which is likely only to be used
with python versions older than 2.5
"""
elem = dom.getElementsByTagName(tag_name)[0]
return dict(list(elem.attributes.items()))
def print_dots(shutdown_event):
"""Built in callback function used by Thread classes for printing
status
"""
def inner(current, total, start=False, end=False):
if shutdown_event.isSet():
return
sys.stdout.write('.')
if current + 1 == total and end is True:
sys.stdout.write('\n')
sys.stdout.flush()
return inner
def do_nothing(*args, **kwargs):
pass
class HTTPDownloader(threading.Thread):
"""Thread class for retrieving a URL"""
def __init__(self, i, request, start, timeout, opener=None,
shutdown_event=None):
threading.Thread.__init__(self)
self.request = request
self.result = [0]
self.starttime = start
self.timeout = timeout
self.i = i
if opener:
self._opener = opener.open
else:
self._opener = urlopen
if shutdown_event:
self._shutdown_event = shutdown_event
else:
self._shutdown_event = FakeShutdownEvent()
def run(self):
try:
if (timeit.default_timer() - self.starttime) <= self.timeout:
f = self._opener(self.request)
while (not self._shutdown_event.isSet() and
(timeit.default_timer() - self.starttime) <=
self.timeout):
self.result.append(len(f.read(10240)))
if self.result[-1] == 0:
break
f.close()
except IOError:
pass
class HTTPUploaderData(object):
"""File like object to improve cutting off the upload once the timeout
has been reached
"""
def __init__(self, length, start, timeout, shutdown_event=None):
self.length = length
self.start = start
self.timeout = timeout
if shutdown_event:
self._shutdown_event = shutdown_event
else:
self._shutdown_event = FakeShutdownEvent()
self._data = None
self.total = [0]
def pre_allocate(self):
chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
multiplier = int(round(int(self.length) / 36.0))
IO = BytesIO or StringIO
try:
self._data = IO(
('content1=%s' %
(chars * multiplier)[0:int(self.length) - 9]
).encode()
)
except MemoryError:
raise SpeedtestCLIError(
'Insufficient memory to pre-allocate upload data. Please '
'use --no-pre-allocate'
)
@property
def data(self):
if not self._data:
self.pre_allocate()
return self._data
def read(self, n=10240):
if ((timeit.default_timer() - self.start) <= self.timeout and
not self._shutdown_event.isSet()):
chunk = self.data.read(n)
self.total.append(len(chunk))
return chunk
else:
raise SpeedtestUploadTimeout()
def __len__(self):
return self.length
class HTTPUploader(threading.Thread):
"""Thread class for putting a URL"""
def __init__(self, i, request, start, size, timeout, opener=None,
shutdown_event=None):
threading.Thread.__init__(self)
self.request = request
self.request.data.start = self.starttime = start
self.size = size
self.result = None
self.timeout = timeout
self.i = i
if opener:
self._opener = opener.open
else:
self._opener = urlopen
if shutdown_event:
self._shutdown_event = shutdown_event
else:
self._shutdown_event = FakeShutdownEvent()
def run(self):
request = self.request
try:
if ((timeit.default_timer() - self.starttime) <= self.timeout and
not self._shutdown_event.isSet()):
try:
f = self._opener(request)
except TypeError:
# PY24 expects a string or buffer
# This also causes issues with Ctrl-C, but we will concede
# for the moment that Ctrl-C on PY24 isn't immediate
request = build_request(self.request.get_full_url(),
data=request.data.read(self.size))
f = self._opener(request)
f.read(11)
f.close()
self.result = sum(self.request.data.total)
else:
self.result = 0
except (IOError, SpeedtestUploadTimeout):
self.result = sum(self.request.data.total)
class SpeedtestResults(object):
"""Class for holding the results of a speedtest, including:
Download speed
Upload speed
Ping/Latency to test server
Data about server that the test was run against
Additionally, this class can return result data as a dictionary or CSV,
as well as submit a POST of the result data to the speedtest.net API
to get a share results image link.
"""
def __init__(self, download=0, upload=0, ping=0, server=None, client=None,
opener=None, secure=False):
self.download = download
self.upload = upload
self.ping = ping
if server is None:
self.server = {}
else:
self.server = server
self.client = client or {}
self._share = None
self.timestamp = '%sZ' % datetime.datetime.utcnow().isoformat()
self.bytes_received = 0
self.bytes_sent = 0
if opener:
self._opener = opener
else:
self._opener = build_opener()
self._secure = secure
def __repr__(self):
return repr(self.dict())
def share(self):
"""POST data to the speedtest.net API to obtain a share results
link
"""
if self._share:
return self._share
download = int(round(self.download / 1000.0, 0))
ping = int(round(self.ping, 0))
upload = int(round(self.upload / 1000.0, 0))
# Build the request to send results back to speedtest.net
# We use a list instead of a dict because the API expects parameters
# in a certain order
api_data = [
'recommendedserverid=%s' % self.server['id'],
'ping=%s' % ping,
'screenresolution=',
'promo=',
'download=%s' % download,
'screendpi=',
'upload=%s' % upload,
'testmethod=http',
'hash=%s' % md5(('%s-%s-%s-%s' %
(ping, upload, download, '297aae72'))
.encode()).hexdigest(),
'touchscreen=none',
'startmode=pingselect',
'accuracy=1',
'bytesreceived=%s' % self.bytes_received,
'bytessent=%s' % self.bytes_sent,
'serverid=%s' % self.server['id'],
]
headers = {'Referer': 'http://c.speedtest.net/flash/speedtest.swf'}
request = build_request('://www.speedtest.net/api/api.php',
data='&'.join(api_data).encode(),
headers=headers, secure=self._secure)
f, e = catch_request(request, opener=self._opener)
if e:
raise ShareResultsConnectFailure(e)
response = f.read()
code = f.code
f.close()
if int(code) != 200:
raise ShareResultsSubmitFailure('Could not submit results to '
'speedtest.net')
qsargs = parse_qs(response.decode())
resultid = qsargs.get('resultid')
if not resultid or len(resultid) != 1:
raise ShareResultsSubmitFailure('Could not submit results to '
'speedtest.net')
self._share = 'http://www.speedtest.net/result/%s.png' % resultid[0]
return self._share
def dict(self):
"""Return dictionary of result data"""
return {
'download': self.download,
'upload': self.upload,
'ping': self.ping,
'server': self.server,
'timestamp': self.timestamp,
'bytes_sent': self.bytes_sent,
'bytes_received': self.bytes_received,
'share': self._share,
'client': self.client,
}
@staticmethod
def csv_header(delimiter=','):
"""Return CSV Headers"""
row = ['Server ID', 'Sponsor', 'Server Name', 'Timestamp', 'Distance',
'Ping', 'Download', 'Upload', 'Share', 'IP Address']
out = StringIO()
writer = csv.writer(out, delimiter=delimiter, lineterminator='')
writer.writerow([to_utf8(v) for v in row])
return out.getvalue()
def csv(self, delimiter=','):
"""Return data in CSV format"""
data = self.dict()
out = StringIO()
writer = csv.writer(out, delimiter=delimiter, lineterminator='')
row = [data['server']['id'], data['server']['sponsor'],
data['server']['name'], data['timestamp'],
data['server']['d'], data['ping'], data['download'],
data['upload'], self._share or '', self.client['ip']]
writer.writerow([to_utf8(v) for v in row])
return out.getvalue()
def json(self, pretty=False):
"""Return data in JSON format"""
kwargs = {}
if pretty:
kwargs.update({
'indent': 4,
'sort_keys': True
})
return json.dumps(self.dict(), **kwargs)
class Speedtest(object):
"""Class for performing standard speedtest.net testing operations"""
def __init__(self, config=None, source_address=None, timeout=10,
secure=False, shutdown_event=None):
self.config = {}
self._source_address = source_address
self._timeout = timeout
self._opener = build_opener(source_address, timeout)
self._secure = secure
if shutdown_event:
self._shutdown_event = shutdown_event
else:
self._shutdown_event = FakeShutdownEvent()
self.get_config()
if config is not None:
self.config.update(config)
self.servers = {}
self.closest = []
self._best = {}
self.results = SpeedtestResults(
client=self.config['client'],
opener=self._opener,
secure=secure,
)
@property
def best(self):
if not self._best:
self.get_best_server()
return self._best
def get_config(self):
"""Download the speedtest.net configuration and return only the data
we are interested in
"""
headers = {}
if gzip:
headers['Accept-Encoding'] = 'gzip'
request = build_request('://www.speedtest.net/speedtest-config.php',
headers=headers, secure=self._secure)
uh, e = catch_request(request, opener=self._opener)
if e:
raise ConfigRetrievalError(e)
configxml_list = []
stream = get_response_stream(uh)
while 1:
try:
configxml_list.append(stream.read(1024))
except (OSError, EOFError):
raise ConfigRetrievalError(get_exception())
if len(configxml_list[-1]) == 0:
break
stream.close()
uh.close()
if int(uh.code) != 200:
return None
configxml = ''.encode().join(configxml_list)
printer('Config XML:\n%s' % configxml, debug=True)
try:
try:
root = ET.fromstring(configxml)
except ET.ParseError:
e = get_exception()
raise SpeedtestConfigError(
'Malformed speedtest.net configuration: %s' % e
)
server_config = root.find('server-config').attrib
download = root.find('download').attrib
upload = root.find('upload').attrib
# times = root.find('times').attrib
client = root.find('client').attrib
except AttributeError:
try:
root = DOM.parseString(configxml)
except ExpatError:
e = get_exception()
raise SpeedtestConfigError(
'Malformed speedtest.net configuration: %s' % e
)
server_config = get_attributes_by_tag_name(root, 'server-config')
download = get_attributes_by_tag_name(root, 'download')
upload = get_attributes_by_tag_name(root, 'upload')
# times = get_attributes_by_tag_name(root, 'times')
client = get_attributes_by_tag_name(root, 'client')
ignore_servers = list(
map(int, server_config['ignoreids'].split(','))
)
ratio = int(upload['ratio'])
upload_max = int(upload['maxchunkcount'])
up_sizes = [32768, 65536, 131072, 262144, 524288, 1048576, 7340032]
sizes = {
'upload': up_sizes[ratio - 1:],
'download': [350, 500, 750, 1000, 1500, 2000, 2500,
3000, 3500, 4000]
}
size_count = len(sizes['upload'])
upload_count = int(math.ceil(upload_max / size_count))
counts = {
'upload': upload_count,
'download': int(download['threadsperurl'])
}
threads = {
'upload': int(upload['threads']),
'download': int(server_config['threadcount']) * 2
}
length = {
'upload': int(upload['testlength']),
'download': int(download['testlength'])
}
self.config.update({
'client': client,
'ignore_servers': ignore_servers,
'sizes': sizes,
'counts': counts,
'threads': threads,
'length': length,
'upload_max': upload_count * size_count
})
try:
self.lat_lon = (float(client['lat']), float(client['lon']))
except ValueError:
raise SpeedtestConfigError(
'Unknown location: lat=%r lon=%r' %
(client.get('lat'), client.get('lon'))
)
printer('Config:\n%r' % self.config, debug=True)
return self.config
def get_servers(self, servers=None, exclude=None):
"""Retrieve a the list of speedtest.net servers, optionally filtered
to servers matching those specified in the ``servers`` argument
"""
if servers is None:
servers = []
if exclude is None:
exclude = []
self.servers.clear()
for server_list in (servers, exclude):
for i, s in enumerate(server_list):
try:
server_list[i] = int(s)
except ValueError:
raise InvalidServerIDType(
'%s is an invalid server type, must be int' % s
)
urls = [
'://www.speedtest.net/speedtest-servers-static.php',
'http://c.speedtest.net/speedtest-servers-static.php',
'://www.speedtest.net/speedtest-servers.php',
'http://c.speedtest.net/speedtest-servers.php',
]
headers = {}
if gzip:
headers['Accept-Encoding'] = 'gzip'
errors = []
for url in urls:
try:
request = build_request(
'%s?threads=%s' % (url,
self.config['threads']['download']),
headers=headers,
secure=self._secure
)
uh, e = catch_request(request, opener=self._opener)
if e:
errors.append('%s' % e)
raise ServersRetrievalError()
stream = get_response_stream(uh)
serversxml_list = []
while 1:
try:
serversxml_list.append(stream.read(1024))
except (OSError, EOFError):
raise ServersRetrievalError(get_exception())
if len(serversxml_list[-1]) == 0:
break
stream.close()
uh.close()
if int(uh.code) != 200:
raise ServersRetrievalError()
serversxml = ''.encode().join(serversxml_list)
printer('Servers XML:\n%s' % serversxml, debug=True)
try:
try:
try:
root = ET.fromstring(serversxml)
except ET.ParseError:
e = get_exception()
raise SpeedtestServersError(
'Malformed speedtest.net server list: %s' % e
)
elements = root.getiterator('server')
except AttributeError:
try:
root = DOM.parseString(serversxml)
except ExpatError:
e = get_exception()
raise SpeedtestServersError(
'Malformed speedtest.net server list: %s' % e
)
elements = root.getElementsByTagName('server')
except (SyntaxError, xml.parsers.expat.ExpatError):
raise ServersRetrievalError()
for server in elements:
try:
attrib = server.attrib
except AttributeError:
attrib = dict(list(server.attributes.items()))
if servers and int(attrib.get('id')) not in servers:
continue
if (int(attrib.get('id')) in self.config['ignore_servers']
or int(attrib.get('id')) in exclude):
continue
try:
d = distance(self.lat_lon,
(float(attrib.get('lat')),
float(attrib.get('lon'))))
except Exception:
continue
attrib['d'] = d
try:
self.servers[d].append(attrib)
except KeyError:
self.servers[d] = [attrib]
break
except ServersRetrievalError:
continue
if (servers or exclude) and not self.servers:
raise NoMatchedServers()
return self.servers
def set_mini_server(self, server):
"""Instead of querying for a list of servers, set a link to a
speedtest mini server
"""
urlparts = urlparse(server)
name, ext = os.path.splitext(urlparts[2])
if ext:
url = os.path.dirname(server)
else:
url = server
request = build_request(url)
uh, e = catch_request(request, opener=self._opener)
if e:
raise SpeedtestMiniConnectFailure('Failed to connect to %s' %
server)
else:
text = uh.read()
uh.close()
extension = re.findall('upload_?[Ee]xtension: "([^"]+)"',
text.decode())
if not extension:
for ext in ['php', 'asp', 'aspx', 'jsp']:
try:
f = self._opener.open(
'%s/speedtest/upload.%s' % (url, ext)
)
except Exception:
pass
else:
data = f.read().strip().decode()
if (f.code == 200 and
len(data.splitlines()) == 1 and
re.match('size=[0-9]', data)):
extension = [ext]
break
if not urlparts or not extension:
raise InvalidSpeedtestMiniServer('Invalid Speedtest Mini Server: '
'%s' % server)
self.servers = [{
'sponsor': 'Speedtest Mini',
'name': urlparts[1],
'd': 0,
'url': '%s/speedtest/upload.%s' % (url.rstrip('/'), extension[0]),
'latency': 0,
'id': 0
}]
return self.servers
def get_closest_servers(self, limit=5):
"""Limit servers to the closest speedtest.net servers based on
geographic distance
"""
if not self.servers:
self.get_servers()
for d in sorted(self.servers.keys()):
for s in self.servers[d]:
self.closest.append(s)
if len(self.closest) == limit:
break
else:
continue
break
printer('Closest Servers:\n%r' % self.closest, debug=True)
return self.closest
def get_best_server(self, servers=None):
"""Perform a speedtest.net "ping" to determine which speedtest.net
server has the lowest latency
"""
if not servers:
if not self.closest:
servers = self.get_closest_servers()
servers = self.closest
if self._source_address:
source_address_tuple = (self._source_address, 0)
else:
source_address_tuple = None
user_agent = build_user_agent()
results = {}
for server in servers:
cum = []
url = os.path.dirname(server['url'])
stamp = int(timeit.time.time() * 1000)
latency_url = '%s/latency.txt?x=%s' % (url, stamp)
for i in range(0, 3):
this_latency_url = '%s.%s' % (latency_url, i)
printer('%s %s' % ('GET', this_latency_url),
debug=True)
urlparts = urlparse(latency_url)
try:
if urlparts[0] == 'https':
h = SpeedtestHTTPSConnection(
urlparts[1],
source_address=source_address_tuple
)
else:
h = SpeedtestHTTPConnection(
urlparts[1],
source_address=source_address_tuple
)
headers = {'User-Agent': user_agent}
path = '%s?%s' % (urlparts[2], urlparts[4])
start = timeit.default_timer()
h.request("GET", path, headers=headers)
r = h.getresponse()
total = (timeit.default_timer() - start)
except HTTP_ERRORS:
e = get_exception()
printer('ERROR: %r' % e, debug=True)
cum.append(3600)
continue
text = r.read(9)
if int(r.status) == 200 and text == 'test=test'.encode():
cum.append(total)
else:
cum.append(3600)
h.close()
avg = round((sum(cum) / 6) * 1000.0, 3)
results[avg] = server
try:
fastest = sorted(results.keys())[0]
except IndexError:
raise SpeedtestBestServerFailure('Unable to connect to servers to '
'test latency.')
best = results[fastest]
best['latency'] = fastest
self.results.ping = fastest
self.results.server = best
self._best.update(best)
printer('Best Server:\n%r' % best, debug=True)
return best
def download(self, callback=do_nothing, threads=None):
"""Test download speed against speedtest.net
A ``threads`` value of ``None`` will fall back to those dictated
by the speedtest.net configuration
"""
urls = []
for size in self.config['sizes']['download']:
for _ in range(0, self.config['counts']['download']):
urls.append('%s/random%sx%s.jpg' %
(os.path.dirname(self.best['url']), size, size))
request_count = len(urls)
requests = []
for i, url in enumerate(urls):
requests.append(
build_request(url, bump=i, secure=self._secure)
)
def producer(q, requests, request_count):
for i, request in enumerate(requests):
thread = HTTPDownloader(
i,
request,
start,
self.config['length']['download'],
opener=self._opener,
shutdown_event=self._shutdown_event
)
thread.start()
q.put(thread, True)
callback(i, request_count, start=True)
finished = []
def consumer(q, request_count):
while len(finished) < request_count:
thread = q.get(True)
while thread.isAlive():
thread.join(timeout=0.1)
finished.append(sum(thread.result))
callback(thread.i, request_count, end=True)
q = Queue(threads or self.config['threads']['download'])
prod_thread = threading.Thread(target=producer,
args=(q, requests, request_count))
cons_thread = threading.Thread(target=consumer,
args=(q, request_count))
start = timeit.default_timer()
prod_thread.start()
cons_thread.start()
while prod_thread.isAlive():
prod_thread.join(timeout=0.1)
while cons_thread.isAlive():
cons_thread.join(timeout=0.1)
stop = timeit.default_timer()
self.results.bytes_received = sum(finished)
self.results.download = (
(self.results.bytes_received / (stop - start)) * 8.0
)
if self.results.download > 100000:
self.config['threads']['upload'] = 8
return self.results.download
def upload(self, callback=do_nothing, pre_allocate=True, threads=None):
"""Test upload speed against speedtest.net
A ``threads`` value of ``None`` will fall back to those dictated
by the speedtest.net configuration
"""
sizes = []
for size in self.config['sizes']['upload']:
for _ in range(0, self.config['counts']['upload']):
sizes.append(size)
# request_count = len(sizes)
request_count = self.config['upload_max']
requests = []
for i, size in enumerate(sizes):
# We set ``0`` for ``start`` and handle setting the actual
# ``start`` in ``HTTPUploader`` to get better measurements
data = HTTPUploaderData(
size,
0,
self.config['length']['upload'],
shutdown_event=self._shutdown_event
)
if pre_allocate:
data.pre_allocate()
headers = {'Content-length': size}
requests.append(
(
build_request(self.best['url'], data, secure=self._secure,
headers=headers),
size
)
)
def producer(q, requests, request_count):
for i, request in enumerate(requests[:request_count]):
thread = HTTPUploader(
i,
request[0],
start,
request[1],
self.config['length']['upload'],
opener=self._opener,
shutdown_event=self._shutdown_event
)
thread.start()
q.put(thread, True)
callback(i, request_count, start=True)
finished = []
def consumer(q, request_count):
while len(finished) < request_count:
thread = q.get(True)
while thread.isAlive():
thread.join(timeout=0.1)
finished.append(thread.result)
callback(thread.i, request_count, end=True)
q = Queue(threads or self.config['threads']['upload'])
prod_thread = threading.Thread(target=producer,
args=(q, requests, request_count))
cons_thread = threading.Thread(target=consumer,
args=(q, request_count))
start = timeit.default_timer()
prod_thread.start()
cons_thread.start()
while prod_thread.isAlive():
prod_thread.join(timeout=0.1)
while cons_thread.isAlive():
cons_thread.join(timeout=0.1)
stop = timeit.default_timer()
self.results.bytes_sent = sum(finished)
self.results.upload = (
(self.results.bytes_sent / (stop - start)) * 8.0
)
return self.results.upload
def ctrl_c(shutdown_event):
"""Catch Ctrl-C key sequence and set a SHUTDOWN_EVENT for our threaded
operations
"""
def inner(signum, frame):
shutdown_event.set()
printer('\nCancelling...', error=True)
sys.exit(0)
return inner
def version():
"""Print the version"""
printer('speedtest-cli %s' % __version__)
printer('Python %s' % sys.version.replace('\n', ''))
sys.exit(0)
def csv_header(delimiter=','):
"""Print the CSV Headers"""
printer(SpeedtestResults.csv_header(delimiter=delimiter))
sys.exit(0)
def parse_args():
"""Function to handle building and parsing of command line arguments"""
description = (
'Command line interface for testing internet bandwidth using '
'speedtest.net.\n'
'------------------------------------------------------------'
'--------------\n'
'https://github.com/sivel/speedtest-cli')
parser = ArgParser(description=description)
# Give optparse.OptionParser an `add_argument` method for
# compatibility with argparse.ArgumentParser
try:
parser.add_argument = parser.add_option
except AttributeError:
pass
parser.add_argument('--no-download', dest='download', default=True,
action='store_const', const=False,
help='Do not perform download test')
parser.add_argument('--no-upload', dest='upload', default=True,
action='store_const', const=False,
help='Do not perform upload test')
parser.add_argument('--single', default=False, action='store_true',
help='Only use a single connection instead of '
'multiple. This simulates a typical file '
'transfer.')
parser.add_argument('--bytes', dest='units', action='store_const',
const=('byte', 8), default=('bit', 1),
help='Display values in bytes instead of bits. Does '
'not affect the image generated by --share, nor '
'output from --json or --csv')
parser.add_argument('--share', action='store_true',
help='Generate and provide a URL to the speedtest.net '
'share results image, not displayed with --csv')
parser.add_argument('--simple', action='store_true', default=False,
help='Suppress verbose output, only show basic '
'information')
parser.add_argument('--csv', action='store_true', default=False,
help='Suppress verbose output, only show basic '
'information in CSV format. Speeds listed in '
'bit/s and not affected by --bytes')
parser.add_argument('--csv-delimiter', default=',', type=PARSER_TYPE_STR,
help='Single character delimiter to use in CSV '
'output. Default ","')
parser.add_argument('--csv-header', action='store_true', default=False,
help='Print CSV headers')
parser.add_argument('--json', action='store_true', default=False,
help='Suppress verbose output, only show basic '
'information in JSON format. Speeds listed in '
'bit/s and not affected by --bytes')
parser.add_argument('--list', action='store_true',
help='Display a list of speedtest.net servers '
'sorted by distance')
parser.add_argument('--server', type=PARSER_TYPE_INT, action='append',
help='Specify a server ID to test against. Can be '
'supplied multiple times')
parser.add_argument('--exclude', type=PARSER_TYPE_INT, action='append',
help='Exclude a server from selection. Can be '
'supplied multiple times')
parser.add_argument('--mini', help='URL of the Speedtest Mini server')
parser.add_argument('--source', help='Source IP address to bind to')
parser.add_argument('--timeout', default=10, type=PARSER_TYPE_FLOAT,
help='HTTP timeout in seconds. Default 10')
parser.add_argument('--secure', action='store_true',
help='Use HTTPS instead of HTTP when communicating '
'with speedtest.net operated servers')
parser.add_argument('--no-pre-allocate', dest='pre_allocate',
action='store_const', default=True, const=False,
help='Do not pre-allocate upload data. Pre-allocation '
'is enabled by default to improve upload '
'performance. To support systems with '
'insufficient memory, use this option to avoid a '
'MemoryError')
parser.add_argument('--version', action='store_true',
help='Show the version number and exit')
parser.add_argument('--debug', action='store_true',
help=ARG_SUPPRESS, default=ARG_SUPPRESS)
options = parser.parse_args(args=[])  # empty list: ignore the notebook kernel's argv
if isinstance(options, tuple):
args = options[0]
else:
args = options
return args
def validate_optional_args(args):
"""Check if an argument was provided that depends on a module that may
not be part of the Python standard library.
If such an argument is supplied, and the module does not exist, exit
with an error stating which module is missing.
"""
optional_args = {
'json': ('json/simplejson python module', json),
'secure': ('SSL support', HTTPSConnection),
}
for arg, info in optional_args.items():
if getattr(args, arg, False) and info[1] is None:
raise SystemExit('%s is not installed. --%s is '
'unavailable' % (info[0], arg))
def printer(string, quiet=False, debug=False, error=False, **kwargs):
"""Helper function print a string with various features"""
if debug and not DEBUG:
return
if debug:
if sys.stdout.isatty():
out = '\033[1;30mDEBUG: %s\033[0m' % string
else:
out = 'DEBUG: %s' % string
else:
out = string
if error:
kwargs['file'] = sys.stderr
if not quiet:
print_(out, **kwargs)
def shell():
"""Run the full speedtest.net test"""
global DEBUG
shutdown_event = threading.Event()
signal.signal(signal.SIGINT, ctrl_c(shutdown_event))
args = parse_args()
# Print the version and exit
if args.version:
version()
if not args.download and not args.upload:
raise SpeedtestCLIError('Cannot supply both --no-download and '
'--no-upload')
if len(args.csv_delimiter) != 1:
raise SpeedtestCLIError('--csv-delimiter must be a single character')
if args.csv_header:
csv_header(args.csv_delimiter)
validate_optional_args(args)
debug = getattr(args, 'debug', False)
if debug == 'SUPPRESSHELP':
debug = False
if debug:
DEBUG = True
if args.simple or args.csv or args.json:
quiet = True
else:
quiet = False
if args.csv or args.json:
machine_format = True
else:
machine_format = False
# Don't set a callback if we are running quietly
if quiet or debug:
callback = do_nothing
else:
callback = print_dots(shutdown_event)
printer('Retrieving speedtest.net configuration...', quiet)
try:
speedtest = Speedtest(
source_address=args.source,
timeout=args.timeout,
secure=args.secure
)
except (ConfigRetrievalError,) + HTTP_ERRORS:
printer('Cannot retrieve speedtest configuration', error=True)
raise SpeedtestCLIError(get_exception())
if args.list:
try:
speedtest.get_servers()
except (ServersRetrievalError,) + HTTP_ERRORS:
printer('Cannot retrieve speedtest server list', error=True)
raise SpeedtestCLIError(get_exception())
for _, servers in sorted(speedtest.servers.items()):
for server in servers:
line = ('%(id)5s) %(sponsor)s (%(name)s, %(country)s) '
'[%(d)0.2f km]' % server)
try:
printer(line)
except IOError:
e = get_exception()
if e.errno != errno.EPIPE:
raise
sys.exit(0)
printer('Testing from %(isp)s (%(ip)s)...' % speedtest.config['client'],
quiet)
if not args.mini:
printer('Retrieving speedtest.net server list...', quiet)
try:
speedtest.get_servers(servers=args.server, exclude=args.exclude)
except NoMatchedServers:
raise SpeedtestCLIError(
'No matched servers: %s' %
', '.join('%s' % s for s in args.server)
)
except (ServersRetrievalError,) + HTTP_ERRORS:
printer('Cannot retrieve speedtest server list', error=True)
raise SpeedtestCLIError(get_exception())
except InvalidServerIDType:
raise SpeedtestCLIError(
'%s is an invalid server type, must '
'be an int' % ', '.join('%s' % s for s in args.server)
)
if args.server and len(args.server) == 1:
printer('Retrieving information for the selected server...', quiet)
else:
printer('Selecting best server based on ping...', quiet)
speedtest.get_best_server()
elif args.mini:
speedtest.get_best_server(speedtest.set_mini_server(args.mini))
results = speedtest.results
printer('Hosted by %(sponsor)s (%(name)s) [%(d)0.2f km]: '
'%(latency)s ms' % results.server, quiet)
if args.download:
printer('Testing download speed', quiet,
end=('', '\n')[bool(debug)])
speedtest.download(
callback=callback,
threads=(None, 1)[args.single]
)
printer('Download: %0.2f M%s/s' %
((results.download / 1000.0 / 1000.0) / args.units[1],
args.units[0]),
quiet)
else:
printer('Skipping download test', quiet)
if args.upload:
printer('Testing upload speed', quiet,
end=('', '\n')[bool(debug)])
speedtest.upload(
callback=callback,
pre_allocate=args.pre_allocate,
threads=(None, 1)[args.single]
)
printer('Upload: %0.2f M%s/s' %
((results.upload / 1000.0 / 1000.0) / args.units[1],
args.units[0]),
quiet)
else:
printer('Skipping upload test', quiet)
printer('Results:\n%r' % results.dict(), debug=True)
if not args.simple and args.share:
results.share()
if args.simple:
printer('Ping: %s ms\nDownload: %0.2f M%s/s\nUpload: %0.2f M%s/s' %
(results.ping,
(results.download / 1000.0 / 1000.0) / args.units[1],
args.units[0],
(results.upload / 1000.0 / 1000.0) / args.units[1],
args.units[0]))
elif args.csv:
printer(results.csv(delimiter=args.csv_delimiter))
elif args.json:
printer(results.json())
if args.share and not machine_format:
printer('Share results: %s' % results.share())
def main():
try:
shell()
except KeyboardInterrupt:
printer('\nCancelling...', error=True)
except (SpeedtestException, SystemExit):
e = get_exception()
# Ignore a successful exit, or argparse exit
if getattr(e, 'code', 1) not in (0, 2):
msg = '%s' % e
if not msg:
msg = '%r' % e
raise SystemExit('ERROR: %s' % msg)
if __name__ == '__main__':
main()
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib as mpl
import numpy as np
np.set_printoptions(threshold=np.inf)
import time
import random
import pylab
import time, sys
from IPython.display import clear_output
from random import randrange
from hrr import *
seed = np.random.randint(100000, size=1)[0]  # unused below; the fixed seeds take precedence
random.seed(12)
np.random.seed(12)
live_graph = False
# Number of training cycles
episodes = 50000
# Hrr parameters
hrr_length = 15000
normalized = True
# How many steps to take before quitting
steps_till_quit = 300
signals = ["R"]
goals = [[0, 5], [10], [15]]
# Maze parameters
size_of_maze = 20
non_obs_task_switch_rate = 1000
num_non_obs_tasks = len(goals)
num_obs_tasks = len(signals)
# Arguments for neural network
input_size = hrr_length
output_size = 1
discount = 0.9
alpha = 0.1
# Reward for temporal difference learning
reward_bad = -1
reward_good = 0
num_of_atrs = 1
atr_alpha = 0.001
atr_values = (np.ones(num_of_atrs) * reward_good).tolist()
atr_threshold = -1.5
threshold_vals = []
# Exploration rate
e_soft = 0.00001
rand_on = 1
# Threshold for non observable task switching
# threshold = 0.3
threshold = -1 * reward_good
threshold_alpha = 0.0001
# Eligibility trace
eligibility = np.zeros(hrr_length)
# Eligibility trace rate
eli_lambda = 0.01
# Neural network
weights = hrr(hrr_length, normalized)
bias = 1
percent_check = 9
non_obs = 0
atr = 0
current_atr = atr
current_wm = "I"
debug = False
debug2 = False
step_store = []
pos_err_store = []
neg_err_store = []
total_error = []
total_goal_error = []
switch_error = []
norm_error = []
ltm = LTM(hrr_length, normalized)
if live_graph:
mpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=["r", "g", "b", "y"])
def update_progress(progress, episode):
bar_length = 50
if isinstance(progress, int):
progress = float(progress)
if not isinstance(progress, float):
progress = 0
if progress < 0:
progress = 0
if progress >= 1:
progress = 1
block = int(round(bar_length * progress))
clear_output(wait = True)
text = "Episode {0}, Progress: [{1}] {2:.1f}%".format(episode, "=" * block + "." * (bar_length - block), progress * 100)
print(text)
def get_moves(state, size_of_maze):
if(state == 0):
return size_of_maze - 1, 1
elif(state == size_of_maze - 1):
return size_of_maze - 2, 0
else:
return state - 1, state + 1
def build_hrr_string(wm, signal, state, atr):
if wm == "I" and signal == "I":
return "State:" + str(state) + "*" + "Atr:" + str(atr)
elif wm == "I":
return "Signal:" + str(signal) + "*" + "State:" + str(state) + "*" + "Atr:" + str(atr)
elif signal == "I":
return "WM:" + str(wm) + "*" + "State:" + str(state) + "*" + "Atr:" + str(atr)
else:
return "WM:" + str(wm) + "*" + "Signal:" + str(signal) + "*" + "State:" + str(state) + "*" + "Atr:" + str(atr)
def context_policy_negative(atr):
return (atr + 1)%num_of_atrs
def context_policy_positive(wm, signal, state, atr):
val = -9999
temp = -9999
for atr in range(0, num_of_atrs):
encode_str = build_hrr_string(wm, signal, state, atr)
temp = np.dot(weights, ltm.encode(encode_str)) + bias
if temp > val:
val = temp
s_atr = atr
return s_atr
def move_policy(goal, moves, wms, signals, atr, rand_on):
val = -9999
temp = -9999
for move in moves:
for wm in list(dict.fromkeys(wms + ["I"])):
for signal in list(dict.fromkeys(signals + ["I"])):
if move == goal:
encode_str = build_hrr_string(wm, signal, str(move) + "*rewardTkn", atr)
else:
encode_str = build_hrr_string(wm, signal, str(move), atr)
if (debug):
print(encode_str)
temp = np.dot(weights, ltm.encode(encode_str)) + bias
if debug:
if signal != "I":
print("Move: {0}, WM: {1}, Signal: {2}In, Atr: {3}, Value: {4}".format(move, wm, signal, atr, temp))
else:
print("Move: {0}, WM: {1}, Signal: {2}, Atr: {3}, Value: {4}".format(move, wm, signal, atr, temp))
if temp > val:
val = temp
s_move = move
if signal != "I":
s_wm = signal + "In"
else:
s_wm = wm
# Random move
if(np.random.random_sample() < e_soft) and rand_on:
if(debug):
print("RANDOM MOVE")
return (np.random.choice(moves), wm, atr, 1)
return (s_move, s_wm, atr, 0)
def get_opt_steps(start, goal, size_of_maze):
opt = abs(goal - start)
if opt > size_of_maze / 2:
opt = size_of_maze - opt
return opt
def logmod(x):
return np.sign(x)*np.log(abs(x)+1)
def animate(i):
for x in range(num_non_obs_tasks):
x_ind = x
y_for_no_rwd = 0
for wm in list(dict.fromkeys([signal + "In" if signal != "I" else signal for signal in signals] + ["I"])):
position = np.arange(size_of_maze)
value = np.zeros(size_of_maze)
for signal in list(dict.fromkeys(signals + ["I"])):
for state in range(size_of_maze):
encode_str = build_hrr_string(wm, signal, str(state), x)
value[state] = np.dot(weights, ltm.encode(encode_str)) + bias
axes[x_ind,y_for_no_rwd].clear()  # clear() must be called; a bare .clear is a no-op
axes[x_ind,y_for_no_rwd].plot(position, value)
y_for_no_rwd += 1
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
if live_graph:
fig, axes = plt.subplots(nrows=num_non_obs_tasks, ncols=num_obs_tasks+1)
for x in range(num_non_obs_tasks):
x_ind = x
y_for_no_rwd = 0
for wm in list(dict.fromkeys([signal + "In" if signal != "I" else signal for signal in signals] + ["I"])):
position = np.arange(size_of_maze)
value = np.zeros(size_of_maze)
for signal in list(dict.fromkeys(signals + ["I"])):
lab = "WM:" + wm + "*Signal:" + signal + "*Atr:" + str(x)
for state in range(size_of_maze):
encode_str = build_hrr_string(wm, signal, str(state), x)
value[state] = np.dot(weights, ltm.encode(encode_str)) + bias
axes[x_ind,y_for_no_rwd].title.set_text(wm + " Atr: " + str(x))
axes[x_ind,y_for_no_rwd].plot(position, value,label=lab)
axes[x_ind,y_for_no_rwd].legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
fancybox=True, shadow=True, ncol=1, prop={'size': 10})
y_for_no_rwd += 1
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
ani = animation.FuncAnimation(fig, animate, interval=60000)
plt.show()
plt.suptitle("{0} Non-Observable tasks and {1} Observable tasks with goals: {2}".format(num_non_obs_tasks, num_obs_tasks, goals), fontsize=30)
t0 = time.time()
for x in range(episodes):
current_state = random.randint(0, size_of_maze - 1)
start = current_state
current_signal = np.random.choice(signals)
if x%non_obs_task_switch_rate == 0:
non_obs = (non_obs+1)%len(goals)
if num_obs_tasks == 1:
goal = goals[non_obs][0]
else:
goal = goals[non_obs][signals.index(current_signal)]
steps = 0
opt_steps = get_opt_steps(current_state, goal, size_of_maze)
# Reset trace
eligibility *= 0.0
if debug2 == False and x > ((episodes*percent_check) / 10):
debug2 = True
rand_on = 0
alpha = 0.01
threshold_alpha = 0
if debug:
print("Goal: {0}, Signal: {1}, Non_Observable: {2}".format(goal, current_signal, non_obs))
episode_memory = []
for y in range(steps_till_quit):
threshold_vals += [threshold]
if (current_state == goal):
encode_str = build_hrr_string(current_wm, current_signal, str(current_state) + "*rewardTkn", current_atr)
goal_hrr = ltm.encode(encode_str)
goal_value = np.dot(weights, goal_hrr) + bias
episode_memory += [[current_state, goal_value, goal]]
error = reward_good - goal_value
eligibility *= eli_lambda
eligibility = eligibility + goal_hrr
weights = np.add(weights, (alpha * logmod(error) * eligibility))
threshold += threshold_alpha * logmod(error)
atr_values[atr] += atr_alpha * logmod(error)
total_goal_error += [error]
if(debug):
print("In goal with value {0}".format(goal_value))
break
# Store info about previous state
previous_wm = current_wm
previous_signal = current_signal
previous_state = current_state
previous_atr = current_atr
if debug:
print("Previous WM:, {0}, Signal:, {1}, State, {2}, ATR:, {3}".format(previous_wm, previous_signal, previous_state, previous_atr))
encode_str = build_hrr_string(previous_wm, previous_signal, previous_state, previous_atr)
previous_state_hrr = ltm.encode(encode_str)
previous_value = np.dot(weights, previous_state_hrr) + bias
if debug:
print("Started with state: {0}, State Value: {1}, WM: {2}, Atr: {3}".format(previous_state, previous_value, previous_wm, previous_atr))
current_signal = "I"
left, right = get_moves(previous_state, size_of_maze)
if previous_signal != "I":
previous_signal += "In"
# Make the move
move, wm, atr, random_move = move_policy(goal, [left, right], [previous_wm, previous_signal], [current_signal], previous_atr, rand_on)
steps += 1
current_wm = wm
current_state = move
current_atr = atr
if random_move:
eligibility *= 0.0
if(debug):
print("Moves {0}, taken {1}".format([left, right], move))
if debug:
print("Current WM {0}, Current Signal {1}, Current state {2}, Current ATR {3}".format(current_wm, current_signal, current_state, current_atr))
if current_state == goal:
encode_str = build_hrr_string(current_wm, current_signal, str(current_state) + "*rewardTkn", current_atr)
if debug:
print("In goal: WM: {0}, ATR: {1}".format(current_wm, current_atr))
else:
encode_str = build_hrr_string(current_wm, current_signal, current_state, current_atr)
current_state_hrr = ltm.encode(encode_str)
current_value = np.dot(weights, current_state_hrr) + bias
sarsa_error = (reward_bad + discount * current_value) - previous_value
eligibility *= eli_lambda
eligibility = eligibility + previous_state_hrr
weights = np.add(weights, (alpha * logmod(sarsa_error) * eligibility))
total_error += [sarsa_error]
norm_error += [sarsa_error]
threshold += threshold_alpha * logmod(sarsa_error)
atr_values[atr] += atr_alpha * logmod(sarsa_error)
if sarsa_error > threshold and num_non_obs_tasks > 0:
if np.mean(atr_values) < atr_threshold:
num_of_atrs += 1
atr_values = [1 * reward_good] * num_of_atrs
threshold = -1 * reward_good
ltm.clear()
switch_error += [sarsa_error]
if debug2:
pos_err_store += [sarsa_error]
current_atr = context_policy_positive(current_wm, current_signal, current_state, current_atr)
eligibility *= 0.0
steps = 0
start = current_state
opt_steps = get_opt_steps(current_state, goal, size_of_maze)
if(debug):
print("Changed atr from {0} to {1}".format(previous_atr, current_atr))
if sarsa_error < -threshold and num_non_obs_tasks > 0:
if np.mean(atr_values) < atr_threshold:
num_of_atrs += 1
atr_values = [1 * reward_good] * num_of_atrs
threshold = -1 * reward_good
ltm.clear()
switch_error += [sarsa_error]
if debug2:
neg_err_store += [sarsa_error]
current_atr = context_policy_negative(previous_atr)
eligibility *= 0.0
steps = 0
start = current_state
opt_steps = get_opt_steps(current_state, goal, size_of_maze)
if(debug):
print("Changed atr from {0} to {1}".format(previous_atr, current_atr))
if debug:
input("")
if debug2:
if current_state == goal:
step_store += [steps - opt_steps]
else:
step_store += [steps]
update_progress(x / episodes, x)
if live_graph:
plt.pause(0.001)
update_progress(1, episodes)
plt.show()
t1 = time.time()
print(t1-t0)
plt.plot(step_store)
plt.show()
accuracy = (len(step_store)-np.count_nonzero(step_store))*100.0 / len(step_store)
print(accuracy)
fig, axes = plt.subplots(nrows=num_of_atrs, ncols=num_obs_tasks+1)
fig.set_figwidth(15)
fig.set_figheight(15)
plt.rcParams.update({'font.size': 14})
for x in range(num_of_atrs):
x_ind = x
y_for_rwd = 0
y_for_no_rwd = 0
for wm in list(dict.fromkeys([signal + "In" if signal != "I" else signal for signal in signals] + ["I"])):
position = np.arange(size_of_maze)
value = np.zeros(size_of_maze)
for signal in signals + ["I"]:
lab = "WM:" + wm + "*Signal:" + signal + "*rewardTkn*Atr:" + str(x)
for state in range(size_of_maze):
encode_str = build_hrr_string(wm, signal, str(state) + "*rewardTkn", x)
value[state] = np.dot(weights, ltm.encode(encode_str)) + bias
axes[x_ind,y_for_rwd].title.set_text(wm + " with rewardTkn " + "Atr: " + str(x))
axes[x_ind,y_for_rwd].plot(position, value, label=lab)
axes[x_ind,y_for_rwd].tick_params(direction='out', length=6, width=2,
grid_color='r', grid_alpha=0.5)
axes[x_ind,y_for_rwd].legend(loc='upper center', bbox_to_anchor=(0.5, -0.1),
fancybox=True, shadow=True, ncol=1, prop={'size': 10})
y_for_rwd += 1
y = x + 1
value = np.zeros(size_of_maze)
for signal in list(dict.fromkeys(signals + ["I"])):
lab = "WM:" + wm + "*Signal:" + signal + "*Atr:" + str(x)
for state in range(size_of_maze):
encode_str = build_hrr_string(wm, signal, str(state), x)
value[state] = np.dot(weights, ltm.encode(encode_str)) + bias
axes[x_ind,y_for_no_rwd].title.set_text(wm + " Atr: " + str(x))
axes[x_ind,y_for_no_rwd].plot(position, value, label=lab)
axes[x_ind,y_for_no_rwd].tick_params(direction='out', length=6, width=2,
grid_color='r', grid_alpha=0.5)
axes[x_ind,y_for_no_rwd].legend(loc='upper center', bbox_to_anchor=(0.5, -0.1),
fancybox=True, shadow=True, ncol=1, prop={'size': 10})
y_for_no_rwd += 1
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.savefig('combined.png', dpi=500)
plt.show()
plt.plot(pos_err_store)
plt.plot(neg_err_store)
ltm.count()
# plt.savefig("{0}5.png".format(accuracy))
plt.plot(total_error)
plt.plot(total_goal_error)
plt.plot(switch_error)
plt.plot(norm_error)
threshold
atr_values
plt.plot(threshold_vals)
```
```
"""
Please run notebook locally (if you have all the dependencies and a GPU).
Technically you can run this notebook on Google Colab but you need to set up microphone for Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
5. Set up microphone for Colab
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg portaudio19-dev
!pip install unidecode
!pip install pyaudio
# ## Install NeMo
!python -m pip install --upgrade git+https://github.com/NVIDIA/NeMo.git#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
```
This notebook demonstrates VAD (Voice Activity Detection) on a microphone stream in NeMo.
It is **not a recommended** way to do inference in production workflows. If you are interested in
production-level inference using NeMo ASR models, please sign up for the Jarvis early access program: https://developer.nvidia.com/nvidia-jarvis
The notebook requires the PyAudio library to capture a signal from an audio device.
For Ubuntu, run the following commands to install it:
```
sudo apt-get install -y portaudio19-dev
pip install pyaudio
```
This notebook requires the `torchaudio` library to be installed for MatchboxNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audio#installation) to install the appropriate version of torchaudio.
If you would like to install the latest version, please run the following command to install it:
```
conda install -c pytorch torchaudio
```
```
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
```
## Restore the model from NGC
```
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('MatchboxNet-VAD-3x2')
```
## Observing the config of the model
```
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
```
## Setup preprocessor with these settings
```
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
```
## Setting up data for Streaming Inference
```
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
class FrameVAD:
def __init__(self, model_definition,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, preds = torch.max(probs, dim=-1)
s = [preds.item(), str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
```
# Streaming Inference
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the cells below.
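To make this concrete, the buffer arithmetic that `FrameVAD` performs internally can be sketched in plain Python. The values below match the defaults of `offline_inference` further down; this is an illustration only, not part of the pipeline:

```python
# Illustrative sketch of how STEP and WINDOW_SIZE become sample counts.
SAMPLE_RATE = 16000   # Hz
STEP = 0.025          # seconds of new audio consumed per inference call
WINDOW_SIZE = 0.5     # seconds of context the model actually sees

frame_len = STEP
frame_overlap = (WINDOW_SIZE - frame_len) / 2   # context on each side of the frame

n_frame_len = int(frame_len * SAMPLE_RATE)          # samples per step
n_frame_overlap = int(frame_overlap * SAMPLE_RATE)  # one-sided context in samples
buffer_size = 2 * n_frame_overlap + n_frame_len     # total buffer = WINDOW_SIZE seconds

print(n_frame_len, n_frame_overlap, buffer_size)  # 400 3800 8000
```

A smaller STEP gives lower latency but more frequent model calls; a larger WINDOW_SIZE gives the model more context per decision at the cost of memory and smoothing.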
## Offline inference
```
STEP_LIST = [0.01,0.01,]
WINDOW_SIZE_LIST = [0.31,0.15,]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor.params,
'JasperEncoder': cfg.encoder.params,
'labels': cfg.labels
},
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
```
### Here we show an example of offline streaming inference
You can use your own file or download the provided demo audio file.
```
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(audio)
print(dur)
ipd.Audio(audio, rate=sample_rate)
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
```
Let's plot the prediction and melspectrogram
```
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(results[i][2]) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(results[i][4]) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
```
## Online inference through microphone
```
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor.params,
'JasperEncoder': cfg.encoder.params,
'labels': cfg.labels
},
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
```
# Deep Learning Beyond the Basics
## Classifying images with CNN
```
# Make sure to have the dataset in place, downloaded from:
# http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset
#
# On linux systems:
#! wget -q http://benchmark.ini.rub.de/Dataset/GTSRB_Final_Training_Images.zip
#! unzip -q GTSRB_Final_Training_Images.zip
import keras
N_CLASSES = 43
RESIZED_IMAGE = (32, 32)
import matplotlib.pyplot as plt
import glob
from skimage.color import rgb2lab
from skimage.transform import resize
from collections import namedtuple
import numpy as np
np.random.seed(101)
%matplotlib inline
Dataset = namedtuple('Dataset', ['X', 'y'])
def to_tf_format(imgs):
return np.stack([img[:, :, np.newaxis] for img in imgs], axis=0).astype(np.float32)
def read_dataset_ppm(rootpath, n_labels, resize_to):
images = []
labels = []
for c in range(n_labels):
full_path = rootpath + '/' + format(c, '05d') + '/'
for img_name in glob.glob(full_path + "*.ppm"):
img = plt.imread(img_name).astype(np.float32)
img = rgb2lab(img / 255.0)[:,:,0]
if resize_to:
img = resize(img, resize_to, mode='reflect', anti_aliasing=True)
label = np.zeros((n_labels, ), dtype=np.float32)
label[c] = 1.0
images.append(img.astype(np.float32))
labels.append(label)
return Dataset(X = to_tf_format(images).astype(np.float32),
y = np.matrix(labels).astype(np.float32))
dataset = read_dataset_ppm('GTSRB/Final_Training/Images', N_CLASSES, RESIZED_IMAGE)
print(dataset.X.shape)
print(dataset.y.shape)
plt.imshow(dataset.X[0, :, :, :].reshape(RESIZED_IMAGE))
print("Label:", dataset.y[0, :])
plt.imshow(dataset.X[1000, :, :, :].reshape(RESIZED_IMAGE))
print("Label:", dataset.y[1000, :])
from sklearn.model_selection import train_test_split
idx_train, idx_test = train_test_split(range(dataset.X.shape[0]), test_size=0.25, random_state=101)
X_train = dataset.X[idx_train, :, :, :]
X_test = dataset.X[idx_test, :, :, :]
y_train = dataset.y[idx_train, :]
y_test = dataset.y[idx_test, :]
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
from keras.models import Sequential
from keras.layers.core import Dense, Flatten
from keras.layers.convolutional import Conv2D
from keras.optimizers import SGD
from keras import backend as K
K.set_image_data_format('channels_last')
def cnn_model_1():
model = Sequential()
model.add(Conv2D(32, (3, 3),
padding='same',
input_shape=(RESIZED_IMAGE[0], RESIZED_IMAGE[1], 1),
activation='relu'))
model.add(Flatten())
model.add(Dense(N_CLASSES, activation='softmax'))
return model
cnn = cnn_model_1()
cnn.compile(loss='categorical_crossentropy',
optimizer=SGD(lr=0.001, decay=1e-6),
metrics=['accuracy'])
cnn.fit(X_train,
y_train,
batch_size=32,
epochs=10,
validation_data=(X_test, y_test))
from sklearn.metrics import classification_report, confusion_matrix
def test_and_plot(model, X, y):
y_pred = model.predict(X)  # use the model argument, not the global cnn
y_pred_softmax = np.argmax(y_pred, axis=1).astype(np.int32)
y_test_softmax = np.argmax(y, axis=1).astype(np.int32).A1
print(classification_report(y_test_softmax, y_pred_softmax))
cm = confusion_matrix(y_test_softmax, y_pred_softmax)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.colorbar()
plt.tight_layout()
plt.show()
# And the log2 version, to emphasize the misclassifications
plt.imshow(np.log2(cm + 1), interpolation='nearest', cmap=plt.get_cmap("tab20"))
plt.colorbar()
plt.tight_layout()
plt.show()
test_and_plot(cnn, X_test, y_test)
from keras.layers.core import Dropout
from keras.layers.pooling import MaxPooling2D
from keras.optimizers import Adam
from keras.layers import BatchNormalization
def cnn_model_2():
model = Sequential()
model.add(Conv2D(32, (3, 3),
padding='same',
input_shape=(RESIZED_IMAGE[0], RESIZED_IMAGE[1], 1),
activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(32, (3, 3),
padding='same',
input_shape=(RESIZED_IMAGE[0], RESIZED_IMAGE[1], 1),
activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(N_CLASSES, activation='softmax'))
return model
cnn = cnn_model_2()
cnn.compile(loss='categorical_crossentropy',
optimizer=Adam(lr=0.001, decay=1e-6),
metrics=['accuracy'])
cnn.fit(X_train,
y_train,
batch_size=32,
epochs=10,
validation_data=(X_test, y_test))
test_and_plot(cnn, X_test, y_test)
```
## Using pre-trained models
```
# Make sure to have the dataset in place, downloaded from:
# http://www.vision.caltech.edu/Image_Datasets/Caltech101/
#
# On linux systems:
#! wget -q http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz
#! tar -xf 101_ObjectCategories.tar.gz
import keras
from keras.applications.inception_v3 import InceptionV3,\
preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
model = InceptionV3(weights='imagenet')
def predict_top_3(model, img_path):
img = image.load_img(img_path, target_size=(299, 299))
plt.imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print('Predicted:', decode_predictions(preds, top=3)[0])
predict_top_3(model, "101_ObjectCategories/umbrella/image_0001.jpg")
predict_top_3(model, "101_ObjectCategories/bonsai/image_0001.jpg")
print([l.name for l in model.layers])
from keras.models import Model
feat_model = Model(inputs=model.input, outputs=model.get_layer('avg_pool').output)
def extract_features(feat_model, img_path):
img = image.load_img(img_path, target_size=(299, 299))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return feat_model.predict(x)
f = extract_features(feat_model, "101_ObjectCategories/bonsai/image_0001.jpg")
print(f.shape)
print(f)
```
## Working with temporal sequences
```
import keras
from keras.datasets import imdb
(data_train, y_train), (data_test, y_test) = imdb.load_data(num_words=25000)
print(data_train.shape)
print(data_train[0])
print(len(data_train[0]))
from keras.preprocessing.sequence import pad_sequences
X_train = pad_sequences(data_train, maxlen=100)
X_test = pad_sequences(data_test, maxlen=100)
print(X_train[0])
print(X_train[0].shape)
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
model = Sequential()
model.add(Embedding(25000, 256, input_length=100))
model.add(LSTM(256, dropout=0.4, recurrent_dropout=0.4))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=Adam(),
metrics=['accuracy'])
model.fit(X_train,
y_train,
batch_size=64,
epochs=10,
validation_data=(X_test, y_test))
```
## Tutorial 20. Sentiment analysis
Created by Emanuel Flores-Bautista, 2019. All content contained in this notebook is licensed under a [Creative Commons BY-NC 4.0 License](https://creativecommons.org/licenses/by-nc/4.0/). The code is licensed under an [MIT license](https://opensource.org/licenses/MIT).
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report,accuracy_score,confusion_matrix
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from keras.datasets import imdb
import TCD19_utils as TCD
TCD.set_plotting_style_2()
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
```
We will train a classifier for movie reviews from the IMDB dataset.
```
import tensorflow as tf
#from tensorflow import keras as tf.keras
#import numpy as np
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
# call load_data with allow_pickle implicitly set to true
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=5000)
# restore np.load for future normal usage
np.load = np_load_old
```

```
from keras.datasets import imdb

vocabulary_size = 5000  # cap the vocabulary at the 5,000 most frequent words

(x_train, y_train), (x_test, y_test) = imdb.load_data(path="imdb.npz",
                                                      num_words=vocabulary_size,
                                                      skip_top=0,
                                                      seed=113,
                                                      start_char=1,
                                                      oov_char=2,
                                                      index_from=3)

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocabulary_size)
#print('Loaded dataset with {} training samples,{} test samples'.format(len(X_train), len(X_test)))
```

```
len(X_train[0])
print('---review---')
print(X_train[6])
print('---label---')
print(y_train[6])
```
Note that the review is stored as a sequence of integers. From the [Keras documentation](https://keras.io/datasets/) we can see that these are word IDs that have been pre-assigned to individual words, and the label is an integer (0 for negative, 1 for positive). We can go ahead and access the words from each review with the `get_word_index()` method of the `imdb` object.
```
word2id = imdb.get_word_index()
id2word = {i: word for word, i in word2id.items()}
print('---review with words---')
print([id2word.get(i, ' ') for i in X_train[6]])
print('---label---')
print(y_train[6])
```
Because we cannot feed the index matrices directly to a classifier, we need to perform some data wrangling and feature extraction. We're going to write a couple of functions in order to:
1. Get a list of reviews, consisting of full length strings.
2. Perform TF-IDF feature extraction on the review documents.
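Before writing those functions, here is a minimal sketch of the TF-IDF step on a made-up toy corpus (the documents below are illustrative, not from IMDB). The key detail is that the vectorizer is fitted on the training documents only, and test documents are passed through `transform` so both share the same vocabulary:

```
from sklearn.feature_extraction.text import TfidfVectorizer

toy_train_docs = ["the movie was great great fun", "the movie was awful and boring"]
toy_test_docs = ["a great movie"]

toy_vectorizer = TfidfVectorizer()
X_toy_train = toy_vectorizer.fit_transform(toy_train_docs)  # learns the vocabulary
X_toy_test = toy_vectorizer.transform(toy_test_docs)        # reuses it: same columns

print(X_toy_train.shape[1] == X_toy_test.shape[1])  # → True
```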
### Feature engineering
```
def get_joined_rvw(X):
    """
    Given an X_train or X_test dataset from the IMDB reviews
    of Keras, return a list of the reviews in string format.
    """
    # Get word-to-index dictionary
    word2id = imdb.get_word_index()
    # Get index-to-word mapping dictionary
    id2word = {i: word for word, i in word2id.items()}
    # Initialize reviews list
    doc_list = []
    for review in X:
        # Extract the review words
        initial_rvw = [id2word.get(i) for i in review]
        # Join the words, separated by spaces
        joined_rvw = " ".join(initial_rvw)
        # Append the review to doc_list
        doc_list.append(joined_rvw)
    return doc_list
train_rvw = get_joined_rvw(X_train)
test_rvw = get_joined_rvw(X_test)
vocabulary_size = 5000  # must match num_words used when loading the IMDB data
tf_idf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
max_features=vocabulary_size,
stop_words='english')
tf_idf_train = tf_idf_vectorizer.fit_transform(train_rvw)
tf_idf_test = tf_idf_vectorizer.transform(test_rvw)  # transform (not fit_transform) so train and test share one vocabulary
#tf_idf_feature_names = tf_idf_vectorizer.get_feature_names()
#tf_idf = np.vstack([tf_idf_train.toarray(), tf_idf_test.toarray()])
#X_new = pd.DataFrame(tf_idf, columns=tf_idf_feature_names)
X_train_new = tf_idf_train.toarray()
X_test_new = tf_idf_test.toarray()
X_test_new.shape
def get_data_from_keras_imdb():
    """
    Extract TF-IDF matrices for the Keras IMDB dataset.
    """
    vocabulary_size = 1000
    (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocabulary_size)
    X_train_docs = get_joined_rvw(X_train)
    X_test_docs = get_joined_rvw(X_test)
    tf_idf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
                                        max_features=vocabulary_size,
                                        stop_words='english')
    tf_idf_train = tf_idf_vectorizer.fit_transform(X_train_docs)
    # Use transform (not fit_transform) so train and test share one vocabulary
    tf_idf_test = tf_idf_vectorizer.transform(X_test_docs)
    X_train_new = tf_idf_train.toarray()
    X_test_new = tf_idf_test.toarray()
    return X_train_new, y_train, X_test_new, y_test
```
```
X_train, y_train, X_test, y_test = get_data_from_keras_imdb()
print('train dataset shape', X_train.shape)
print('test dataset shape', X_test.shape)
```
We can readily see that we are ready to train our classification algorithm with the TF-IDF matrices.
### ML Classification: Model bulding and testing
```
model = RandomForestClassifier(n_estimators=200, max_depth=3, random_state=42)
model.fit(X_train_new, y_train)  # train on the full TF-IDF matrix so feature counts match at predict time
y_pred = model.predict(X_test_new)
print(classification_report(y_test, y_pred))
print('Accuracy score : ', accuracy_score(y_test, y_pred))
model = MLPClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))
print('Accuracy score : ', accuracy_score(y_test, y_pred))
from sklearn.model_selection import cross_val_score
cross_val_score(model, X_train_new, y_train, cv=5)
import TCD19_utils as TCD  # assuming the same helper module imported at the top of the notebook
palette = TCD.palette(cmap = True)
C = confusion_matrix(y_test, y_pred)
c_normed = C.astype(float) / C.sum(axis=1)[:, np.newaxis]  # row-normalise (np.float is removed in modern NumPy)
sns.heatmap(c_normed, cmap = palette,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot= True, vmin = 0, vmax = 1,
cbar_kws = {'label': 'recall'})
#
plt.ylabel('True label')
plt.xlabel('Predicted label');
```
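The row normalisation above is what makes the heatmap's diagonal show recall: each row of the confusion matrix counts the true instances of one class, so dividing a row by its sum turns the diagonal entry into TP / (TP + FN). A small sketch with made-up counts:

```
import numpy as np

C_toy = np.array([[8, 2],   # 10 true-negative examples: 8 classified right, 2 wrong
                  [1, 9]])  # 10 true-positive examples: 1 classified wrong, 9 right

C_toy_normed = C_toy.astype(float) / C_toy.sum(axis=1)[:, np.newaxis]
print(C_toy_normed.diagonal())  # → [0.8 0.9], the per-class recall
```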
### Sci-kit learn pipelines
```
from sklearn.pipeline import make_pipeline
vocabulary_size = 5000  # must match num_words used when loading the IMDB data
pipe = make_pipeline(TfidfVectorizer(max_df=0.95, min_df=2,
max_features=vocabulary_size,
stop_words='english'), MLPClassifier())
pipe.fit(train_rvw, y_train)
labels = pipe.predict(test_rvw)
targets = ['negative','positive']
def predict_category(s, model=pipe):
    pred = model.predict([s])  # use the model argument rather than the global pipe
    return targets[pred[0]]
predict_category('this was a hell of a good movie')
predict_category('this was a freaking crappy time yo')
```
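The same fit/predict pattern can be checked end-to-end on a tiny made-up corpus; `MultinomialNB` stands in for the MLP here purely to keep the sketch fast and deterministic:

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["great wonderful movie loved it",
        "terrible awful boring movie",
        "loved the wonderful acting",
        "boring awful plot"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

toy_pipe = make_pipeline(CountVectorizer(), MultinomialNB())
toy_pipe.fit(docs, labels)

targets = ['negative', 'positive']
print(targets[toy_pipe.predict(["wonderful loved it"])[0]])  # → positive
```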
# Distributed data parallel BERT training with TensorFlow2 and SMDataParallel
SMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and more cheaply. SMDataParallel is a distributed data parallel training framework for TensorFlow, PyTorch, and MXNet.
This notebook example shows how to use SMDataParallel with TensorFlow (version 2.3.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a BERT model using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as the data source.
The outline of steps is as follows:
1. Stage dataset in [Amazon S3](https://aws.amazon.com/s3/). Original dataset for BERT pretraining consists of text passages from BooksCorpus (800M words) (Zhu et al. 2015) and English Wikipedia (2,500M words). Please follow original guidelines by NVidia to prepare training data in hdf5 format -
https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#getting-the-data
2. Create Amazon FSx Lustre file-system and import data into the file-system from S3
3. Build Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/)
4. Configure data input channels for SageMaker
5. Configure hyperparameters
6. Define training metrics
7. Define training job, set distribution strategy to SMDataParallel and start training
**NOTE:** With a large training dataset, we recommend using [Amazon FSx](https://aws.amazon.com/fsx/) as the input filesystem for the SageMaker training job. FSx file input to SageMaker significantly cuts down training start-up time because it avoids downloading the training data each time you start the training job (as happens with S3 input) and provides good data read throughput.
**NOTE:** This example requires SageMaker Python SDK v2.X.
## Amazon SageMaker Initialization
Initialize the notebook instance. Get the aws region, sagemaker execution role.
The IAM role arn used to give training and hosting access to your data. See the [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with the appropriate full IAM role arn string(s). As described above, since we will be using FSx, please make sure to attach `FSx Access` permission to this IAM role.
```
%%time
! python3 -m pip install --upgrade sagemaker
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
import boto3
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}')
```
## Prepare SageMaker Training Images
1. SageMaker by default uses the latest [Amazon Deep Learning Container Images (DLC)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) TensorFlow training image. In this step, we use it as a base image and install additional dependencies required for training the BERT model.
2. In the Github repository https://github.com/HerringForks/DeepLearningExamples.git we have made TensorFlow2-SMDataParallel BERT training script available for your use. This repository will be cloned in the training image for running the model training.
### Build and Push Docker Image to ECR
Run the commands below to build the Docker image and push it to ECR.
```
image = "<IMAGE_NAME>" # Example: tf2-smdataparallel-bert-sagemaker
tag = "<IMAGE_TAG>" # Example: latest
!pygmentize ./Dockerfile
!pygmentize ./build_and_push.sh
%%time
! chmod +x build_and_push.sh; bash build_and_push.sh {region} {image} {tag}
```
## Preparing FSx Input for SageMaker
1. Download and prepare your training dataset on S3.
2. Follow the steps listed here to create a FSx linked with your S3 bucket with training data - https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html. Make sure to add an endpoint to your VPC allowing S3 access.
3. Follow the steps listed here to configure your SageMaker training job to use FSx https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/
### Important Caveats
1. You need to use the same `subnet`, `vpc`, and `security group` used with FSx when launching the SageMaker notebook instance. The same configurations will be used by your SageMaker training job.
2. Make sure you set appropriate inbound/outbound rules in the `security group`. Specifically, opening up these ports is necessary for SageMaker to access the FSx filesystem in the training job. https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html
3. Make sure `SageMaker IAM Role` used to launch this SageMaker training job has access to `AmazonFSx`.
## SageMaker TensorFlow Estimator function options
In the following code block, you can update the estimator function to use a different instance type, instance count, and distribution strategy. You're also passing in the training script you reviewed in the previous cell.
**Instance types**
SMDataParallel supports model training on SageMaker with the following instance types only:
1. ml.p3.16xlarge
1. ml.p3dn.24xlarge [Recommended]
1. ml.p4d.24xlarge [Recommended]
**Instance count**
To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.
**Distribution strategy**
Note that to use DDP mode, you update the `distribution` strategy and set it to use `smdistributed dataparallel`.
### Training script
In the Github repository https://github.com/HerringForks/deep-learning-models.git we have made reference TensorFlow-SMDataParallel BERT training script available for your use. Clone the repository.
```
# Clone herring forks repository for reference implementation BERT with TensorFlow2-SMDataParallel
!rm -rf deep-learning-models
!git clone --recursive https://github.com/HerringForks/deep-learning-models.git
from sagemaker.tensorflow import TensorFlow
instance_type = "ml.p3dn.24xlarge" # Other supported instance type: ml.p3.16xlarge, ml.p4d.24xlarge
instance_count = 2 # You can use 2, 4, 8 etc.
docker_image = f"{account}.dkr.ecr.{region}.amazonaws.com/{image}:{tag}" # YOUR_ECR_IMAGE_BUILT_WITH_ABOVE_DOCKER_FILE
username = 'AWS'
subnets = ['<SUBNET_ID>'] # Should be same as Subnet used for FSx. Example: subnet-0f9XXXX
security_group_ids = ['<SECURITY_GROUP_ID>'] # Should be same as Security group used for FSx. sg-03ZZZZZZ
job_name = 'smdataparallel-bert-tf2-fsx-2p3dn' # This job name is used as prefix to the sagemaker training job. Makes it easy for your look for your training job in SageMaker Training job console.
file_system_id = '<FSX_ID>' # FSx file system ID with your training dataset. Example: 'fs-0bYYYYYY'
SM_DATA_ROOT = '/opt/ml/input/data/train'
hyperparameters={
"train_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/train/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"val_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/validation/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"log_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert/logs']),
"checkpoint_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert']),
"load_from": "scratch",
"model_type": "bert",
"model_size": "large",
"per_gpu_batch_size": 64,
"max_seq_length": 128,
"max_predictions_per_seq": 20,
"optimizer": "lamb",
"learning_rate": 0.005,
"end_learning_rate": 0.0003,
"hidden_dropout_prob": 0.1,
"attention_probs_dropout_prob": 0.1,
"gradient_accumulation_steps": 1,
"learning_rate_decay_power": 0.5,
"warmup_steps": 2812,
"total_steps": 2000,
"log_frequency": 10,
"run_name" : job_name,
"squad_frequency": 0
}
estimator = TensorFlow(entry_point='albert/run_pretraining.py',
role=role,
image_uri=docker_image,
source_dir='deep-learning-models/models/nlp',
framework_version='2.3.1',
py_version='py3',
instance_count=instance_count,
instance_type=instance_type,
sagemaker_session=sagemaker_session,
subnets=subnets,
hyperparameters=hyperparameters,
security_group_ids=security_group_ids,
debugger_hook_config=False,
# Training using SMDataParallel Distributed Training Framework
distribution={'smdistributed':{
'dataparallel':{
'enabled': True
}
}
}
)
# Configure FSx Input for your SageMaker Training job
from sagemaker.inputs import FileSystemInput
#YOUR_MOUNT_PATH_FOR_TRAINING_DATA # NOTE: '/fsx/' will be the root mount path. Example: '/fsx/albert''''
file_system_directory_path='<FSX_DIRECTORY_PATH>'
file_system_access_mode='rw'
file_system_type='FSxLustre'
train_fs = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=file_system_directory_path,
file_system_access_mode=file_system_access_mode)
data_channels = {'train': train_fs}
# Submit SageMaker training job
estimator.fit(inputs=data_channels, job_name=job_name)
```
## Prerequisites
This notebook contains examples which are expected *to be run with exactly 4 MPI processes*; not because they wouldn't work otherwise, but simply because it's what their description assumes. For this, you need to:
* Install an MPI distribution on your system, such as OpenMPI, MPICH, or Intel MPI (if not already available).
* Install some optional dependencies, including `mpi4py` and `ipyparallel`; from the root Devito directory, run
```
pip install -r requirements-optional.txt
```
* Create an `ipyparallel` MPI profile, by running our simple setup script. From the root directory, run
```
./scripts/create_ipyparallel_mpi_profile.sh
```
## Launch and connect to an ipyparallel cluster
We're finally ready to launch an ipyparallel cluster. Open a new terminal and run the following command
```
ipcluster start --profile=mpi -n 4
```
Once the engines have started successfully, we can connect to the cluster
```
import ipyparallel as ipp
c = ipp.Client(profile='mpi')
```
In this tutorial, to run commands in parallel over the engines, we will use the %px line magic.
```
%%px --group-outputs=engine
from mpi4py import MPI
print("Hi, I'm rank %d." % MPI.COMM_WORLD.rank)
```
## Overview of MPI in Devito
Distributed-memory parallelism via MPI is designed so that users can "think sequentially" for as much as possible. The few things requested to the user are:
* Like any other MPI program, run with `mpirun -np X python ...`
* Some pre- and/or post-processing may be rank-specific (e.g., we may want to plot on a given MPI rank only), even though this might be hidden away in future Devito releases, when newer support APIs will be provided.
* Parallel I/O (if and when necessary) to populate the MPI-distributed datasets in input to a Devito Operator. If a shared file system is available, there are a few simple alternatives to pick from, such as NumPy’s memory-mapped arrays.
To enable MPI, users have two options. Either export the environment variable `DEVITO_MPI=1` or, programmatically:
```
%%px
from devito import configuration
configuration['mpi'] = True
%%px
# Keep generated code as simple as possible
configuration['openmp'] = False
# Fix platform so that this notebook can be tested by py.test --nbval
configuration['platform'] = 'knl7210'
```
An `Operator` will then generate MPI code, including sends/receives for halo exchanges. Below, we introduce a running example through which we explain how domain decomposition as well as data access (read/write) and distribution work. Performance optimizations are discussed [in a later section](#Performance-optimizations).
Let's start by creating a `TimeFunction`.
```
%%px
from devito import Grid, TimeFunction, Eq, Operator
grid = Grid(shape=(4, 4))
u = TimeFunction(name="u", grid=grid, space_order=2, time_order=0)
```
Domain decomposition is performed when creating a `Grid`. Users may supply their own domain decomposition, but this is not shown in this notebook. Devito exploits the MPI Cartesian topology abstraction to logically split the `Grid` over the available MPI processes. Since `u` is defined over a decomposed `Grid`, its data get distributed too.
```
%%px --group-outputs=engine
u.data
```
Globally, `u` consists of 4x4 points -- this is what users "see". But locally, as shown above, each rank has got a 2x2 subdomain. The key point is: **for the user, the fact that `u.data` is distributed is completely abstracted away -- the perception is that of indexing into a classic NumPy array, regardless of whether MPI is enabled or not**. All sort of NumPy indexing schemes (basic, slicing, etc.) are supported. For example, we can write into a slice-generated view of our data.
```
%%px
u.data[0, 1:-1, 1:-1] = 1.
%%px --group-outputs=engine
u.data
```
The only limitation, currently, is that a data access cannot require a direct data exchange among two or more processes (e.g., the assignment `u.data[0, 0] = u.data[3, 3]` will raise an exception unless both entries belong to the same MPI rank).
We can finally write out a trivial `Operator` to try running something.
```
%%px
op = Operator(Eq(u.forward, u + 1))
summary = op.apply(time_M=0)
```
And we can now check again the (distributed) content of our `u.data`
```
%%px --group-outputs=engine
u.data
```
Everything as expected. We could also peek at the generated code, because we may be curious to see what sort of MPI calls Devito has generated...
```
%%px --targets 0
print(op)
```
Hang on. There's nothing MPI-specific here! At least apart from the header file `#include "mpi.h"`. What's going on? Well, it's simple. Devito was smart enough to realize that this trivial `Operator` doesn't even need any sort of halo exchange -- the `Eq` implements a pure "map computation" (i.e., fully parallel), so it can just let each MPI process do its job without ever synchronizing with halo exchanges. We might want to try again with a proper stencil `Eq`.
```
%%px --targets 0
op = Operator(Eq(u.forward, u.dx + 1))
print(op)
```
Uh-oh -- now the generated code looks more complicated than before, though it still is pretty much human-readable. We can spot the following routines:
* `haloupdate0` performs a blocking halo exchange, relying on three additional functions, `gather0`, `sendrecv0`, and `scatter0`;
* `gather0` copies the (generally non-contiguous) boundary data into a contiguous buffer;
* `sendrecv0` takes the buffered data and sends it to one or more neighboring processes; then it waits until all data from the neighboring processes is received;
* `scatter0` copies the received data into the proper array locations.
This is the simplest halo exchange scheme available in Devito. There are a few, and some of them apply aggressive optimizations, [as shown later on](#Performance-optimizations).
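To build intuition for what `gather0`, `sendrecv0`, and `scatter0` do, here is a pure-NumPy cartoon of a halo exchange between two neighbouring subdomains. This is not Devito's generated code, and a real run uses `MPI_Sendrecv` between ranks rather than in-process copies; it only illustrates the gather/sendrecv/scatter pattern:

```
import numpy as np

# Two "ranks", each owning two columns of a 4x4 grid plus one ghost column.
left = np.zeros((4, 3));  left[:, :2] = 1.0   # owned cols 0-1, ghost col 2
right = np.zeros((4, 3)); right[:, 1:] = 2.0  # ghost col 0, owned cols 2-3

def halo_exchange(left, right):
    # gather: copy each rank's boundary column into a contiguous buffer
    send_from_left = left[:, 1].copy()
    send_from_right = right[:, 1].copy()
    # sendrecv: in real MPI, a message exchange between neighbouring ranks
    # scatter: write the received buffer into the ghost column
    left[:, 2] = send_from_right
    right[:, 0] = send_from_left

halo_exchange(left, right)
print(left[:, 2])  # → [2. 2. 2. 2.], the neighbour's boundary values
```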
Before looking at other scenarios and performance optimizations, there is one last thing it is worth discussing -- the `data_with_halo` view.
```
%%px --group-outputs=engine
u.data_with_halo
```
This is again a global data view. The shown *with_halo* is the "true" halo surrounding the physical domain, **not** the halo used for the MPI halo exchanges (often referred to as "ghost region"). So it gets trivial for a user to initialize the "true" halo region (which is typically read by a stencil `Eq` when an `Operator` iterates in proximity of the domain boundary).
```
%%px
u.data_with_halo[:] = 1.
%%px --group-outputs=engine
u.data_with_halo
```
## MPI and SparseFunction
A `SparseFunction` represents a sparse set of points which are generically unaligned with the `Grid`. A sparse point could be anywhere within a grid, and is therefore attached some coordinates. Given a sparse point, Devito looks at its coordinates and, based on the domain decomposition, **logically** assigns it to a given MPI process; this is purely logical ownership, as in Python-land, before running an Operator, the sparse point physically lives on the MPI rank which created it. Within `op.apply`, right before jumping to C-land, the sparse points are scattered to their logical owners; upon returning to Python-land, the sparse points are gathered back to their original location.
In the following example, we attempt injection of four sparse points into the neighboring grid points via linear interpolation.
```
%%px
from devito import Function, SparseFunction
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
x, y = grid.dimensions
f = Function(name='f', grid=grid)
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
```
Let:
* O be a grid point
* x be a halo point
* A, B, C, D be the sparse points
We show the global view, that is what the user "sees".
```
O --- O --- O --- O
| A | | |
O --- O --- O --- O
| | C | B |
O --- O --- O --- O
| | D | |
O --- O --- O --- O
```
And now the local view, that is what the MPI ranks own when jumping to C-land.
```
Rank 0 Rank 1
O --- O --- x x --- O --- O
| A | | | | |
O --- O --- x x --- O --- O
| | C | | C | B |
x --- x --- x x --- x --- x
Rank 2 Rank 3
x --- x --- x x --- x --- x
| | C | | C | B |
O --- O --- x x --- O --- O
| | D | | D | |
O --- O --- x x --- O --- O
```
We observe that the sparse points along the boundary of two or more MPI ranks are _duplicated_ and thus redundantly computed over multiple processes. However, the contributions from these points to the neighboring halo points are naturally ditched, so the final result of the interpolation is as expected. Let's convince ourselves that this is the case. We assign a value of $5$ to each sparse point. Since we are using linear interpolation and all points are placed at the exact center of a grid quadrant, we expect that the contribution of each sparse point to a neighboring grid point will be $5 * 0.25 = 1.25$. Based on the global view above, we eventually expect `f` to look like as follows:
```
1.25 --- 1.25 --- 0.00 --- 0.00
| | | |
1.25 --- 2.50 --- 2.50 --- 1.25
| | | |
0.00 --- 2.50 --- 3.75 --- 1.25
| | | |
0.00 --- 1.25 --- 1.25 --- 0.00
```
Let's check this out.
```
%%px
sf.data[:] = 5.
op = Operator(sf.inject(field=f, expr=sf))
summary = op.apply()
%%px --group-outputs=engine
f.data
```
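The same result can be cross-checked without Devito using a plain NumPy sketch of bilinear injection (unit grid spacing and the weights described above are assumed; `inject_bilinear` is an illustrative helper, not a Devito API):

```
import numpy as np

def inject_bilinear(f, points, values, h=1.0):
    """Add each point value to its four surrounding grid nodes with bilinear weights."""
    for (px, py), v in zip(points, values):
        ix, iy = int(px // h), int(py // h)   # lower-left grid node
        fx, fy = px / h - ix, py / h - iy     # fractional offsets in [0, 1)
        f[ix,     iy    ] += v * (1 - fx) * (1 - fy)
        f[ix + 1, iy    ] += v * fx * (1 - fy)
        f[ix,     iy + 1] += v * (1 - fx) * fy
        f[ix + 1, iy + 1] += v * fx * fy
    return f

f_np = np.zeros((4, 4))
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
inject_bilinear(f_np, coords, [5.0] * len(coords))
print(f_np[2, 2])  # → 3.75, where three sparse points overlap
```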
## Performance optimizations
The Devito compiler applies several optimizations before generating code.
* Redundant halo exchanges are identified and removed. A halo exchange is redundant if a prior halo exchange carries out the same `Function` update and the data is not “dirty” yet.
* Computation/communication overlap, with explicit prodding of the asynchronous progress engine to make sure that non-blocking communications execute in background during the compute part.
* Halo exchanges could also be reshuffled to maximize the extension of the computation/communication overlap region.
To run with all these optimizations enabled, instead of `DEVITO_MPI=1`, users should set `DEVITO_MPI=full`, or, equivalently
```
%%px
configuration['mpi'] = 'full'
```
We could now peek at the generated code to see that things now look differently.
```
%%px
op = Operator(Eq(u.forward, u.dx + 1))
# Uncomment below to show code (it's quite verbose)
# print(op)
```
The body of the time-stepping loop has changed, as it now implements a classic computation/communication overlap scheme:
* `haloupdate0` triggers non-blocking communications;
* `compute0` executes the core domain region, that is the sub-region which doesn't require reading from halo data to be computed;
* `halowait0` waits for and terminates the non-blocking communications;
* `remainder0`, which internally calls `compute0`, computes the boundary region requiring the now up-to-date halo data.
### Tutorial: Download data with ```tf.data```
```
import tensorflow_datasets as tfds
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
# Mandatory: to launch
#tf.enable_eager_execution()
mnist_data, info = tfds.load("mnist", with_info=True, as_supervised=True)
mnist_train, mnist_test = mnist_data["train"], mnist_data["test"]
mnist_train
mnist_test
mnist_example, = mnist_train.take(1)
image, label = mnist_example  # as_supervised=True yields (image, label) tuples
plt.imshow(image.numpy()[:, :, 0].astype(np.float32), cmap=plt.get_cmap("gray"))
print("Label: %d" % label.numpy())
print(info.features)
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.splits["train"].num_examples)
print(info.splits["test"].num_examples)
info = tfds.builder("mnist").info  # same info object as returned by tfds.load
print(info)
train_ds = tfds.load("mnist", split="train")
train_ds
import tensorflow_datasets as tfds
train_ds, test_ds = tfds.load("mnist", split=["train", "test"], as_supervised=True)
```
<img src="images/two-layer-network.png" style="height:150px">
### Multilayer Neural Networks
In this lesson, you'll learn how to build multilayer neural networks with TensorFlow. Adding a hidden layer to a network allows it to model more complex functions. Also, using a non-linear activation function on the hidden layer lets it model non-linear functions.
We shall learn about ReLU, or rectified linear unit: a non-linear function that is $0$ for negative inputs and $x$ for all inputs $x > 0$.
Next, you'll see how a ReLU hidden layer is implemented in TensorFlow.
### TensorFlow ReLUs
TensorFlow provides the ReLU function as ```tf.nn.relu()```, as shown below.
```
import tensorflow as tf
# Hidden Layer with ReLU activation function
hidden_layer = tf.add(tf.matmul(features, hidden_weights), hidden_biases)
hidden_layer = tf.nn.relu(hidden_layer)
output = tf.add(tf.matmul(hidden_layer, output_weights), output_biases)
```
The above code applies the ```tf.nn.relu()``` function to the hidden_layer, effectively turning off any negative weights and acting like an on/off switch. Adding additional layers, like the output layer, after an activation function turns the model into a nonlinear function. This nonlinearity allows the network to solve more complex problems.
### Quiz
```
# Solution is available in the other "solution.py" tab
import tensorflow as tf
output = None
hidden_layer_weights = [
[0.1, 0.2, 0.4],
[0.4, 0.6, 0.6],
[0.5, 0.9, 0.1],
[0.8, 0.2, 0.8]]
out_weights = [
[0.1, 0.6],
[0.2, 0.1],
[0.7, 0.9]]
# Weights and biases
weights = [
tf.Variable(hidden_layer_weights),
tf.Variable(out_weights)]
biases = [
tf.Variable(tf.zeros(3)),
tf.Variable(tf.zeros(2))]
# Input
features = tf.Variable([[1.0, 2.0, 3.0, 4.0], [-1.0, -2.0, -3.0, -4.0], [11.0, 12.0, 13.0, 14.0]])
# TODO: Create Model
# Hidden Layer with ReLU activation function
hidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])
hidden_layer = tf.nn.relu(hidden_layer)
output = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])
# TODO: Print session results
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(output))
```
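If you want to sanity-check the quiz without starting a TF session, the same forward pass can be written in plain NumPy (same weights and inputs as above, zero biases):

```
import numpy as np

features_np = np.array([[1., 2., 3., 4.], [-1., -2., -3., -4.], [11., 12., 13., 14.]])
W_hidden = np.array([[0.1, 0.2, 0.4], [0.4, 0.6, 0.6], [0.5, 0.9, 0.1], [0.8, 0.2, 0.8]])
W_out = np.array([[0.1, 0.6], [0.2, 0.1], [0.7, 0.9]])

hidden_np = np.maximum(features_np @ W_hidden, 0.0)  # linear layer + ReLU
output_np = hidden_np @ W_out                        # output layer, zero biases

print(output_np)
```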
## 1. Deep Neural Network in TensorFlow
What I have learnt:
* ```tf.reshape``` is used to turn pictures of size $n \times m$ into rows of a feature matrix with $n \times m$ columns
* How to train a one hidden layer NN
You've seen how to build a logistic classifier using TensorFlow. Now you're going to see how to use the logistic classifier to build a deep neural network.
### Step by Step
In the following walkthrough, we'll step through TensorFlow code written to classify the letters in the MNIST database. If you would like to run the network on your computer, the file is provided here. You can find this and many more examples of TensorFlow at [Aymeric Damien's GitHub repository](https://github.com/aymericdamien/TensorFlow-Examples).
### Code
### TensorFlow MNIST
```
import tensorflow as tf
# Parameters
learning_rate = 0.001
training_epochs = 20
batch_size = 128 # Decrease batch size if you don't have enough memory
display_step = 1
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
from tensorflow.examples.tutorials.mnist import input_data
from keras.utils import to_categorical
import tensorflow as tf
from tensorflow import keras
import numpy as np
mnist = keras.datasets.mnist
(train_features, train_labels), (test_features, test_labels) = mnist.load_data()
train_features = np.reshape(train_features, [-1, n_input])
#test_features = np.reshape(test_features, [-1, n_input])
# to_categorical: one hot encoding
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
#mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
```
You'll use the MNIST dataset provided by TensorFlow, which batches and One-Hot encodes the data for you.
### Learning Parameters
The focus here is on the architecture of multilayer neural networks, not parameter tuning, so here we'll just give you the learning parameters.
### Hidden Layer Parameters
```
n_hidden_layer = 256 # layer number of features
```
The variable n_hidden_layer determines the size of the hidden layer in the neural network. This is also known as the width of a layer.
### Weights and Biases
```
# Store layers weight & bias
weights = {
'hidden_layer': tf.Variable(tf.random_normal([n_input, n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes]))
}
biases = {
'hidden_layer': tf.Variable(tf.random_normal([n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
```
Deep neural networks use multiple layers, with each layer requiring its own weights and biases. The ```'hidden_layer'``` weight and bias is for the hidden layer. The ```'out'``` weight and bias is for the output layer. If the neural network were deeper, there would be weights and biases for each additional layer.
### Input
```
# tf Graph input
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
y = tf.placeholder(tf.float32, shape=[None, n_classes])
x_flat = tf.reshape(x, [-1, n_input])
```
The MNIST data is made up of 28px by 28px images with a single <a target="_blank" href="https://en.wikipedia.org/wiki/Channel_(digital_image)">channel</a>. The ```tf.reshape()``` function above reshapes each 28px by 28px matrix in ```x``` into a row vector of 784 values.
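The flattening step is easy to check in isolation with NumPy: reshaping with `-1` keeps the batch dimension and merges the rest (the two fake images here are made-up data):

```
import numpy as np

imgs = np.arange(2 * 28 * 28, dtype=np.float32).reshape(2, 28, 28, 1)  # two fake 28x28 images
imgs_flat = imgs.reshape(-1, 784)  # same effect as tf.reshape(x, [-1, n_input])

print(imgs_flat.shape)  # → (2, 784)
```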
### Multilayer Perceptron
<img src="images/multi-layer.png" style="height:150px">
```
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x_flat, weights['hidden_layer']),\
biases['hidden_layer'])
layer_1 = tf.nn.relu(layer_1)
# Output layer with linear activation
logits = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])
```
You've seen the linear function ```tf.add(tf.matmul(x_flat, weights['hidden_layer']), biases['hidden_layer'])``` before, also known as ```xw + b```. Combining linear functions together using a ReLU will give you a two-layer network.
### Optimizer
```
# Define loss and optimizer
cost = tf.reduce_mean(\
tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\
.minimize(cost)
```
This is the same optimization technique used in the Intro to TensorFLow lab.
### Session
```
def batches(batch_size, features, labels):
    """
    Create batches of features and labels
    :param batch_size: The batch size
    :param features: List of features
    :param labels: List of labels
    :return: Batches of (Features, Labels)
    """
    assert len(features) == len(labels)
    output_batches = []
    sample_size = len(features)
    for start_i in range(0, sample_size, batch_size):
        end_i = start_i + batch_size
        batch = [features[start_i:end_i], labels[start_i:end_i]]
        output_batches.append(batch)
    return output_batches
train_feed_dict
# Initializing the variables
init = tf.global_variables_initializer()
train_batches = batches(batch_size, train_features, train_labels)
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
# Loop over all batches
for batch_features, batch_labels in train_batches:
train_feed_dict = {
x_flat: batch_features,
y: batch_labels}
loss = sess.run(optimizer, feed_dict=train_feed_dict)
# Calculate accuracy for test dataset
#test_accuracy = sess.run(
# accuracy,
# feed_dict={features: test_features, labels: test_labels})
#print('Test Accuracy: {}'.format(test_accuracy))
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
```
The MNIST library in TensorFlow provides the ability to receive the dataset in batches. Calling the ```mnist.train.next_batch()``` function returns a subset of the training data.
### Deeper Neural Network
<img src="images/layers.png" style="height:200px">
That's it! Going from one layer to two is easy. Adding more layers to the network allows you to solve more complicated problems.
* A deep network can detect simple shapes at each stage and combine them, ending up with something as complex as a face at the final layer
# Permutation and the t-test
In [the idea of permutation](https://matthew-brett.github.io/cfd2019/chapters/05/permutation_idea),
we use permutation to compare a difference between two groups of numbers.
In our case, each number corresponded to one person in the study. The number
for each subject was the number of mosquitoes flying towards them. The subjects
were from two groups: people who had just drunk beer, and people who had just
drunk water. There were 25 subjects who had drunk beer, and therefore, 25
numbers of mosquitoes corresponding to the "beer" group. There were 18
subjects who had drunk water, and 18 numbers corresponding to the "water" group.
Here we repeat the permutation test, as a reminder.
As before, you can download the data from [mosquito_beer.csv](https://matthew-brett.github.io/cfd2019/data/mosquito_beer.csv).
See [this
page](https://github.com/matthew-brett/datasets/tree/master/mosquito_beer) for
more details on the dataset, and [the data license page](https://matthew-brett.github.io/cfd2019/data/license).
```
# Import Numpy library, rename as "np"
import numpy as np
# Import Pandas library, rename as "pd"
import pandas as pd
# Set up plotting
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
```
Read in the data, and get the numbers of mosquitoes flying towards the beer
drinkers, and towards the water drinkers, after they had drunk their beer or
water. See [the idea of permutation](https://matthew-brett.github.io/cfd2019/chapters/05/permutation_idea) for details.
```
# Read in the data, select beer and water values.
mosquitoes = pd.read_csv('mosquito_beer.csv')
after_rows = mosquitoes[mosquitoes['test'] == 'after']
beer_rows = after_rows[after_rows['group'] == 'beer']
beer_activated = np.array(beer_rows['activated'])
water_rows = after_rows[after_rows['group'] == 'water']
water_activated = np.array(water_rows['activated'])
```
There are 25 values in the beer group, and 18 in the water group:
```
print('Number in beer group:', len(beer_activated))
print('Number in water group:', len(water_activated))
```
We are interested in the difference between the means of these numbers:
```
observed_difference = np.mean(beer_activated) - np.mean(water_activated)
observed_difference
```
In the permutation test we simulate an ideal (null) world in which there is no
average difference between the numbers in the two groups. We do this by
pooling the beer and water numbers, shuffling them, and then making fake beer
and water groups, where we know, from the shuffling, that the average difference
will, in the long run, be zero. By repeating this shuffle-and-split step many
times we build up the distribution of the average difference. This is the
*sampling distribution* of the mean difference:
```
pooled = np.append(beer_activated, water_activated)
n_iters = 10000
fake_differences = np.zeros(n_iters)
for i in np.arange(n_iters):
    np.random.shuffle(pooled)
    fake_differences[i] = np.mean(pooled[:25]) - np.mean(pooled[25:])
plt.hist(fake_differences)
plt.title('Sampling difference of means');
```
We can work out the proportion of the sampling distribution that is greater
than or equal to the observed value, to get an estimate of the probability of
the observed value, if we are in fact in the null (ideal) world:
```
permutation_p = np.count_nonzero(
fake_differences >= observed_difference)/ n_iters
permutation_p
```
Remember that the *standard deviation* is a measure of the spread of
a distribution.
```
sampling_sd = np.std(fake_differences)
sampling_sd
```
We can use the standard deviation as a unit of distance in the distribution.
One way of getting an idea of how extreme the observed value is, is to ask how
many standard deviations the observed value lies from the center of the
distribution, which is zero.
```
like_t = observed_difference / sampling_sd
like_t
```
Notice the variable name `like_t`. This number is rather like the famous [t
statistic](https://en.wikipedia.org/wiki/T-statistic).
The difference between this `like_t` value and the *t statistic* is that the t
statistic is the observed difference divided by another *estimate* of the
standard deviation of the sampling distribution. Specifically it is an
estimate that relies on the assumption that the `beer_activated` and
`water_activated` numbers come from a simple bell-shaped [normal
distribution](https://en.wikipedia.org/wiki/Normal_distribution).
The specific calculation relies on calculating the *prediction errors* when we
use the mean from each group as the prediction for the values in the group.
```
beer_errors = beer_activated - np.mean(beer_activated)
water_errors = water_activated - np.mean(water_activated)
all_errors = np.append(beer_errors, water_errors)
```
The estimate for the standard deviation of the sampling distribution follows
this formula. The derivation of the formula is well outside the scope of the
class.
```
# The t-statistic estimate.
n1 = len(beer_activated)
n2 = len(water_activated)
est_error_sd = np.sqrt(np.sum(all_errors ** 2) / (n1 + n2 - 2))
sampling_sd_estimate = est_error_sd * np.sqrt(1 / n1 + 1 / n2)
sampling_sd_estimate
```
Notice that this is rather similar to the estimate we got directly from the
permutation distribution:
```
sampling_sd
```
The t statistic is the observed mean difference divided by the estimate of the
standard deviation of the sampling distribution.
```
t_statistic = observed_difference / sampling_sd_estimate
t_statistic
```
This is the same t statistic value calculated by the *independent sample t
test* routine from Scipy:
```
from scipy.stats import ttest_ind
t_result = ttest_ind(beer_activated, water_activated)
t_result.statistic
```
The equivalent probability from a t test is also outside the scope of the
course, but, if the data we put into the t test is more or less compatible with
a normal distribution, then the matching p value is similar to that of the
permutation test.
```
# The "one-tailed" probability from the t-test.
t_result.pvalue / 2
# The permutation p value is very similar.
permutation_p
```
The permutation test is more general than the t test, because the t test relies
on the assumption that the numbers come from a normal distribution, but the
permutation test does not.
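A minimal sketch of that point, run on made-up, clearly non-normal (exponential) samples rather than the mosquito data — the scale values and group sizes below are arbitrary, but the permutation machinery is identical:

```python
import numpy as np

rng = np.random.default_rng(42)
# Two skewed (exponential) samples - a normality assumption would be dubious here
group_a = rng.exponential(scale=3, size=25)
group_b = rng.exponential(scale=2, size=18)
observed = np.mean(group_a) - np.mean(group_b)

pooled = np.append(group_a, group_b)
n_iters = 10000
fake_diffs = np.zeros(n_iters)
for i in range(n_iters):
    rng.shuffle(pooled)
    fake_diffs[i] = np.mean(pooled[:25]) - np.mean(pooled[25:])

# Proportion of the null world at least as extreme as the observed difference
p = np.count_nonzero(fake_diffs >= observed) / n_iters
print(p)
```

The t test applied to these samples would lean on a normality assumption that the data visibly violate; the permutation p value needs no such assumption.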
# Model evaluation metrics
<img src="img/metrics.png">
- Red (bottom left) = prevalence-invariant metrics.
- Blue (top right) = prevalence-dependent metrics.
- Purple (bottom right) = composite metrics.
## 1. Introduction
The goal of this exercise is to examine, in practice, the evaluation measures for the classification models we have discussed.
To do so, we will try to predict the probability of an employee leaving the company. A dataset is available for this.
The fields included are:
1. Last evaluation
2. Number of projects worked on
3. Average monthly hours worked
4. Time at the company
5. Whether they had a work accident
6. Whether they received a promotion in the last year
7. Salary level
The goal, therefore, is to predict the probability $P(left=1 | X)$
## 2. Evaluation metrics for classification problems
As usual, import the libraries and the dataset
```
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df = pd.read_csv('HR_comma_sep.csv')
df.sample(10)
```
We build the matrix of predictors ($X$) and the target ($y$)
```
train_cols = ['satisfaction_level', 'last_evaluation', 'number_project', 'average_montly_hours',
'time_spend_company', 'Work_accident', 'promotion_last_5years']
X = df[train_cols]
y = df['left']
```
Split into train and test sets:
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
```
### 2.1 Training a first classifier
As a first step (and to keep the problem simple), we start by training a logistic regression.
```
clf = LogisticRegression(C=1e10)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```
### 2.2 Metrics: Accuracy
As you may recall, accuracy is computed as the proportion of correctly classified samples out of the total number of samples.
```
from sklearn.metrics import accuracy_score
print('Accuracy=', accuracy_score(y_test, y_pred))
```
That is, in this case we find that 76% of the cases - in the test set - were classified correctly.
Now, how good is this classifier? What does it mean that we can classify this proportion of cases correctly?
A first way to start answering this question is to compare its performance against a very simple and (almost) trivial classifier: it is usually called the "null classifier" and consists simply of always predicting the most frequent class.
```
y_test.value_counts()
y_test.mean()
```
That is, 23% of the cases in the test set are 1, meaning they will leave the company. The proportion of 0s (cases that will not leave the company) is therefore:
```
1.0 - y_test.mean()
```
Our simple model, then, would always predict zero. If we made our predictions this way, what accuracy would we get? We could expect an accuracy close to 76%. That is, with no other information, we would expect to be right in 76% of cases.
Seen this way, the logistic regression model does not look so good: it does not seem to improve much over the trivial model. Had we looked only at accuracy, we might have been misled when evaluating our model. That is why it is usually useful to consider other evaluation metrics.
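scikit-learn ships this baseline as `DummyClassifier`. A small self-contained sketch on synthetic data (the ~23%/77% class balance below is made up to mirror the example) shows that its accuracy equals the majority-class proportion:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# Imbalanced synthetic target: roughly 77% zeros, 23% ones
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.23).astype(int)
X = rng.random((1000, 3))  # the features are irrelevant to this baseline

# Always predict the most frequent class
null_clf = DummyClassifier(strategy='most_frequent').fit(X, y)
null_acc = null_clf.score(X, y)
print(null_acc, 1 - y.mean())  # the two numbers coincide
```

Comparing any real model's accuracy against this baseline is a quick sanity check before looking at fancier metrics.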
### 2.3 Metrics: Confusion matrix
Although we have worked with this before, so far we have done so intuitively. Let's try to understand better what a confusion matrix is.
Basically, it is a contingency table that tabulates the distribution of the analyzed cases according to their actual ("observed") value and the value estimated by the model ("predicted").
In `confusion_matrix` it is important to remember that the first argument corresponds to the observed values and the second to the predicted values:
```
from sklearn.metrics import confusion_matrix
confusion = confusion_matrix(y_test, y_pred)
print(confusion)
```
The observed values appear in the rows (`y_test`). The values predicted by the model appear in the columns (`y_pred`).
**Confusion matrix**
| | Pred Stay ($y\_pred=0$)| Pred Left ($y\_pred=1$)| Total|
| :-------------------- |:----------------------:| :---------------------:|-----:|
| Obs Stay ($y\_test=0$) | 3465 | 304 |3769 |
| Obs Left ($y\_test=1$) | 898 | 283 |1181 |
| Total | 4363 | 587 |N=4950|
Now, each cell gives information about the classifier's performance:
* **True Positives (TP):** we correctly predicted that the employee will leave the company (283)
* **True Negatives (TN):** we correctly predicted that the employee will stay at the company (3465)
* **False Positives (FP):** we predicted that the employee would leave the company, but they stayed (304)
* **False Negatives (FN):** we predicted that the employee would stay, but they left the company (898)
<img src="img/0_RKKb0xdKkkjT4h2__.jpg">
To do some calculations, let's assign these cases to variables:
```
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
```
### 2.4 Metrics computed from the confusion matrix: Accuracy
```
print((TP + TN) / float(TP + TN + FP + FN))
print(accuracy_score(y_test, y_pred))
```
### 2.5 Metrics computed from the confusion matrix: Classification Error
Basically, it is the complement of accuracy. It quantifies the total error made by the classifier:
```
class_error = (FP + FN) / float(TP + TN + FP + FN)
print(class_error)
print(1 - accuracy_score(y_test, y_pred))
```
### 2.6 Metrics computed from the confusion matrix: Sensitivity (or recall)
It measures the model's ability (how "sensitive" it is) to detect the true positives (TP) among all the cases that are actually positive (FN + TP). In our example: of all the people who will leave the company, how many can the model classify correctly?
```
from sklearn.metrics import recall_score
sensitivity = TP / float(FN + TP)
print(sensitivity)
print(recall_score(y_test, y_pred))
```
### 2.7 Metrics computed from the confusion matrix: Specificity
It measures the ability to detect the true negatives (TN) among all the cases that are actually negative (TN + FP). How "specific" (or selective) is the model when predicting positive instances?
```
specificity = TN / (TN + FP)
print(specificity)
```
Our model seems to be very specific and not very sensitive.
### 2.8 Metrics computed from the confusion matrix: Precision
It measures how "precise" the classifier is when predicting positive instances. That is, when the classifier predicts a positive value, how often is that prediction correct?
```
from sklearn.metrics import precision_score
precision = TP / float(TP + FP)
print(precision)
print(precision_score(y_test, y_pred))
```
### 2.9 Metrics computed from the confusion matrix: F1-Score
It is the harmonic mean of precision and recall.
```
from sklearn.metrics import f1_score
f1 = 2*((precision*sensitivity)/(precision+sensitivity))
print(f1)
print(f1_score(y_test,y_pred))
```
### 3. Conclusions
The confusion matrix gives a more complete picture of the classifier's performance.
Which metrics should you focus on? It obviously depends on the problem and the objective.
* **Example - SPAM filter:** FNs seem more acceptable (spam reaches the inbox) than FPs (a useful e-mail gets filtered as SPAM).
* **Example - fraud detector:** in this case it seems preferable to tolerate FPs (transactions that are NOT fraudulent flagged as fraudulent) than to let fraudulent transactions go undetected (FNs). We would want to maximize sensitivity.
## 4. Bonus: cross-validation with other metrics
Suppose we want to estimate the generalization error of our model, but using cross-validation. The function `cross_val_score` can be used, changing the metric to be evaluated:
```
from sklearn.model_selection import StratifiedKFold, cross_val_score
kf = StratifiedKFold(n_splits=10, shuffle = True)
print('F1-CV=', np.mean(cross_val_score(clf, X, y, cv=kf, scoring='f1')))
print('Recall-CV=', np.mean(cross_val_score(clf, X, y, cv=kf, scoring='recall')))
print('Precision-CV=', np.mean(cross_val_score(clf, X, y, cv=kf, scoring='precision')))  # note: sklearn has no built-in 'specificity' scoring string
```
# Choosing between two metrics: an example from healthcare
The sensitivity of a method reflects how effective it is at correctly identifying, among all the individuals evaluated, those who actually have the characteristic of interest. In the BMI example, sensitivity measures how well the method identifies those who are in fact obese.
The specificity of a method reflects how effective it is at correctly identifying the individuals who do not have the condition of interest (in the example given, the individuals who are not in fact obese).
Thus, methods for diagnosing excess weight that have low sensitivity are more prone to so-called false-negative results (failing to detect children who really are obese), while methods with low specificity are more prone to false-positive results (labeling as obese children who are not).
The choice of a higher or lower sensitivity for the diagnostic method, given its inverse relationship with specificity, depends on the application. From a public-health standpoint, it is especially important that the method for diagnosing childhood obesity have good sensitivity, so as to identify as many as possible (ideally 100%) of the children who are obese.
In this situation, it is less serious to classify some non-obese children as obese (they would not be harmed by nutritional-education interventions, for example) than to fail to detect children who really are obese, since the latter, left untreated, face a greater risk of developing diseases in childhood and of remaining obese in adult life, with great harm to their health.
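The trade-off described above comes down to where you place the decision threshold. A hedged sketch on synthetic data (the threshold values 0.5 and 0.2 are arbitrary choices for illustration) shows that lowering the threshold can only raise sensitivity and can only lower specificity:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced problem standing in for the obesity screen
X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X)[:, 1]

def sens_spec(threshold):
    """Sensitivity and specificity when flagging proba >= threshold as positive."""
    pred = (proba >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y == 1))
    fn = np.sum((pred == 0) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0))
    fp = np.sum((pred == 1) & (y == 0))
    return tp / (tp + fn), tn / (tn + fp)

sens05, spec05 = sens_spec(0.5)
sens02, spec02 = sens_spec(0.2)
print('threshold 0.5:', sens05, spec05)
print('threshold 0.2:', sens02, spec02)  # higher sensitivity, lower specificity
```

For a public-health screen like the one discussed, one would deliberately pick a lower threshold, accepting the drop in specificity to raise sensitivity.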
# Advanced - The ROC Curve
The term receiver operating characteristic (ROC) space is a mouthful, and uttering the phrase is a sure way to make non-statisticians' eyes glaze over. But it turns out to be really quite simple.
Every decision we make is a trade-off. We do not know the "right" choice, so we have to minimize our chance of being wrong. But choices have two ways of being wrong:
1) you decide to do something when you shouldn't have
2) you decide not to do something when you should have
These are the two axes of ROC space.
Let's use an everyday example: you are trying to choose a place to eat. If you only go to places you have enjoyed in the past, you won't have many bad meals, but you also won't try some great places. If you choose where to go at random, you may have many terrible meals, but you will also find some gems you would otherwise miss.
The first method has a high false-negative rate, meaning you rule out many places you might have enjoyed. It also has a very low false-positive rate, because you will rarely go to a place you don't like.
The second method is the reverse, with a low false-negative rate and a high false-positive rate. Each person strikes a different balance between these two rates; some prefer to try new things, others prefer to stick with what they know, and others sit somewhere in the middle.
If these terms are new to you, take a second to consider them, because they are fundamental to performance testing. To restate the definitions: a false negative means you called something negative (don't go) when in reality it was positive (you should have gone; you would have enjoyed it). A false positive is the opposite: you called it positive (so you tried it), but you shouldn't have (because you didn't like it).
The important thing here is that, in any decision process, these errors balance each other out. If false positives go up, false negatives go down, and vice versa.
ROC space is a way of visualizing this trade-off.
<img src="img/two-point-roc1.png">
The orange point shows a person who only goes to the same places (they have bad meals 20% of the time, but miss 80% of the good restaurants). The blue point is someone who tries new things (they have bad meals 80% of the time, but only miss 20% of the good food experiences).
So far so good, I hope. But ROC space is much more interesting than that, because it can also show us how good a decision is. The chance of being happy with a food decision is determined not only by how you trade off the two types of error, but also by how accurately you can identify a good place to eat.
You can do much better than chance when choosing a place to eat. You could ask your friends, pick a place that serves a cuisine you generally like, look at reviews, and so on.
As you start making better decisions, your position in ROC space moves up and to the left. And this has the interesting property of making the trade-off between false positives and false negatives form a curve. That curve is, unsurprisingly, called the ROC curve, and it is equally meaningful for sets of decision makers (for example, a panel of experts) and for individual decision makers (such as a person consciously varying their false-negative and false-positive rates).
The point you choose for balancing false positives and false negatives is called the operating point, which is part of where the receiver operating characteristic curve gets its name. Another name for it is the threshold, which may be more intuitive. Someone who minimizes how many bad meals they eat will only trust restaurants they already like; they have a high threshold for making a decision. A person with a low threshold is willing to try anything; they have no filter.
<img src="img/roc-auc.png">
The grey area tells us how good a decision maker is, independently of how they balance false positives and false negatives. The more grey, the better the decisions.
Since decisions are better the further up and to the left they sit, the area under the ROC curve, also known as ROC AUC or AUROC, is a great metric for understanding how good your decisions are.
**Summary**
- All decisions trade off the risk of false positives against false negatives.
- ROC space is how we visualize that trade-off.
- Decision makers (i.e., people, artificial-intelligence systems) live at a certain point in ROC space, although they can move around in the space by changing their threshold.
- Better decision makers sit further up and to the left in ROC space.
- Multiple decision makers of the same skill or expertise tend to live on a curve in ROC space, which we call the ROC curve. This applies both to groups of decision makers and to a single decision maker at different thresholds.
- The area under the ROC curve is an overall measure of how good a system is at making decisions, independently of the decision threshold. It can be seen as a measure of expertise.
- Decision makers below the curve are sub-optimal.
To go further, see: https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/
**Properties**
Prevalence invariance
Prevalence is the proportion of positive to negative examples in the data. To remove prevalence from consideration, we simply must not compare those two groups; we need to look at the positives and the negatives separately from each other.
The ROC curve achieves this by plotting sensitivity on the Y axis and specificity on the X axis.
Sensitivity is the ratio of true positives (positive cases correctly identified as positive by the decision maker) to the total number of positive cases in the data. As you can see, it only looks at the positives.
Specificity has the same property, being the ratio of true negatives to the total number of negatives. Only the negatives matter.
**How to interpret the AUC**
The number reflects the probability of correctly ranking (putting in order) any randomly chosen negative/positive pair of examples.
(We will not go deeper into this here, but it is mathematically important through its deep connection to the Mann-Whitney U statistic: https://www.alexejgossmann.com/auc/ )
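That ranking interpretation can be checked numerically: computing the AUC with scikit-learn and then directly as the fraction of correctly ordered positive/negative pairs gives the same number (a sketch on synthetic scores invented for the purpose; ties are counted as half a correct ordering):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=200)
scores = y * 0.5 + rng.normal(size=200)  # positives tend to score higher

auc = roc_auc_score(y, scores)

# Pairwise check: probability that a random positive outranks a random negative
pos = scores[y == 1]
neg = scores[y == 0]
wins = (pos[:, None] > neg[None, :]).sum()
ties = (pos[:, None] == neg[None, :]).sum()
pairwise = (wins + 0.5 * ties) / (len(pos) * len(neg))
print(auc, pairwise)  # the two agree
```

This is exactly the Mann-Whitney U connection mentioned above, made concrete.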
<img src="img/slide14.png">
If you want few false positives - for example, for an algorithm that suggests a very expensive treatment for a mild disease - you should go with curve A: it is very steep in the lower-left corner, which means you get decent sensitivity while keeping the FPR very low.
Curve B is good for the opposite case, since it gives very high sensitivity at moderate specificities. You should choose curve B when the cost of missing a positive case (as in the disease example) is much higher than the cost of a wrong positive call (in our example, this would mean a highly deadly disease, where telling someone they do not have it would carry a huge cost).
**Another way to interpret the AUROC**
(inspired by this analysis of autism: https://www.spectrumnews.org/opinion/viewpoint/quest-autism-biomarkers-faces-steep-statistical-challenges/ )
The chart below is exactly our confusion matrix plotted as probability distributions - the numbers of true positives, true negatives, false positives and false negatives. We can see that, at this threshold, there will be more false negatives and fewer false positives.
<img src="img/threshold2.png">
In this case the threshold has already been selected (the blue line in the figure is the decision boundary), and we can see it reflected as the open circle on the ROC curve. But we want to use our ROC curve to select a threshold. What happens when we vary the threshold?
<img src="img/fig-2-11.gif">
As we vary the threshold/decision boundary that separates the 0/1 targets, the position moves along the ROC curve, but the curve itself remains unchanged. This makes sense: the threshold changes the ratio of true negatives to condition negatives and of true positives to condition positives, but it does not change the decision maker's expertise.
But this raises the question - what is expertise? (In this case, the expertise of the physician diagnosing autism, or of our model competing with the physician)
<img src="img/fig-3-1.gif">
Expertise is how well the decision maker can separate the classes. The less they overlap, the fewer the errors, and the further the ROC curve moves up and to the left.
And this makes sense in the deep-learning setting too. When we optimize the log loss, we are trying to push the classes as far apart as possible. So we can see the direct comparison here: the better we separate our classes (the better trained our model is), the higher the AUC, regardless of threshold or prevalence. That is why I think of expertise as proportional to the AUC.
The other thing this shows nicely is that, although the number of errors changes with expertise, the numbers of condition-positive and condition-negative cases do not. Prevalence is unchanged.
So we need to visualize the final element: what happens to the probability density functions and the ROC curve as prevalence changes? Knowing what we do from the rest of the post, we would expect the relative size of the peaks to change, but not the ratios that make up the ROC curve. It turns out that is exactly right.
<img src="img/fig5.gif">
The ROC curve stays the same because the ratio of true positives to condition positives stays the same (as does TN:CN). But the ratio of true positives to false positives changes dramatically. The two-color bar at the bottom reflects precision, and it goes from above 80% to below 10% as prevalence changes.
The original page has an interactive chart: https://www.spectrumnews.org/opinion/viewpoint/quest-autism-biomarkers-faces-steep-statistical-challenges/
```
import sys
sys.path.append('../build')
import numpy as np
import libry as ry
K = ry.Config()
K.addFile("../rai-robotModels/pr2/pr2.g")
K.addFile("../rai-robotModels/objects/tables.g")
K.addFrame("obj0", "table1", "type:ssBox size:[.1 .1 .2 .02] color:[1. 0. 0.], contact, logical={ object }, joint:rigid, Q:<t(0 0 .15)>" )
K.addFrame("obj1", "table1", "type:ssBox size:[.1 .1 .2 .02] color:[1. 0. 0.], contact, logical={ object }, joint:rigid, Q:<t(0 .2 .15)>" )
K.addFrame("obj2", "table1", "type:ssBox size:[.1 .1 .2 .02] color:[1. 0. 0.], contact, logical={ object }, joint:rigid, Q:<t(0 .4 .15)>" )
K.addFrame("obj3", "table1", "type:ssBox size:[.1 .1 .2 .02] color:[1. 0. 0.], contact, logical={ object }, joint:rigid, Q:<t(0 .6 .15)>" )
K.addFrame("tray", "table2", "type:ssBox size:[.15 .15 .04 .02] color:[0. 1. 0.], logical={ table }, Q:<t(0 0 .07)>" );
K.addFrame("", "tray", "type:ssBox size:[.27 .27 .04 .02] color:[0. 1. 0.]" )
V = K.view()
lgp = K.lgp("../rai/test/LGP/pickAndPlace/fol-pnp-switch.g")
lgp.nodeInfo()
# this writes the initial state, which is important to check:
# do the grippers have the gripper predicate, do all objects have the object predicate, and tables the table predicate? These need to be set using a 'logical' attribute in the g-file
# the on predicate should automatically be generated based on the configuration
lgp.getDecisions()
# This is also useful to check: inspect all decisions possible in this node, which expands the node.
# If there is no good decisions, the FOL rules are buggy
lgp.walkToDecision(3)
lgp.nodeInfo()
# Using getDecisions and walkToDecision and walkToParent, you can walk to anywhere in the tree by hand
lgp.viewTree()
lgp.walkToNode("(grasp pr2R obj0) (grasp pr2L obj1) (place pr2R obj0 tray)")
lgp.nodeInfo()
# at a node, you can compute bounds, namely BT.seq (just key frames), BT.path (the full path),
# and BT.setPath (also the full path, but seeded with the BT.seq result)
lgp.optBound(ry.BT.seq, True)
lgp.nodeInfo()
komo = lgp.getKOMOforBound(ry.BT.seq)
komo.display()
komo = 0
lgp.optBound(ry.BT.path, True)
lgp.nodeInfo()
lgp.viewTree()
# finally, the full multi-bound tree search (MBTS)
# you typically want to add termination rules, i.e., symbolic goals
print("THIS RUNS A THREAD. CHECK THE CONSOLE FOR OUTPUT. THIS IS GENERATING LOTS OF FILES.")
lgp.addTerminalRule("(on obj0 tray) (on obj1 tray) (on obj2 tray)")
lgp.run(2)
# wait until you have some number of solutions found (repeat executing this line...)
lgp.numSolutions()
# query the optimization features of the 0. solution
lgp.getReport(0, ry.BT.seqPath)
# get the KOMO object for the seqPath computation of the 0. solution
komo = lgp.getKOMO(0, ry.BT.seqPath)
komo.displayTrajectory() #SOOO SLOOOW (TODO: add parameter for display speed)
# assign K to the 20. configuration of the 0. solution, check display
# you can now query anything (joint state, frame state, features)
X = komo.getConfiguration(20)
K.setFrameState(X)
lgp.stop() #stops the thread... takes a while to finish the current job
lgp.run(2) #will continue where it stopped
komo=0
lgp=0
import sys
sys.path.append('../rai/rai/ry')
import numpy as np
import libry as ry
C = ry.Config()
D = C.view()
C.addFile('../test/lgp-example.g');
lgp = C.lgp("../test/fol.g");
lgp.walkToNode("(grasp baxterR stick) (push stickTip redBall table1) (grasp baxterL redBall) ");
print(lgp.nodeInfo())
lgp.optBound(ry.BT.pose, True);
komo = lgp.getKOMOforBound(ry.BT.path)
komo.display()
input("Press Enter to continue...")
```
# Randomized Benchmarking
## Contents
1. [Introduction](#intro)
2. [The Randomized Benchmarking Protocol](#protocol)
3. [The Intuition Behind RB](#intuition)
4. [Simultaneous Randomized Benchmarking](#simultaneousrb)
5. [Predicted Gate Fidelity](#predicted-gate-fidelity)
6. [References](#references)
## 1. Introduction <a id='intro'></a>
One of the main challenges in building a quantum information processor is the non-scalability of completely
characterizing the noise affecting a quantum system via process tomography. In addition, process tomography is sensitive to noise in the pre- and post-rotation gates plus the measurements (SPAM errors). Gate set tomography can take these errors into account, but the scaling is even worse. A complete characterization
of the noise is useful because it allows for the determination of good error-correction schemes, and thus
the possibility of reliable transmission of quantum information.
Since complete process tomography is infeasible for large systems, there is growing interest in scalable
methods for partially characterizing the noise affecting a quantum system. A scalable (in the number $n$ of qubits comprising the system) and robust algorithm for benchmarking the full set of Clifford gates by a single parameter using randomization techniques was presented in [1]. The concept of using randomization methods for benchmarking quantum gates is commonly called **Randomized Benchmarking
(RB)**.
## 2. The Randomized Benchmarking Protocol <a id='protocol'></a>
We should first import the relevant qiskit classes for the demonstration:
```
# Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
# Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
# Import Qiskit classes
import qiskit
from qiskit import assemble, transpile
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
```
A RB protocol (see [1,2]) consists of the following steps:
### Step 1: Generate RB sequences
The RB sequences consist of random Clifford elements chosen uniformly from the Clifford group on $n$-qubits,
including a computed reversal element,
that should return the qubits to the initial state.
More precisely, for each length $m$, we choose $K_m$ RB sequences.
Each such sequence contains $m$ random elements $C_{i_j}$ chosen uniformly from the Clifford group on $n$-qubits, and the $(m+1)$-th element is defined as follows: $C_{i_{m+1}} = (C_{i_1}\cdot ... \cdot C_{i_m})^{-1}$. It can be found efficiently by the Gottesman-Knill theorem.
For example, we generate below several sequences of 2-qubit Clifford circuits.
```
# Generate RB circuits (2Q RB)
# number of qubits
nQ = 2
rb_opts = {}
#Number of Cliffords in the sequence
rb_opts['length_vector'] = [1, 10, 20, 50, 75, 100, 125, 150, 175, 200]
# Number of seeds (random sequences)
rb_opts['nseeds'] = 5
# Default pattern
rb_opts['rb_pattern'] = [[0, 1]]
rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
```
As an example, we print the circuit corresponding to the first RB sequence
```
rb_circs[0][0].draw()
```
One can verify that the unitary representing each RB circuit is the identity (up to a global phase).
We check this using the Aer unitary simulator.
```
# Create a new circuit without the measurement
qregs = rb_circs[0][-1].qregs
cregs = rb_circs[0][-1].cregs
qc = qiskit.QuantumCircuit(*qregs, *cregs)
for i in rb_circs[0][-1][0:-nQ]:
    qc.data.append(i)
# The Unitary is an identity (with a global phase)
u_sim = qiskit.Aer.get_backend('unitary_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
qobj = assemble(qc)
unitary = u_sim.run(qobj).result().get_unitary()
from qiskit_textbook.tools import array_to_latex
array_to_latex(unitary, pretext="\\text{Unitary} = ")
```
### Step 2: Execute the RB sequences (with some noise)
We can execute the RB sequences either using Qiskit Aer Simulator (with some noise model) or using IBMQ provider, and obtain a list of results.
By assumption each operation $C_{i_j}$ is allowed to have some error, represented by $\Lambda_{i_j,j}$, and each sequence can be modeled by the operation:
$$\textit{S}_{\textbf{i}_\textbf{m}} = \bigcirc_{j=1}^{m+1} (\Lambda_{i_j,j} \circ C_{i_j})$$
where ${\textbf{i}_\textbf{m}} = (i_1,...,i_m)$ and $i_{m+1}$ is uniquely determined by ${\textbf{i}_\textbf{m}}$.
```
# Run on a noisy simulator
noise_model = NoiseModel()
# Depolarizing error on the gates u2, u3 and cx (assuming the u1 is virtual-Z gate and no error)
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2 * p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
backend = qiskit.Aer.get_backend('qasm_simulator')
```
### Step 3: Get statistics about the survival probabilities
For each of the $K_m$ sequences the survival probability $Tr[E_\psi \textit{S}_{\textbf{i}_\textbf{m}}(\rho_\psi)]$
is measured.
Here $\rho_\psi$ is the initial state taking into account preparation errors and $E_\psi$ is the
POVM element that takes into account measurement errors.
In the ideal (noise-free) case $\rho_\psi = E_\psi = | \psi {\rangle} {\langle} \psi |$.
In practice one can measure the probability of returning to the exact initial state, i.e. all the qubits in the ground state $ {|} 00...0 {\rangle}$, or just the probability for one of the qubits to return to the ground state. Measuring the qubits independently can be more convenient if a correlated measurement scheme is not possible. Both measurements fit the same decay parameter, according to the properties of the *twirl*.
### Step 4: Find the averaged sequence fidelity
Average over the $K_m$ random realizations of the sequence to find the averaged sequence **fidelity**,
$$F_{seq}(m,|\psi{\rangle}) = Tr[E_\psi \textit{S}_{K_m}(\rho_\psi)]$$
where
$$\textit{S}_{K_m} = \frac{1}{K_m} \sum_{\textbf{i}_\textbf{m}} \textit{S}_{\textbf{i}_\textbf{m}}$$
is the average sequence operation.
### Step 5: Fit the results
Repeat Steps 1 through 4 for different values of $m$ and fit the results for the averaged sequence fidelity to the model:
$$ \textit{F}_{seq}^{(0)} \big(m,{|}\psi {\rangle} \big) = A_0 \alpha^m +B_0$$
where $A_0$ and $B_0$ absorb state preparation and measurement errors as well as an edge effect from the
error on the final gate.
$\alpha$ determines the average error-rate $r$, which is also called **Error per Clifford (EPC)**
according to the relation
$$ r = 1-\alpha-\frac{1-\alpha}{2^n} = \frac{2^n-1}{2^n}(1-\alpha)$$
(where $n=nQ$ is the number of qubits).
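A self-contained sketch of this fit on synthetic data (hypothetical parameter values; `scipy.optimize.curve_fit` stands in for the fitter used by `rb.RBFitter` below):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(m, A0, alpha, B0):
    # F_seq(m) = A0 * alpha^m + B0
    return A0 * alpha ** m + B0

n = 2  # number of qubits
lengths = np.array([1, 10, 20, 50, 75, 100, 125, 150, 175, 200], dtype=float)

# synthetic "measured" fidelities with hypothetical true alpha and small noise
rng = np.random.default_rng(1234)
true_alpha = 0.98
fidelities = decay_model(lengths, 0.75, true_alpha, 0.25) \
    + rng.normal(0, 0.003, lengths.size)

popt, _ = curve_fit(decay_model, lengths, fidelities, p0=[0.7, 0.95, 0.25])
alpha_fit = popt[1]
epc = (2 ** n - 1) / 2 ** n * (1 - alpha_fit)  # r = (2^n - 1)/2^n * (1 - alpha)
```

With these values the fit recovers `alpha` close to 0.98 and converts it to an EPC via the relation above.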
As an example, we calculate the average sequence fidelity for each of the RB sequences, fit the results to the exponential curve, and compute the parameters $\alpha$ and EPC.
```
# Create the RB fitter
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx']
shots = 200
transpiled_circs_list = []
rb_fit = rb.RBFitter(None, xdata, rb_opts['rb_pattern'])
for rb_seed, rb_circ_seed in enumerate(rb_circs):
print(f'Compiling seed {rb_seed}')
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
transpiled_circs_list.append(new_rb_circ_seed)
print(f'Simulating seed {rb_seed}')
qobj = assemble(new_rb_circ_seed, shots=shots)
job = backend.run(qobj,
noise_model=noise_model,
max_parallel_experiments=0)
# Add data to the fitter
rb_fit.add_data(job.result())
print('After seed %d, alpha: %f, EPC: %f'%(rb_seed,rb_fit.fit[0]['params'][1], rb_fit.fit[0]['epc']))
```
### Extra Step: Plot the results
```
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rb_fit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(nQ), fontsize=18)
plt.show()
```
## 3. The Intuition Behind RB <a id='intuition'></a>
The depolarizing quantum channel has a parameter $\alpha$, and works like this: with probability $\alpha$, the state remains the same as before; with probability $1-\alpha$, the state becomes the totally mixed state, namely:
$$\rho_f = \alpha \rho_i + \frac{1-\alpha}{2^n} * \mathbf{I}$$
Suppose that we have a sequence of $m$ gates, not necessarily Clifford gates,
where the error channel of the gates is a depolarizing channel with parameter $\alpha$
(same $\alpha$ for all the gates).
Then with probability $\alpha^m$ the state is correct at the end of the sequence,
and with probability $1-\alpha^m$ it becomes the totally mixed state, therefore:
$$\rho_f^m = \alpha^m \rho_i + \frac{1-\alpha^m}{2^n} * \mathbf{I}$$
Now suppose that in addition we start with the ground state;
that the entire sequence amounts to the identity;
and that we measure the state at the end of the sequence with the standard basis.
We derive that the probability of success at the end of the sequence is:
$$\alpha^m + \frac{1-\alpha^m}{2^n} = \frac{2^n-1}{2^n}\alpha^m + \frac{1}{2^n} = A_0\alpha^m + B_0$$
It follows that the probability of success, aka fidelity, decays exponentially with the sequence length, with exponent $\alpha$.
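Both identities are easy to verify numerically; a quick sketch with hypothetical values ($n = 1$, $\alpha = 0.9$, $m = 7$):

```python
import numpy as np

n, alpha, m = 1, 0.9, 7  # hypothetical example values
d = 2 ** n
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # ground state |0><0|

def depolarize(rho):
    # with probability alpha keep the state, else output the maximally mixed state
    return alpha * rho + (1 - alpha) / d * np.eye(d)

# apply the channel m times
rho_f = rho0
for _ in range(m):
    rho_f = depolarize(rho_f)

# closed form: rho_f^m = alpha^m rho_i + (1 - alpha^m)/2^n * I
closed_form = alpha ** m * rho0 + (1 - alpha ** m) / d * np.eye(d)

# success probability <0|rho_f|0> equals A0 * alpha^m + B0
A0, B0 = (d - 1) / d, 1 / d
p_success = rho_f[0, 0].real
```

Iterating the channel reproduces the closed form exactly, and the ground-state population matches $A_0\alpha^m + B_0$.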
This exponential-decay statement does not necessarily hold for channels other than the depolarizing channel. However, it turns out that if the gates are uniformly random Clifford gates, then the noise of each gate behaves on average as a depolarizing channel, with a parameter that can be computed from the channel, and we again obtain the exponential decay of the fidelity.
Formally, taking an average over a finite group $G$ (like the Clifford group) of a quantum channel $\bar \Lambda$ is also called a *twirl*:
$$ W_G(\bar \Lambda) = \frac{1}{|G|} \sum_{U \in G} U^{\dagger} \circ \bar \Lambda \circ U$$
Twirling over the Clifford group yields exactly the same result as twirling over the entire unitary group, because the Clifford group is a *2-design* of the unitary group.
## 4. Simultaneous Randomized Benchmarking <a id='simultaneousrb'></a>
RB is designed to address fidelities in multiqubit systems in two ways. For one, RB over the full $n$-qubit space
can be performed by constructing sequences from the $n$-qubit Clifford group. Additionally, the $n$-qubit space
can be subdivided into sets of qubits $\{n_i\}$ and $n_i$-qubit RB performed in each subset simultaneously [4].
Both methods give metrics of fidelity in the $n$-qubit space.
For example, it is common to perform 2Q RB on the subset of two-qubits defining a CNOT gate while the other qubits are quiescent. As explained in [4], this RB data will not necessarily decay exponentially because the other qubit subspaces are not twirled. Subsets are more rigorously characterized by simultaneous RB, which also measures some level of crosstalk error since all qubits are active.
An example of simultaneous RB (1Q RB and 2Q RB) can be found in:
https://github.com/Qiskit/qiskit-tutorials/blob/master/tutorials/noise/4_randomized_benchmarking.ipynb
## 5. Predicted Gate Fidelity <a id='predicted-gate-fidelity'></a>
If we know the errors on the underlying gates (the gateset), we can predict the EPC without running an RB experiment. This calculation verifies that an RB experiment followed by fitting yields the correct EPC value. First we need to count the number of these gates per Clifford.
Then, the two qubit Clifford gate error function ``calculate_2q_epc`` gives the error per 2Q Clifford. It assumes that the error in the underlying gates is depolarizing. This function is derived in the supplement to [5].
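The depolarizing-to-EPG conversion used below follows from the average gate error of a depolarizing channel with parameter $p$ on $n$ qubits, $\mathrm{EPG} = \frac{2^n-1}{2^n} p$; a minimal sketch (the parameter values mirror the noise model defined above):

```python
def depolarizing_epg(p, num_qubits):
    # average gate error of a depolarizing channel with parameter p
    d = 2 ** num_qubits
    return (d - 1) / d * p

p1Q, p2Q = 0.002, 0.01
epg_u2 = depolarizing_epg(p1Q, 1)      # = p1Q / 2
epg_u3 = depolarizing_epg(2 * p1Q, 1)  # = 2 * p1Q / 2
epg_cx = depolarizing_epg(p2Q, 2)      # = 3/4 * p2Q
```

These are exactly the `p1Q/2`, `2 * p1Q/2`, and `3/4 * p2Q` values plugged into `epg_q0`, `epg_q1`, and `epg_q01` in the cell below.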
```
# count the number of single and 2Q gates in the 2Q Cliffords
qubits = rb_opts['rb_pattern'][0]
gate_per_cliff = rb.rb_utils.gates_per_clifford(
transpiled_circuits_list=transpiled_circs_list,
clifford_lengths=xdata[0],
basis=basis_gates,
qubits=qubits)
for basis_gate in basis_gates:
print("Number of %s gates per Clifford: %f"%(
basis_gate,
np.mean([gate_per_cliff[qubit][basis_gate] for qubit in qubits])))
# convert from depolarizing error to epg (1Q)
epg_q0 = {'u1': 0, 'u2': p1Q/2, 'u3': 2 * p1Q/2}
epg_q1 = {'u1': 0, 'u2': p1Q/2, 'u3': 2 * p1Q/2}
# convert from depolarizing error to epg (2Q)
epg_q01 = 3/4 * p2Q
# calculate the predicted epc from underlying gate errors
pred_epc = rb.rb_utils.calculate_2q_epc(
gate_per_cliff=gate_per_cliff,
epg_2q=epg_q01,
qubit_pair=qubits,
list_epgs_1q=[epg_q0, epg_q1])
print("Predicted 2Q Error per Clifford: %e (qasm simulator result: %e)" % (pred_epc, rb_fit.fit[0]['epc']))
```
On the other hand, we can calculate the errors on the underlying gates (the gateset) from the experimentally obtained EPC. Given that we know the errors on every single-qubit gate in the RB sequence, we can predict the 2Q gate error from the EPC of the two-qubit RB experiment.
The two-qubit gate error function ``calculate_2q_epg`` gives the estimate of the error per 2Q gate. In this section we prepare single-qubit errors using the depolarizing error model. If the error model is unknown, the EPGs of those gates, for example [``u1``, ``u2``, ``u3``], can be estimated with a separate 1Q RB experiment using the utility function ``calculate_1q_epg``.
```
# use 2Q EPC from qasm simulator result and 1Q EPGs from depolarizing error model
pred_epg = rb.rb_utils.calculate_2q_epg(
gate_per_cliff=gate_per_cliff,
epc_2q=rb_fit.fit[0]['epc'],
qubit_pair=qubits,
list_epgs_1q=[epg_q0, epg_q1])
print("Predicted 2Q Error per gate: %e (gate error model: %e)" % (pred_epg, epg_q01))
```
## 6. References <a id='references'></a>
1. Easwar Magesan, J. M. Gambetta, and Joseph Emerson, *Robust randomized benchmarking of quantum processes*,
https://arxiv.org/pdf/1009.3639
2. Easwar Magesan, Jay M. Gambetta, and Joseph Emerson, *Characterizing Quantum Gates via Randomized Benchmarking*,
https://arxiv.org/pdf/1109.6887
3. A. D. Córcoles, Jay M. Gambetta, Jerry M. Chow, John A. Smolin, Matthew Ware, J. D. Strand, B. L. T. Plourde, and M. Steffen, *Process verification of two-qubit quantum gates by randomized benchmarking*, https://arxiv.org/pdf/1210.7011
4. Jay M. Gambetta, A. D. Córcoles, S. T. Merkel, B. R. Johnson, John A. Smolin, Jerry M. Chow,
Colm A. Ryan, Chad Rigetti, S. Poletto, Thomas A. Ohki, Mark B. Ketchen, and M. Steffen,
*Characterization of addressability by simultaneous randomized benchmarking*, https://arxiv.org/pdf/1204.6308
5. David C. McKay, Sarah Sheldon, John A. Smolin, Jerry M. Chow, and Jay M. Gambetta, *Three Qubit Randomized Benchmarking*, https://arxiv.org/pdf/1712.06550
```
import qiskit
qiskit.__qiskit_version__
```
<p align="center">
<img src="https://www.dbs.ie/images/default-source/logos/dbs-logo-2019-small.png" />
</p>
This code is submitted by Yogeshwaran Shanmuganathan as part of the final dissertation (thesis) for the completion of the "Master of Science in Data Analytics" at Dublin Business School, Dublin, Ireland.
```
!pip install pandas
!pip install numpy
!pip install matplotlib
!pip install pyspark
import pandas as pd
import numpy as np
# Load functionality to manipulate dataframes
from pyspark.sql import functions as fn
import matplotlib.pyplot as plt
from pyspark.sql.functions import stddev, mean, col
from pyspark.sql import SQLContext
from pyspark.sql import SparkSession
# Functionality for computing features
from pyspark.ml import feature, regression, classification, Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import feature, regression, classification, Pipeline
from pyspark.ml.feature import Tokenizer, VectorAssembler, HashingTF, Word2Vec, StringIndexer, OneHotEncoder
from pyspark.ml import clustering
from itertools import chain
from pyspark.ml.linalg import Vectors, VectorUDT
from pyspark.ml import classification
from pyspark.ml.classification import LogisticRegression, RandomForestClassifier, DecisionTreeClassifier
from pyspark.ml import evaluation
from pyspark.ml.evaluation import BinaryClassificationEvaluator,MulticlassClassificationEvaluator
from pyspark import keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol, Param
#Classification Report
from sklearn.metrics import classification_report, confusion_matrix
from google.colab import files
uploaded = files.upload()
MAX_MEMORY = "45g"
spark = SparkSession \
.builder \
.appName("how to read csv file") \
.config("spark.executor.memory", MAX_MEMORY) \
.config("spark.driver.memory", MAX_MEMORY) \
.getOrCreate()
# load master dataset
dfmaster = spark.read.format("csv").load("master.csv", delimiter = ",", header = True)
```
Preparations and Understanding before Modeling
```
# create a 0/1 column for acquistions
dfmaster = dfmaster.\
withColumn("labelacq", fn.when(col("status") == "acquired","1").otherwise("0"))
# number of rows in master table
print(dfmaster.count())
dfmaster
```
NA handling and the market column (which has too many levels)
```
# check for missing values
dfmaster.toPandas().isnull().sum()
# drop the market column because it has too many levels; the category_final column gives a better breakdown
dfmaster1 = dfmaster.drop("market")
dfmaster1 = dfmaster1.toPandas()
dfmaster1
# Replace NaN with the column mode
mode_impute_cols = ['total_raised_usd', 'time_to_first_funding', 'founded_year',
                    'age', 'status', 'country_code', 'city', 'quarter_new',
                    'investor_country_codes', 'funding_round_types', 'permaround',
                    'investor_country_code', 'funding_round_type', 'category_final',
                    'perma']
for c in mode_impute_cols:
    dfmaster1[c] = dfmaster1[c].fillna(dfmaster1[c].mode()[0])
# check for missing values
dfmaster1.isnull().sum()
# drop rows with missing values
dfmaster1drop = dfmaster1.dropna()
print(dfmaster1drop.count())
sql = SQLContext(spark)
dfmaster2 = sql.createDataFrame(dfmaster1drop)
display(dfmaster2)
```
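The mode-imputation pattern above can be illustrated on a toy pandas frame (hypothetical values):

```python
import pandas as pd

toy = pd.DataFrame({'city': ['Dublin', None, 'Dublin', 'Cork']})
# fill missing values with the most frequent value of the column
toy['city'] = toy['city'].fillna(toy['city'].mode()[0])
```

`mode()[0]` picks the most frequent value ('Dublin' here), so the missing entry is filled with it.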
String indexer, one hot encoder and casting to numerics
```
# create index for categorical variables
# use pipline to apply indexer
list1 = ["country_code","city","quarter_new","investor_country_code","funding_round_type","category_final"]
indexers = [StringIndexer(inputCol=column, outputCol=column+"_index").fit(dfmaster2) for column in list1]
pipelineindex = Pipeline(stages=indexers).fit(dfmaster2)
dfmasternew = pipelineindex.transform(dfmaster2)
# convert string to double for numerical variables
dfmasternew = dfmasternew.\
withColumn("numeric_funding_rounds", dfmasternew["funding_rounds"].cast("int")).\
withColumn("numeric_age", dfmasternew["age"].cast("int")).\
withColumn("numeric_count_investor", dfmasternew["count_investor"].cast("int")).\
withColumn("numeric_time_to_first_funding", dfmasternew["time_to_first_funding"].cast("int")).\
withColumn("numeric_total_raised_usd", dfmasternew["total_raised_usd"].cast("int")).\
withColumn("label", dfmasternew["labelacq"].cast("int"))
dfmasternew = dfmasternew.\
withColumn("funding_round_type", dfmasternew["funding_round_type"].cast("double")).\
withColumn("country_code_index", dfmasternew["country_code_index"].cast("double")).\
withColumn("city_index", dfmasternew["city_index"].cast("double")).\
withColumn("quarter_new_index", dfmasternew["quarter_new_index"].cast("double")).\
withColumn("labelacq", dfmasternew["labelacq"].cast("double"))
# save
dfone = dfmasternew
display(dfone)
print(dfone.count())
# list of index columns of categorical variables for the onehotencoder
list2 = dfone.columns[24:30]
list2
# create sparse matrix of indexed categorical columns
# use pipline to apply the encoder
onehotencoder_stages = [OneHotEncoder(inputCol=c, outputCol='onehotencoded_' + c) for c in list2]
pipelineonehot = Pipeline(stages=onehotencoder_stages)
pipeline_mode = pipelineonehot.fit(dfone)
df_coded = pipeline_mode.transform(dfone)
df_coded.show()
```
Data split, defining vector assemblers & standard scaler and creating labellist
```
# split dataset into training, validation and testing dataset
training_df, validation_df, testing_df = df_coded.randomSplit([0.6, 0.3, 0.1])
training_df.columns[30:35]
training_df.columns[36:42]
# define vector assembler with the features for the modelling
vanum = VectorAssembler(). \
setInputCols(training_df.columns[30:35]). \
setOutputCol('features_nonstd')
# define vector assembler with the features for the modelling
vacate = VectorAssembler(). \
setInputCols(training_df.columns[36:42]). \
setOutputCol('featurescate')
va = VectorAssembler(). \
setInputCols(['featuresnum','featurescate']). \
setOutputCol('features')
std = feature.StandardScaler(withMean=True, withStd=True).setInputCol('features_nonstd').setOutputCol('featuresnum')
# suffix the investor country codes because they overlap with the companies' country_code values
invcc = ['{}_{}'.format(a, "investor") for a in indexers[3].labels]
# define labellist by using the indexer stages for displaying the weights & loadings
labellist = training_df.columns[30:35] + indexers[0].labels + indexers[1].labels + indexers[2].labels + invcc + indexers[4].labels + indexers[5].labels
# null dummy for onehotencoded_country_code_index
print("null dummy for onehotencoded_country_code_index")
print(len(indexers[0].labels))
print(indexers[0].labels)
# null dummy for onehotencoded_city_index
print("null dummy for onehotencoded_city_index")
print(indexers[1].labels)
print(len(indexers[1].labels))
# null dummy for onehotencoded_quarter_new_index
print("null dummy for onehotencoded_quarter_new_index")
print(len(indexers[2].labels))
print(indexers[2].labels)
# null dummy for onehotencoded_investor_country_code_index
print("null dummy for onehotencoded_investor_country_code_index")
print(len(invcc))
print(invcc)
# null dummy for onehotencoded_funding_round_type_index
print("null dummy for onehotencoded_funding_round_type_index")
print(len(indexers[4].labels))
print(indexers[4].labels)
# null dummy for onehotencoded_category_final_index
print("null dummy for onehotencoded_category_final_index")
print(len(indexers[5].labels))
print(indexers[5].labels)
```
# Modeling
## DECISION TREE
```
# define multiclass classification evaluator
mce = MulticlassClassificationEvaluator()
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")
dt_pipeline = Pipeline(stages=[vanum, std, vacate, va, dt]).fit(training_df)
dfdt = dt_pipeline.transform(validation_df)
# print the evaluation metric (F1 score, the evaluator's default) for the decision tree pipeline
print("Decision Tree: F1 = {}".format(mce.evaluate(dfdt)))
# print the accuracy for the decision tree pipeline
print(dfdt.select(fn.expr('float(label = prediction)').alias('correct')).select(fn.avg('correct').alias("Accuracy for Decision Tree")).show())
```
## RANDOM FOREST
```
# define binary classification evaluator
bce = BinaryClassificationEvaluator()
# define default, 15 trees and 25 trees random forest classifier
rf = RandomForestClassifier(maxBins=10000, featuresCol='features', labelCol='label')
rf15 = RandomForestClassifier(numTrees=15, maxBins=10000, featuresCol='features', labelCol='label')
rf25 = RandomForestClassifier(numTrees=25, maxBins=10000, featuresCol='features', labelCol='label')
# define and fit pipelines with vector assembler and random forest classifier
rf_pipeline = Pipeline(stages=[vanum, std, vacate, va, rf]).fit(training_df)
rf_pipeline_15 = Pipeline(stages=[vanum, std, vacate, va, rf15]).fit(training_df)
rf_pipeline_25 = Pipeline(stages=[vanum, std, vacate, va, rf25]).fit(training_df)
dfrf = rf_pipeline.transform(validation_df)
dfrf_15 = rf_pipeline_15.transform(validation_df)
dfrf_25 = rf_pipeline_25.transform(validation_df)
dfrf.show()
```
## Performance
```
# print the areas under the curve for the different random forest pipelines
print("Random Forest with 20 trees: AUC = {}".format(bce.evaluate(dfrf)))
print("Random Forest 15 trees: AUC = {}".format(bce.evaluate(dfrf_15)))
print("Random Forest 25 trees: AUC = {}".format(bce.evaluate(dfrf_25)))
# print the accuracies for the different random forest pipelines
print(dfrf.select(fn.expr('float(label = prediction)').alias('correct')).select(fn.avg('correct').alias("Accuracy for Random Forest with 20 trees")).show())
print(dfrf_15.select(fn.expr('float(label = prediction)').alias('correct')).select(fn.avg('correct').alias("Accuracy for Random Forest with 15 trees")).show())
print(dfrf_25.select(fn.expr('float(label = prediction)').alias('correct')).select(fn.avg('correct').alias("Accuracy for Random Forest with 25 trees")).show())
```
The accuracy is exactly the same for all three models, which suggests that varying the number of trees makes no significant difference here. The feature importances (see below), however, do differ.
## Importances and Weights
```
# create a spark df with the 20 most important features and their importances, sorted by importance
rfw = spark.createDataFrame(pd.DataFrame(list(zip(labellist, rf_pipeline.stages[4].featureImportances.toArray())),
columns = ['column', 'importancy']).sort_values('importancy').tail(20))
display(rfw)
rfw.show()
# create a spark df with the features and their importances, sorted by importance
rf15w = spark.createDataFrame(pd.DataFrame(list(zip(labellist, rf_pipeline_15.stages[4].featureImportances.toArray())),
columns = ['column', 'importancy']).sort_values('importancy').tail(20))
display(rf15w)
rf15w.show()
# create a spark df with the features and their weights, sorted by weight
rf25w = spark.createDataFrame(pd.DataFrame(list(zip(labellist, rf_pipeline_25.stages[4].featureImportances.toArray())),
columns = ['column', 'weight']).sort_values('weight').tail(20))
display(rf25w)
rf25w.show()
```
# TESTING PERFORMANCE
We tested the best-performing Decision Tree and Random Forest models (by AUC and accuracy).
```
# Decision Tree
dfdt_test = dt_pipeline.transform(testing_df)
print("Decision Tree: F1 = {}".format(mce.evaluate(dfdt_test)))
print(dfdt_test.select(fn.expr('float(label = prediction)').alias('correct')).select(fn.avg('correct').alias("Accuracy for Decision Tree")).show())
dt_true = dfdt_test.select(['label']).collect()
dt_pred = dfdt_test.select(['prediction']).collect()
print(classification_report(dt_true, dt_pred))
# Best performing random forest model
dfrf_25_test = rf_pipeline_25.transform(testing_df)
print("Random Forest 25 trees: AUC = {}".format(bce.evaluate(dfrf_25_test)))
print(dfrf_25_test.select(fn.expr('float(label = prediction)').alias('correct')).select(fn.avg('correct').alias("Accuracy for Random Forest with 25 trees")).show())
rf_true = dfrf_25_test.select(['label']).collect()
rf_pred = dfrf_25_test.select(['prediction']).collect()
print(classification_report(rf_true, rf_pred))
```
### Create train corpus for prediction (block + nonblocked users)
* Separately for blocked and non-blocked users
* Combine abuse score + ORES score data
* Aggregate daily activity data for blocked users
* Aggregate daily activity data for nonblocked users
* Combine activity data with abuse score and ORES data
```
# import necessary packages
import os
import pandas as pd
import numpy as np
import re
# set options
pd.options.display.max_colwidth = 50
pd.set_option('display.max_colwidth', None)  # show full column contents
pd.options.mode.chained_assignment = None # default='warn'
# load abuse score file to extract max revids for all users
df_abuse = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/modeling/detection_user_level_pred_03_02.txt',sep = '\t')
df_abuse.drop(columns = ['Unnamed: 0', 'index', 'char_changes', 'revision_date',
'text', 'bl', 'occurance', 'bl_date', 'doi', 'valid_dt',
'clen', 'numb', 'caps', 'caps_ncaps', 'wordlen', 'schar',
'unique_wlen_percent', 'clen_wlen', 'neg', 'neu', 'compound'],inplace = True)
df_abuse = df_abuse.sort_values(by=['username','rev_id'])
df_abuse['sequence'] = df_abuse.groupby('username').cumcount(ascending=False)
df_abuse.head(10)
df_minrevid = df_abuse.loc[df_abuse.groupby(["username"])["sequence"].idxmax()]
df_minrevid.drop(columns = ['abuse_score','sequence'],inplace = True)
df_minrevid.head()
# save file as .csv
df_minrevid.to_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/userminrev.txt', sep = '\t',encoding='utf-8',header = True,index=False)
df_minrevid.shape
# 21k users approx in both train and test sets
```
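The sequencing trick above (sort by user and revision, `cumcount(ascending=False)` per user, then `idxmax` on the sequence to pick each user's oldest revision) can be checked on toy data:

```python
import pandas as pd

toy = pd.DataFrame({'username': ['a', 'a', 'a', 'b', 'b'],
                    'rev_id':   [12, 10, 11, 21, 20]})
toy = toy.sort_values(['username', 'rev_id']).reset_index(drop=True)

# newest revision gets sequence 0; the oldest gets the largest sequence number
toy['sequence'] = toy.groupby('username').cumcount(ascending=False)

# pick each user's oldest revision (largest sequence)
oldest = toy.loc[toy.groupby('username')['sequence'].idxmax()]
```

`oldest` then holds one row per user, namely the smallest `rev_id` for each.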
#### ORES + Abuse
```
# Combine all ores files
file_list = [x for x in os.listdir("/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/csvs_stored/Ores/Data/") if x.endswith(".csv")]
df_list = []
for file in file_list:
    print(file)
    df_list.append(pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/csvs_stored/Ores/Data/' + file))
df_ores = pd.concat(df_list)
df_ores.drop(columns = ['Unnamed: 0'],inplace = True)
df_ores.shape
df_abuse.head()
df_ores.head()
# merge ores scores with abuse data
# this data is unique at revision id level
df_abuse_ores = pd.merge(df_abuse,df_ores,how="left",on=["rev_id"])
df_abuse_ores.drop(columns = ['username_y'],inplace = True)
df_abuse_ores.shape
df_abuse_ores.columns = ['username','rev_id','abuse_score','sequence','damage_score','goodfaith_score']
df_abuse_ores.head(10)
# removing the oldest revision per user (the one with the largest sequence number)
minrevlist = df_minrevid['rev_id']
df_abuse_ores_excl = df_abuse_ores[~df_abuse_ores['rev_id'].isin(minrevlist)]
df_abuse_ores_excl.head(10)
df_abuse_ores = df_abuse_ores_excl.drop(columns = ['rev_id']) # changed
df_abuse_ores.shape
# remove users who are not in the test set (drops one-time users)
df_test = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/test_block.txt',sep = '\t')
usertest = df_test['username']
#usertest.shape
#df_abuse_ores = df_abuse_ores[df_abuse_ores['username'].isin(usertest)]
#df_abuse_ores.shape
usertest.shape
# data unique at user level
df_abuse_ores = df_abuse_ores.pivot(index='username', columns='sequence').swaplevel(0,1,axis=1)
df_abuse_ores.reset_index(inplace = True)
df_abuse_ores.columns = [f'{j}_{i}' for i, j in df_abuse_ores.columns]
df_abuse_ores.rename(columns={'_username': 'username'}, inplace=True)
df_abuse_ores.head()
df_abuse_ores.shape # matches number of users in abuse df
df_abuse_ores.columns
df_abuse_ores.columns=['username', 'abuse_score_1', 'abuse_score_2',
'abuse_score_3', 'abuse_score_4', 'abuse_score_5', 'abuse_score_6',
'abuse_score_7', 'abuse_score_8', 'abuse_score_9', 'abuse_score_10',
'abuse_score_11', 'abuse_score_12', 'abuse_score_13', 'abuse_score_14',
'damage_score_1', 'damage_score_2', 'damage_score_3',
'damage_score_4', 'damage_score_5', 'damage_score_6', 'damage_score_7',
'damage_score_8', 'damage_score_9', 'damage_score_10',
'damage_score_11', 'damage_score_12', 'damage_score_13',
'damage_score_14', 'goodfaith_score_1',
'goodfaith_score_2', 'goodfaith_score_3', 'goodfaith_score_4',
'goodfaith_score_5', 'goodfaith_score_6', 'goodfaith_score_7',
'goodfaith_score_8', 'goodfaith_score_9', 'goodfaith_score_10',
'goodfaith_score_11', 'goodfaith_score_12', 'goodfaith_score_13',
'goodfaith_score_14']
df_abuse_ores.head()
#df_abuse[df_abuse['username']=='!dea4u']
# successfully excluded the oldest revid
```
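The pivot-and-flatten idiom used above (pivot to wide format, swap the header levels, then join them with an f-string) can be sketched on a toy frame; the column names here are purely illustrative:

```python
import pandas as pd

# toy long-format data: one row per (user, sequence)
df = pd.DataFrame({
    'username': ['alice', 'alice', 'bob', 'bob'],
    'sequence': [1, 2, 1, 2],
    'abuse_score': [0.1, 0.2, 0.3, 0.4],
})

# pivot to wide format: one (value, sequence) column pair per sequence
wide = df.pivot(index='username', columns='sequence').swaplevel(0, 1, axis=1)
wide.reset_index(inplace=True)

# flatten the two-level header into single names like 'abuse_score_1';
# the index column comes out as '_username' and is renamed back
wide.columns = [f'{j}_{i}' for i, j in wide.columns]
wide.rename(columns={'_username': 'username'}, inplace=True)

print(wide.columns.tolist())  # ['username', 'abuse_score_1', 'abuse_score_2']
```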
#### Aggregate activity data for blocked users
```
# read in blocked userlist
ipblocks = pd.read_csv("/home/ec2-user/SageMaker/bucket/wiki_trust/ipblocks_fulldump_20190223.txt", sep = "\t")
ipblocks.dropna(subset=['ipb_address'],inplace=True)
# limiting to users only blocked in 2017-2018
ipblocks_df = ipblocks[(ipblocks['date'] >= 20170115)]
df_user_maxrev_bl = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/usermaxrev.txt',sep = '\t')
bllist = ipblocks_df['ipb_address']
df_user_maxrev_bl['bl'] = 0
df_user_maxrev_bl.loc[df_user_maxrev_bl['username'].isin(bllist), 'bl'] = 1  # .loc avoids chained-assignment warning
df_user_maxrev_bl.head()
df_user_maxrev_bl.bl.value_counts()
# list of block and nb users
userlist_blk = df_user_maxrev_bl['username'][df_user_maxrev_bl['bl']==1]
userlist_nonblk = df_user_maxrev_bl['username'][df_user_maxrev_bl['bl']==0]
# corpus for revision activity
big_df = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/nontext_block.txt', sep = '\t')
big_df.shape
# only keep those blocked users that are in the webscraped list
big_df = big_df[big_df['rev_user_text'].isin(userlist_blk)]
# get list of registered users
reguserlist = big_df['rev_user_text'][big_df['rev_user']!=0.0]
len(big_df.rev_user_text.unique()) # 6445
# some formatting
def col_format(dataframe):
dataframe['revision_date'] = (dataframe['rev_timestamp']/1000000).astype(np.int64)
dataframe.drop(columns = ['rev_page','rev_comment_id','rev_parent_id','rev_timestamp','blocked'],inplace = True)
dataframe.columns = ['rev_id','userid','username','rev_minoredit','rev_deleted','rev_len','rev_date']
return dataframe
df_block1 = col_format(big_df)
df_block1.head()
# adding block date to dataframe and formatting date
def addblockdate(dataframe):
dataframe2 = pd.merge(dataframe,ipblocks,how="left",left_on=["username"],right_on=["ipb_address"])
# difference between revision and block date (in days/weeks)
dataframe2['date'] = pd.to_datetime(dataframe2['date'], format = "%Y%m%d")
dataframe2['rev_date'] = pd.to_datetime(dataframe2['rev_date'], format = "%Y%m%d")
dataframe2['diff_days'] = (dataframe2['date']-dataframe2['rev_date']).dt.days
# only data before they were blocked
dataframe2 = dataframe2[dataframe2['diff_days']>=0]
return dataframe2
df_block2 = addblockdate(df_block1)
df_block2.shape
# subset days 0-14 (2-week window)
df_block2 = df_block2.loc[(df_block2['diff_days']>=0) & (df_block2['diff_days']<=14)
,['rev_id','userid','username','rev_minoredit','rev_deleted',
'rev_len','rev_date','diff_days']]
# delete column userid
df_block2.drop(columns = 'userid',inplace=True)
df_block2.shape
len(df_block2.username.unique())
df_block2.head()
len(df_block2.username.unique()) # some users have their max rev_id beyond the 2-week window...
# ...so the analysis only considers users who made a revision within 2 weeks of being blocked.
# exclude data from max rev_id onwards within that 2-week period
df_block2_1 = pd.merge(df_block2,df_user_maxrev_bl,how = 'left',on = 'username')
df_block2_1['revcount'] = df_block2_1['rev_id_y'] - df_block2_1['rev_id_x']
df_block2_1.head()
df_block2_2 = df_block2_1[df_block2_1['revcount'] >= 0]
df_block2_2.head(20)
#df_block2_2[df_block2_2['username']=='Freedom Fighter Jason Lin']
# filter rows < 0 (we only want activity up to the last edit made)
# for every user, remove rev id after maxrevid
df_block2_2.drop(columns = ['rev_id_y','revcount','bl'],inplace = True)
df_block2_2.columns = ['rev_id','username','rev_minoredit','rev_deleted','rev_len','rev_date','diff_days']
len(df_block2_2['username'].unique())
df_block2_3 = df_block2_2[df_block2_2['username'].isin(usertest)]
len(df_block2_3['username'].unique())
```
#### Calculating active days for users over the 2-week period
```
def activedays(dataframe):
days_active = dataframe.groupby(['username', 'rev_date'],as_index=False).agg({'rev_id':"count"})
days_active = days_active.groupby(['username'],as_index=False).agg({'rev_date':"count"})
days_active = days_active.rename(columns={"rev_date": "active_days"})
return days_active
days_active = activedays(df_block2_3)
days_active.head()
```
#### Group by user and day - calculate stats over the 2-week period
```
def df_weekly(dataframe):
dataframe2 = dataframe.groupby(['username', 'diff_days'],as_index=False).agg(
{'rev_id':"count",'rev_minoredit':sum,'rev_deleted':sum,'rev_len':"mean"})
# adding active days before block
dataframe3 = pd.merge(dataframe2,days_active,how="left",left_on=["username"],right_on=["username"])
dataframe3.rev_len = dataframe3.rev_len.round()
# rename columns
dataframe3.columns = ['username','days','rev_count','rev_minorcount','rev_dltcount','rev_avglen','2wkactivedays']
dataframe3['blocked'] = 1
return dataframe3
df_block3 = df_weekly(df_block2_3)
df_block3.shape
df_block3.head()
#big_df[big_df['username']=='!rehtom']
# extract the columns that we don't need in the grouping
df_blockcols = df_block3.loc[:,['username','2wkactivedays','blocked']]
df_blockcols.drop_duplicates(inplace = True)
df_blockcols.head()
```
#### Pivoting the data
```
def df_pivot(dataframe):
dataframe1 = dataframe.drop(columns = ['2wkactivedays','blocked'])
dataframe2 = dataframe1.pivot(index='username', columns='days').swaplevel(0,1,axis=1)
dataframe2.reset_index(inplace=True)
dataframe2.columns = [f'{j}_{i}' for i, j in dataframe2.columns]
# adding active days,blocked
dataframe3 = pd.merge(dataframe2,df_blockcols,how="left",left_on=["_username"],right_on=["username"])
dataframe3.drop(columns = ['username'],inplace = True)
dataframe3 = dataframe3.rename(columns={'_username':'username'})
dataframe3 = dataframe3.fillna(0)
return dataframe3
df_data = df_pivot(df_block3)
df_data.shape
```
#### Normalizing revision, minor-edit and deleted-edit counts
```
df_data.columns[31:46]
def varnorm(dataframe):
# total revision count
dataframe['rev_count_total'] = dataframe.iloc[:, 1:16].sum(axis=1)
dataframe.iloc[:,1:16] = dataframe.iloc[:,1:16].div(dataframe.rev_count_total, axis=0) # normalize each revcount
# minor edit count
dataframe['minor_count_total'] = dataframe.iloc[:, 16:31].sum(axis=1)
dataframe['minor_count_norm'] = (dataframe['minor_count_total']/dataframe['rev_count_total']).round(4)
# delete edit count
dataframe['dlt_count_total'] = dataframe.iloc[:,31:46].sum(axis=1)
dataframe['dlt_count_norm'] = (dataframe['dlt_count_total']/dataframe['rev_count_total']).round(4)
# drop columns
dataframe.drop(columns = [ 'rev_minorcount_0',
'rev_minorcount_1', 'rev_minorcount_2', 'rev_minorcount_3',
'rev_minorcount_4', 'rev_minorcount_5', 'rev_minorcount_6',
'rev_minorcount_7', 'rev_minorcount_8', 'rev_minorcount_9',
'rev_minorcount_10', 'rev_minorcount_11', 'rev_minorcount_12',
'rev_minorcount_13', 'rev_minorcount_14', 'rev_dltcount_0',
'rev_dltcount_1', 'rev_dltcount_2', 'rev_dltcount_3', 'rev_dltcount_4',
'rev_dltcount_5', 'rev_dltcount_6', 'rev_dltcount_7', 'rev_dltcount_8',
'rev_dltcount_9', 'rev_dltcount_10', 'rev_dltcount_11',
'rev_dltcount_12', 'rev_dltcount_13', 'rev_dltcount_14',
'rev_count_total', 'minor_count_total', 'dlt_count_total'],inplace = True)
# add reguser column
dataframe['registered'] = np.where(dataframe['username'].isin(reguserlist),1,0)
return dataframe
df_data = varnorm(df_data)
df_data.shape
df_data.head()
df_data.iloc[0,:]
df_data.columns
# save file as .csv
header = ['username', 'rev_count_0', 'rev_count_1', 'rev_count_2', 'rev_count_3',
'rev_count_4', 'rev_count_5', 'rev_count_6', 'rev_count_7',
'rev_count_8', 'rev_count_9', 'rev_count_10', 'rev_count_11',
'rev_count_12', 'rev_count_13', 'rev_count_14', 'rev_avglen_0',
'rev_avglen_1', 'rev_avglen_2', 'rev_avglen_3', 'rev_avglen_4',
'rev_avglen_5', 'rev_avglen_6', 'rev_avglen_7', 'rev_avglen_8',
'rev_avglen_9', 'rev_avglen_10', 'rev_avglen_11', 'rev_avglen_12',
'rev_avglen_13', 'rev_avglen_14', '2wkactivedays', 'blocked',
'minor_count_norm', 'dlt_count_norm', 'registered']
df_data.to_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/train_block.txt', sep = '\t',encoding='utf-8',header = True,index=False)
```
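The `col_format` step above turns a 14-digit `YYYYMMDDHHMMSS` timestamp into a `YYYYMMDD` date by dividing by 1,000,000 and truncating; a quick check of that trick:

```python
import numpy as np
import pandas as pd

# toy rev_timestamp values in YYYYMMDDHHMMSS form
ts = pd.Series([20190223123456, 20181231235959], dtype=np.int64)

# dropping the HHMMSS part: 20190223123456 // 1_000_000 -> 20190223
rev_date = (ts / 1_000_000).astype(np.int64)
print(rev_date.tolist())  # [20190223, 20181231]

# the integer date can then be parsed with an explicit format string
parsed = pd.to_datetime(rev_date, format='%Y%m%d')
print(parsed.dt.year.tolist())  # [2019, 2018]
```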
#### Aggregate activity data for nonbl users
```
big_df = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/nontext_nonblock.txt', sep = '\t')
big_df.shape
# only keep those non-blocked users that are in the webscraped list
big_df = big_df[big_df['rev_user_text'].isin(userlist_nonblk)]
len(big_df['rev_user_text'].unique())
# get list of registered users
reguserlist = big_df['rev_user_text'][big_df['rev_user']!=0.0]
# some formatting
df_block1 = col_format(big_df)
df_block1.head()
df_block1.shape
#Extracting max date for each user
no_ipb = df_block1.groupby(['username'],as_index=False).agg({'rev_date':max})
no_ipb.columns = ['username2','maxdate']
# adding max rev date to dataframe and formatting date
df_block2 = pd.merge(df_block1,no_ipb,how="left",left_on=["username"],right_on=["username2"])
# difference between revision and max rev date (in days/weeks)
df_block2['rev_date'] = pd.to_datetime(df_block2['rev_date'], format = "%Y%m%d")
df_block2['maxdate'] = pd.to_datetime(df_block2['maxdate'], format = "%Y%m%d")
df_block2['diff_days'] = (df_block2['maxdate']-df_block2['rev_date']).dt.days
df_block2.dropna(subset=['username'],inplace=True)
df_block2.shape
# subset days 0-14 (2-week window)
df_block2 = df_block2.loc[(df_block2['diff_days']>=0) & (df_block2['diff_days']<=14)
,['rev_id','userid','username','rev_minoredit','rev_deleted',
'rev_len','rev_date','diff_days']]
# delete column userid
df_block2.drop(columns = 'userid',inplace=True)
df_block2.head()
# exclude data from max rev_id onwards within that 2-week period
# only keep those users that are there in list
#len(df_block2[df_block2.username.isin(userlist)].username.unique())
df_block2_1 = pd.merge(df_block2,df_user_maxrev_bl,how = 'left',on = 'username')
df_block2_1['revcount'] = df_block2_1['rev_id_y'] - df_block2_1['rev_id_x']
#df_block2_1 = df_block2_1[df_block2_1['revcount']==0]
df_block2_1.head(20)
df_block2_2 = df_block2_1[df_block2_1['revcount'] >= 0] # greater than excludes that particular max revid and everything after that
df_block2_2.head(20)
# filter rows <= 0 (removes the max rev_id and any revision made after the block date)
# for every user, remove rev id greater equal to maxrevid
df_block2_2.drop(columns = ['rev_id_y','revcount','bl'],inplace = True)
df_block2_2.columns = ['rev_id','username','rev_minoredit','rev_deleted','rev_len','rev_date','diff_days']
len(df_block2_2['username'].unique()) # 6509 unique users
df_test = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/test_nonblock.txt',sep = '\t')
usertest = df_test['username']
usertest.shape
df_block2_3 = df_block2_2[df_block2_2['username'].isin(usertest)]
len(df_block2_3['username'].unique())
df_block2_3.head(10)
# get active days for users
days_active = activedays(df_block2_3)
days_active.head()
df_block3 = df_weekly(df_block2_3)
df_block3.shape
# extract the columns that we don't need in the grouping
df_block3['blocked'] = 0
df_blockcols = df_block3.loc[:,['username','2wkactivedays','blocked']]
df_blockcols.drop_duplicates(inplace = True)
df_blockcols.head()
df_data = df_pivot(df_block3)
df_data.shape
len(df_data['username'].unique())
df_data.sample(5)
df_data.columns
df_data = varnorm(df_data)
df_data.shape
df_data.columns
# save file as .csv
header = ['username', 'rev_count_0', 'rev_count_1', 'rev_count_2', 'rev_count_3',
'rev_count_4', 'rev_count_5', 'rev_count_6', 'rev_count_7',
'rev_count_8', 'rev_count_9', 'rev_count_10', 'rev_count_11',
'rev_count_12', 'rev_count_13', 'rev_count_14', 'rev_avglen_0',
'rev_avglen_1', 'rev_avglen_2', 'rev_avglen_3', 'rev_avglen_4',
'rev_avglen_5', 'rev_avglen_6', 'rev_avglen_7', 'rev_avglen_8',
'rev_avglen_9', 'rev_avglen_10', 'rev_avglen_11', 'rev_avglen_12',
'rev_avglen_13', 'rev_avglen_14', '2wkactivedays', 'blocked',
'minor_count_norm', 'dlt_count_norm', 'registered']
df_data.to_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/train_nonblock.txt', sep = '\t',encoding='utf-8',header = True,index=False)
```
#### Combine bl + nonbl activity corpus with abuse + ores data
```
#read in blocked users data
df_block = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/train_block.txt', sep = '\t')
df_block.shape
#read in non-blocked users data
df_nonblock = pd.read_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/train_nonblock.txt', sep = '\t')
df_nonblock.shape
df_act = pd.concat([df_block,df_nonblock])
df_act.shape
df_act.tail()
df_act['blocked'].value_counts()
df_abuse_ores.head()
df_abuse_ores.columns
df_abuse_ores_act = pd.merge (df_act,df_abuse_ores,how="inner",on=["username"])
df_abuse_ores_act.shape
df_abuse_ores_act.head()
df_abuse_ores_act.iloc[0,:]
df_abuse_ores_act.columns
# save file as .csv
header = ['username', 'rev_count_0', 'rev_count_1', 'rev_count_2', 'rev_count_3',
'rev_count_4', 'rev_count_5', 'rev_count_6', 'rev_count_7',
'rev_count_8', 'rev_count_9', 'rev_count_10', 'rev_count_11',
'rev_count_12', 'rev_count_13', 'rev_count_14', 'rev_avglen_0',
'rev_avglen_1', 'rev_avglen_2', 'rev_avglen_3', 'rev_avglen_4',
'rev_avglen_5', 'rev_avglen_6', 'rev_avglen_7', 'rev_avglen_8',
'rev_avglen_9', 'rev_avglen_10', 'rev_avglen_11', 'rev_avglen_12',
'rev_avglen_13', 'rev_avglen_14', '2wkactivedays', 'blocked',
'minor_count_norm', 'dlt_count_norm', 'registered', 'abuse_score_1',
'abuse_score_2', 'abuse_score_3', 'abuse_score_4', 'abuse_score_5',
'abuse_score_6', 'abuse_score_7', 'abuse_score_8', 'abuse_score_9',
'abuse_score_10', 'abuse_score_11', 'abuse_score_12', 'abuse_score_13',
'abuse_score_14', 'damage_score_1', 'damage_score_2', 'damage_score_3',
'damage_score_4', 'damage_score_5', 'damage_score_6', 'damage_score_7',
'damage_score_8', 'damage_score_9', 'damage_score_10',
'damage_score_11', 'damage_score_12', 'damage_score_13',
'damage_score_14', 'goodfaith_score_1', 'goodfaith_score_2',
'goodfaith_score_3', 'goodfaith_score_4', 'goodfaith_score_5',
'goodfaith_score_6', 'goodfaith_score_7', 'goodfaith_score_8',
'goodfaith_score_9', 'goodfaith_score_10', 'goodfaith_score_11',
'goodfaith_score_12', 'goodfaith_score_13', 'goodfaith_score_14']
df_abuse_ores_act.to_csv('/home/ec2-user/SageMaker/bucket/wiki_trust/revisions_data/cr4zy_data/train_abuse_ores_act.txt', sep = '\t',encoding='utf-8',header = True,index=False)
```
# What's new in the Forecastwrapper
- Solar Irradiance on a tilted plane
- Wind on an oriented building face
- No more "include this", "include that". Everything is included. (I originally implemented these flags to speed some things up (imperceptibly), but they complicate the code so much that they are not worth keeping)
- Daytime aggregates have been deprecated (we don't need them anymore since we have irradiance from Dark Sky, but if anyone insists, I can perhaps re-implement them)
- No more special timezone stuff, you get the data in a timezone-aware format, localized to the location of the request. If you want another timezone, use `tz_convert`
# Demo of the forecast.io wrapper to get past and future weather data
Important: you need to register for an API key here: https://developer.forecast.io/. Put the key you obtain in the `opengrid.cfg` file as follows:

```
[Forecast.io]
apikey: your_key
```
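A config file in that shape can be read with the standard library's `configparser` (a sketch; the section and option names follow the snippet above, and opengrid may load the file differently internally):

```python
import configparser

# contents of an opengrid.cfg with the section shown above
cfg_text = """
[Forecast.io]
apikey: your_key
"""

config = configparser.ConfigParser()
config.read_string(cfg_text)

# configparser accepts both 'key: value' and 'key = value' delimiters
apikey = config.get('Forecast.io', 'apikey')
print(apikey)  # your_key
```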
```
import os
import sys
import inspect
import pandas as pd
import charts
```
## Import API wrapper module
```
from opengrid_dev.library import forecastwrapper
```
# Get weather data in daily and hourly resolution
To get started, create a Weather object for a certain location and a period
```
start = pd.Timestamp('20150813')
end = pd.Timestamp('20150816')
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
```
You can use the methods `days()` and `hours()` to get a dataframe in daily or hourly resolution
```
Weather_Ukkel.days()
Weather_Ukkel.hours().info()
```
### Degree Days
Daily resolution has the option of adding degree days.
By default, the temperature equivalent and heating degree days with a base temperature of 16.5°C are added.
Heating degree days are calculated as follows:
$$heatingDegreeDays = max(0 , baseTemp - (0.6 * T_{today} + 0.3 * T_{today-1} + 0.1 * T_{today-2}) )$$
Cooling degree days are calculated in an analog way:
$$coolingDegreeDays = max(0, 0.6 * T_{today} + 0.3 * T_{today-1} + 0.1 * T_{today-2} - baseTemp )$$
Add degree days by supplying `heating_base_temperatures` and/or `cooling_base_temperatures` as a list (you can add multiple base temperatures, or just a list of 1 element)
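The weighted-temperature formulas above can be sketched in plain pandas (an illustration of the computation, not the forecastwrapper internals):

```python
import pandas as pd

# daily mean temperatures in degrees C, oldest first
temps = pd.Series([4.0, 6.0, 10.0],
                  index=pd.date_range('2015-08-13', periods=3))

base_temp = 16.5

# temperature equivalent: 0.6*T_today + 0.3*T_yesterday + 0.1*T_day_before
t_equivalent = 0.6 * temps + 0.3 * temps.shift(1) + 0.1 * temps.shift(2)

# heating degree days: max(0, base_temp - t_equivalent)
hdd = (base_temp - t_equivalent).clip(lower=0)

# cooling degree days: max(0, t_equivalent - base_temp)
cdd = (t_equivalent - base_temp).clip(lower=0)

print(t_equivalent.iloc[-1])  # 0.6*10 + 0.3*6 + 0.1*4 = 8.2
print(hdd.iloc[-1])           # 16.5 - 8.2 = 8.3
```

The first two days are `NaN` because the formula needs two days of history.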
#### Get some more degree days
```
Weather_Ukkel.days(heating_base_temperatures = [15,18],
cooling_base_temperatures = [18,24]).filter(like='DegreeDays')
Weather_Ukkel.days()
```
# Hourly resolution example
The location can also be given as coordinates
```
start = pd.Timestamp('20150916')
end = pd.Timestamp('20150918')
Weather_Brussel = forecastwrapper.Weather(location=[50.8503396, 4.3517103], start=start, end=end)
Weather_Boutersem = forecastwrapper.Weather(location='Kapelstraat 1, 3370 Boutersem', start=start, end=end)
df_combined = pd.merge(Weather_Brussel.hours(), Weather_Boutersem.hours(), suffixes=('_Brussel', '_Boutersem'),
left_index=True, right_index=True)
charts.plot(df_combined.filter(like='cloud'), stock=True, show='inline')
```
## Built-In Caching
Caching is turned on by default, so the first time you fetch the dataframes it takes a long time...
```
start = pd.Timestamp('20170131', tz='Europe/Brussels')
end = pd.Timestamp('20170201', tz='Europe/Brussels')
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
Weather_Ukkel.days().head(1)
```
... but now try that again and it goes very fast
```
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
Weather_Ukkel.days().head(1)
```
You can turn off this behaviour by setting the `cache` flag to `False`:
```
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end, cache=False)
```
## Solar Irradiance!
Dark Sky has added Solar Irradiance data as a beta.
Note:
- The values are calculated, not measured. Dark Sky uses the position of the sun in combination with cloud cover.
- Western Europe is not in Dark Sky's "primary region", therefore the data is not super-accurate.
- Since it is a beta, the algorithms and therefore the values may change
- I (JrtPec) have done a qualitative analysis that compared these values with those measured by KNMI (Netherlands). The differences were significant (27% lower). I have notified Dark Sky and they will investigate and possibly update their algorithms.
- You need to delete your cached files in order to include these new values (everything will have to be re-downloaded)
- If Dark Sky were to update their values, the cache needs to be deleted again.
```
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
```
### Hourly data
```
Weather_Ukkel.hours()[[
'GlobalHorizontalIrradiance',
'DirectNormalIrradiance',
'DiffuseHorizontalIrradiance',
'ExtraTerrestrialRadiation',
'SolarAltitude',
'SolarAzimuth']].dropna().head()
```
- Global Horizontal Irradiance is the amount of Solar Irradiance that shines on a horizontal surface, direct and diffuse, in Wh/m<sup>2</sup>. It is calculated by transforming the Direct Normal Irradiance (DNI) to the horizontal plane and adding the Diffuse Horizontal Irradiance (DHI):
$$GHI = DNI * cos(90° - Altitude) + DHI$$
- The GHI is what you would use to benchmark PV-panels
- Direct Normal Irradiance is the amount of solar irradiance that shines directly on a plane tilted towards the sun. In Wh/m<sup>2</sup>.
- Diffuse Horizontal Irradiance is the amount of solar irradiance that is scattered in the atmosphere and by clouds. In Wh/m<sup>2</sup>.
- Extra-Terrestrial Radiation is the GHI a point would receive if there was no atmosphere.
- Altitude of the Sun is measured in degrees above the horizon.
- Azimuth is the direction of the Sun in degrees, measured from the true north going clockwise.
At night, all values will be `NaN`
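The GHI relation above can be checked numerically (a sketch with made-up irradiance values):

```python
import math

# made-up hourly values
dni = 600.0          # Direct Normal Irradiance, Wh/m^2
dhi = 100.0          # Diffuse Horizontal Irradiance, Wh/m^2
altitude_deg = 30.0  # solar altitude above the horizon, degrees

# GHI = DNI * cos(90 deg - altitude) + DHI; note cos(90 - alt) == sin(alt)
zenith_rad = math.radians(90.0 - altitude_deg)
ghi = dni * math.cos(zenith_rad) + dhi

print(round(ghi, 1))  # 600 * cos(60 deg) + 100 = 400.0
```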
### Daily data
The daily sum of the GHI is included in the `days()` dataframe. Values are in Wh/m<sup>2</sup>
If you need other daily aggregates, give me a shout!
```
Weather_Ukkel.days()
```
## Add Global Irradiance on a tilted surface!
Create a list with all the different irradiances you want
A surface is specified by the orientation and tilt
- Orientation in degrees from the north: 0 = North, 90 = East, 180 = South, 270 = West
- Tilt in degrees from the horizontal plane: 0 = Horizontal, 90 = Vertical
```
# Let's get the vertical faces of a house
irradiances=[
(0, 90), # north vertical
(90, 90), # east vertical
(180, 90), # south vertical
(270, 90), # west vertical
]
Weather_Ukkel.hours(irradiances=irradiances).filter(like='GlobalIrradiance').dropna().head()
```
The names of the columns reflect the orientation and the tilt
```
Weather_Ukkel.days(irradiances=irradiances).filter(like='GlobalIrradiance')
```
# Wind on an oriented building face
The hourly wind speed and bearing are projected onto an oriented building face.
We call this the windComponent for a given orientation.
This value is also squared and called windComponentSquared. It can be equated with the force or pressure of the wind on a static surface, like a building face.
The value is also cubed and called windComponentCubed. This can be correlated with the power output of a wind turbine.
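One plausible way to compute such a projection — shown only as an illustration, not necessarily the library's exact formula — is the wind speed times the cosine of the angle between the wind bearing and the face orientation, floored at zero:

```python
import math

def wind_component(speed, bearing_deg, orientation_deg):
    """Project a wind speed onto a building face (illustrative sketch).

    Wind blowing away from the face contributes nothing, hence the floor at 0.
    """
    angle = math.radians(bearing_deg - orientation_deg)
    return max(0.0, speed * math.cos(angle))

speed = 10.0     # m/s
bearing = 180.0  # wind direction in degrees from north

component = wind_component(speed, bearing, 180)  # face aligned with the wind
print(component)       # 10.0
print(component ** 2)  # "force" proxy: 100.0
print(component ** 3)  # "power" proxy: 1000.0
print(wind_component(speed, bearing, 0))  # opposite face: 0.0
```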
First, define some orientations you want the wind calculated for. Orientation in degrees starting from the north and going clockwise
```
orientations = [0, 90, 180, 270]
Weather_Ukkel.hours(wind_orients=orientations).filter(like='wind').head()
Weather_Ukkel.days(wind_orients=orientations).filter(like='wind').head()
```
# Build a Reproducible Workflow From Scratch (in about an hour)
## a.k.a. 2017 NIH Hour of Code
---
## R. Burke Squires
_Contractor with [Medical Sciences and Computing (MSC)](https://www.mscweb.com/) - [Positions available](https://careers-mscweb.icims.com/jobs/search?hashed=-435621309)_
### NIAID Bioinformatics and Computational Biosciences Branch ([BCBB](https://www.niaid.nih.gov/research/bcbb-services))
- Collaborative consulting with NIAID intramural scientists, and others (when possible)
- Develop internationally used bioinformatics web-based tools such as:
- [Nephele - AWS microbiome analysis portal](https://nephele.niaid.nih.gov/)
- [3D Print Exchange](https://3dprint.nih.gov/)
- [NIAID Bioinformatics Portal](https://bioinformatics.niaid.nih.gov)
---
### [Project Jupyter](http://jupyter.org/)

__Note:__ I am using the Damian Avila's [RISE notebook extension](https://github.com/damianavila/RISE) to present this notebook in slide format.
[Source](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/What%20is%20the%20Jupyter%20Notebook.html)
# What is the Jupyter Notebook?
"The Jupyter Notebook is an __interactive computing environment__ that enables users to author notebook documents that include:
- Live code
- Interactive widgets
- Plots
- Narrative text
- Equations
- Images
- Video
These documents provide a complete and __self-contained record of a computation__ that can be converted to various formats and shared with others using email, Dropbox, version control systems (like git/GitHub) or [nbviewer.jupyter.org](http://nbviewer.jupyter.org)."
## Components
"The Jupyter Notebook combines three components:
- The __notebook web application__: An interactive web application for writing and running code interactively and authoring notebook documents.
- __Kernels__: Separate processes started by the notebook web application that run users’ code in a given language and return output back to the notebook web application. The kernel also handles things like computations for interactive widgets, tab completion and introspection.
- __Notebook documents__: Self-contained documents that contain a representation of all content visible in the notebook web application, including inputs and outputs of the computations, narrative text, equations, images, and rich media representations of objects. Each notebook document has its own kernel."
## Notebook web application
"The notebook web application enables users to:
- __Edit code in the browser__, with automatic syntax highlighting, indentation, and tab completion/introspection.
- __Run code from the browser__, with the results of computations attached to the code which generated them.
- See the results of computations with __rich media representations__, such as HTML, LaTeX, PNG, SVG, PDF, etc.
- Create and use __interactive JavaScript widgets__, which bind interactive user interface controls and visualizations to reactive kernel side computations.
- Author __narrative text__ using the Markdown markup language.
- Include mathematical equations using __LaTeX syntax in Markdown__, which are rendered in-browser by MathJax."
## Kernels
"Through Jupyter’s kernel and messaging architecture, the Notebook allows code to be run in a range of different programming languages. For each notebook document that a user opens, the web application starts a kernel that runs the code for that notebook. Each kernel is capable of running code in a single programming language and there are kernels available in the following languages:
- Python (https://github.com/ipython/ipython)
- Julia (https://github.com/JuliaLang/IJulia.jl)
- R (https://github.com/IRkernel/IRkernel)
- Ruby (https://github.com/minrk/iruby)
- Haskell (https://github.com/gibiansky/IHaskell)
- Scala (https://github.com/Bridgewater/scala-notebook)
- node.js (https://gist.github.com/Carreau/4279371)
- Go (https://github.com/takluyver/igo)
The __default kernel (IPython) runs Python code__. The notebook provides a simple way for users to pick which of these kernels is used for a given notebook.
Each of these kernels communicate with the notebook web application and web browser using a JSON over ZeroMQ/WebSockets message protocol that is described here. Most users don’t need to know about these details, but it helps to understand that __“kernels run code.”__"
## Notebook documents
"Notebook documents contain the __inputs and outputs__ of an interactive session as well as __narrative text__ that accompanies the code but is not meant for execution. __Rich output__ generated by running code, including HTML, images, video, and plots, is embedded in the notebook, which makes it a complete and self-contained record of a computation.
When you run the notebook web application on your computer, notebook documents are just __files on your local filesystem with a ``.ipynb`` extension__. This allows you to use familiar workflows for organizing your notebooks into folders and sharing them with others.
Notebooks consist of a __linear sequence of cells__. There are four basic cell types:
- __Code cells__: Input and output of live code that is run in the kernel
- __Markdown cells__: Narrative text with embedded LaTeX equations
- __Heading cells__: 6 levels of hierarchical organization and formatting
- __Raw cells__: Unformatted text that is included, without modification, when notebooks are converted to different formats using nbconvert
Internally, notebook documents are [JSON](https://en.wikipedia.org/wiki/JSON) data with binary values [base64](http://en.wikipedia.org/wiki/Base64) encoded. This allows them to be read and manipulated programmatically by any programming language. Because JSON is a text format, notebook documents are version control friendly.
__Notebooks can be exported__ to different static formats including HTML, reStructeredText, LaTeX, PDF, and slide shows (reveal.js) using Jupyter’s `nbconvert` utility.
Furthermore, any notebook document available from a __public URL or on GitHub can be shared__ via [nbviewer](http://nbviewer.jupyter.org/). This service loads the notebook document from the URL and renders it as a static web page. The resulting web page may thus be shared with others without their needing to install the Jupyter Notebook."
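Because notebook documents are plain JSON, they can be inspected with nothing but the standard library (a sketch using a minimal hand-written nbformat-4 notebook):

```python
import json

# a minimal notebook document (nbformat 4) written by hand
nb_text = json.dumps({
    "nbformat": 4,
    "nbformat_minor": 2,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Title"]},
        {"cell_type": "code", "metadata": {}, "source": ["print('hi')"],
         "outputs": [], "execution_count": None},
    ],
})

# an .ipynb file on disk can be loaded the same way with json.load(open(path))
nb = json.loads(nb_text)
code_cells = [c for c in nb["cells"] if c["cell_type"] == "code"]
print(len(code_cells))  # 1
```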
## IPython
"IPython provides a rich architecture for interactive computing with:
- A powerful interactive shell.
- Support for __interactive data visualization__ and use of GUI toolkits.
- Easy to use, high performance tools for __parallel computing__.
- __Comprehensive object introspection__.
- __Extensible tab completion__, with support by default for completion of python variables and keywords, filenames and function keywords.
- __Extensible system of ‘magic’ commands for controlling the environment and performing many tasks related to IPython or the operating system__.
- A rich configuration system with easy switching between different setups.
- __Access to the system shell__ with user-extensible alias system."
Source: http://ipython.readthedocs.io/en/stable/
---
## Installing Jupyter Notebook
### Anaconda distribution
"Anaconda is a freemium open source distribution of the Python and R programming languages for large-scale data processing, predictive analytics, and scientific computing, that aims to simplify package management and deployment.
Package versions are managed by the package management system conda."
"Easily install 1,000+ data science packages and manage your packages, dependencies and environments—all with the single click of a button"
Source: https://www.anaconda.com/distribution/

### Conda
Package, dependency and environment management for any language—Python, R, Ruby, Lua, Scala, Java, JavaScript, C/ C++, FORTRAN
Conda is an open source package management system and environment management system that runs on Windows, macOS and Linux. Conda quickly installs, runs and updates packages and their dependencies. Conda easily creates, saves, loads and switches between environments on your local computer. It was created for Python programs, but it can package and distribute software for any language.
### Bioconda...
---
__Requirements:__
Participants who wish to follow along should have access to a computer with:
- a Linux shell;
  - Mac and Linux computers come with this installed
  - Windows users may install git-bash, part of git for windows (https://git-for-windows.github.io)
- A recent installation of the Anaconda python distribution for python 3.6
## Unable to install Anaconda? Try Jupyter in the Cloud...
__Note__: You can try out Jupyter here but will not be able to run the workflow, as it requires additional setup.
Go to https://try.jupyter.org. No installation is needed.
Want to try using these notebooks? Try MyBinder: https://mybinder.org/
You can also run Juypter notebooks in the cloud by going to [Google Colaboratory](https://colab.research.google.com)
__Additional Resources__:
- [Jupyter notebooks](https://github.com/ucsd-ccbb/jupyter-genomics)
- [biopython bioinformatics notebooks](https://github.com/tiagoantao/biopython-notebook)
# Rules Example
The Rules class is used to define a set of rules using one of two representations. Once defined, we can switch between the different representations.
A rule set can be defined using one of the two following representations:
- Dictionary representation - here each joining condition (defined using the key `condition`) and each rule condition (defined using the keys `feature`, `operator` and `value`) of each rule are defined in a dictionary.
- String representation - here each rule is defined using Pandas syntax, stored as a string. Python's built-in `eval()` function can be used to evaluate a rule in this representation on a dataset.
We can also convert either of the above representations to the lambda expression format. This format allows different values to be injected into the rule string, which can then be evaluated on a dataset. This is very useful when we optimise existing rules on a dataset.
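The string representation can be illustrated directly: a rule written in Pandas syntax is just a boolean expression over `X`, so `eval()` returns a boolean mask (a sketch on a tiny frame, independent of the Iguanas API):

```python
import pandas as pd

X = pd.DataFrame({'num_items': [1, 3, 1], 'ml_cc_v0': [0.1, 0.2, 0.9]})

# a rule in the string representation, using Pandas syntax
rule_string = "(X['num_items']==1)&(X['ml_cc_v0']<0.315)"

# eval() runs the expression against the X in scope
mask = eval(rule_string)
print(mask.tolist())  # [True, False, False]
```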
----
## Import packages
```
from iguanas.rules import Rules
import pandas as pd
import numpy as np
```
## Create dummy dataset
```
np.random.seed(0)
X = pd.DataFrame(
{
'payer_id_sum_approved_txn_amt_per_paypalid_1day': np.random.uniform(0, 1000, 1000),
'payer_id_sum_approved_txn_amt_per_paypalid_7day': np.random.uniform(0, 7000, 1000),
'payer_id_sum_approved_txn_amt_per_paypalid_30day': np.random.uniform(0, 30000, 1000),
'num_items': np.random.randint(0, 10, 1000),
'ml_cc_v0': np.random.uniform(0, 1, 1000),
'method_clean': ['checkout', 'login', 'bad_login', 'bad_checkout', 'fraud_login', 'fraud_checkout', 'signup', 'bad_signup', 'fraud_signup', np.nan] * 100,
'ip_address': ['192.168.0.1', np.nan] * 500,
'ip_isp': ['BT', np.nan, '', ''] * 250
}
)
y = pd.Series(np.random.randint(0, 2, 1000))
```
----
## Instantiate Rule class
### Using dictionary representation
As mentioned above, we can define a rule set using one of the two following representations - Dictionary or String.
Let's first define a set of rules using the dictionary representation:
```
rule_dicts = {
'Rule1': {
'condition': 'AND',
'rules': [{
'condition': 'OR',
'rules': [{
'field': 'payer_id_sum_approved_txn_amt_per_paypalid_1day',
'operator': 'greater_or_equal',
'value': 60.0
},
{
'field': 'payer_id_sum_approved_txn_amt_per_paypalid_7day',
'operator': 'greater',
'value': 120.0
},
{
'field': 'payer_id_sum_approved_txn_amt_per_paypalid_30day',
'operator': 'less_or_equal',
'value': 500.0
}
]},
{
'field': 'num_items',
'operator': 'equal',
'value': 1.0
}
]},
'Rule2': {
'condition': 'AND',
'rules': [{
'field': 'ml_cc_v0',
'operator': 'less',
'value': 0.315
},
{
'condition': 'OR',
'rules': [{
'field': 'method_clean',
'operator': 'equal',
'value': 'checkout'
},
{
'field': 'method_clean',
'operator': 'begins_with',
'value': 'checkout'
},
{
'field': 'method_clean',
'operator': 'ends_with',
'value': 'checkout'
},
{
'field': 'method_clean',
'operator': 'contains',
'value': 'checkout'
},
{
'field': 'ip_address',
'operator': 'is_not_null',
'value': None
},
{
'field': 'ip_isp',
'operator': 'is_not_empty',
'value': None
}]
}]
}
}
```
Now that we have defined our rule set using the dictionary representation, we can instantiate the `Rules` class.
```
rules = Rules(rule_dicts=rule_dicts)
```
Once the class is instantiated, we can switch to the string representation using the `as_rule_strings` method:
```
rule_strings = rules.as_rule_strings(as_numpy=False)
```
#### Outputs
The `as_rule_strings` method returns a dictionary of the set of rules defined using the standard Iguanas string format (values) and their names (keys). It also saves this dictionary as the class attribute `rule_strings`.
```
rule_strings
```
### Using string representation
Now let's instead define the same set of rules using the string representation:
```
rule_strings = {
'Rule1': "((X['payer_id_sum_approved_txn_amt_per_paypalid_1day']>=60.0)|(X['payer_id_sum_approved_txn_amt_per_paypalid_7day']>120.0)|(X['payer_id_sum_approved_txn_amt_per_paypalid_30day']<=500.0))&(X['num_items']==1.0)",
'Rule2': "(X['ml_cc_v0']<0.315)&((X['method_clean']=='checkout')|(X['method_clean'].str.startswith('checkout', na=False))|(X['method_clean'].str.endswith('checkout', na=False))|(X['method_clean'].str.contains('checkout', na=False))|(~X['ip_address'].isna())|(X['ip_isp'].fillna('')!=''))"
}
```
Now that we have defined our rule set using the string representation, we can instantiate the `Rules` class.
```
rules = Rules(rule_strings=rule_strings)
```
Once the class is instantiated, we can switch to the dictionary representation using the `as_rule_dicts` method:
```
rule_dicts = rules.as_rule_dicts()
```
#### Outputs
The `as_rule_dicts` method returns a dictionary of the set of rules defined using the standard Iguanas dictionary format (values) and their names (keys). It also saves this dictionary as the class attribute `rule_dicts`.
```
rule_dicts
```
----
## Converting to lambda expressions
Once a rule set has been defined using one of the two representations, it can be converted to the lambda expression format. This format allows different values to be injected into the rule string, which can then be evaluated on a dataset. This is very useful for when we optimise existing rules on a dataset.
We can use the above instantiated Rules class along with the `as_rule_lambdas` method to convert the rules to the lambda expression format. The lambda expressions can be created such that they receive either keyword arguments as inputs, or positional arguments as inputs.
### with_kwargs = True
Let's first convert the rule set to lambda expressions that receive keyword arguments as inputs:
```
rule_lambdas = rules.as_rule_lambdas(
as_numpy=False,
with_kwargs=True
)
```
#### Outputs
The `as_rule_lambdas` method returns a dictionary of the set of rules defined using the standard Iguanas lambda expression format (values) and their names (keys). It also saves this dictionary as the class attribute `rule_lambdas`.
Three useful attributes created by running the `as_rule_lambdas` method are:
- `lambda_kwargs` (dict): For each rule (keys), a dictionary containing the features used in the rule (keys) and the current values (values). Only populates when `as_rule_lambdas` is used with the keyword argument `with_kwargs`=True.
- `lambda_args` (dict): For each rule (keys), a list containing the current values used in the rule. Only populates when `as_rule_lambdas` is used with the keyword argument `with_kwargs`=False.
- `rule_features` (dict): For each rule (keys), a list containing the features used in the rule. Only populates when `as_rule_lambdas` is used with the keyword argument `with_kwargs`=False.
```
rule_lambdas
rules.lambda_kwargs
```
Across both rules, we have the following features:
* payer_id_sum_approved_txn_amt_per_paypalid_1day
* payer_id_sum_approved_txn_amt_per_paypalid_7day
* payer_id_sum_approved_txn_amt_per_paypalid_30day
* num_items
* ml_cc_v0
* method_clean
* ip_address
* ip_isp
**A few points to note:**
- When the same feature is used more than once in a given rule, a suffix with the format '%\<n\>' will be added, where *n* is a counter used to distinguish the conditions.
- The values of some of these features cannot be changed, since the conditions related to these features do not have values - 'ip_address' is checked for nulls and 'ip_isp' is checked for empty cells. These are omitted from the `lambda_kwargs` class attribute (as seen above).
So we can construct a dictionary for each rule, for the features whose values can be changed. The keys of each dictionary are the features, with the values being the new values that we want to try in the rule:
```
new_values = {
'Rule1': {
'payer_id_sum_approved_txn_amt_per_paypalid_1day': 100.0,
'payer_id_sum_approved_txn_amt_per_paypalid_7day': 200.0,
'payer_id_sum_approved_txn_amt_per_paypalid_30day': 600.0,
'num_items': 2.0
},
'Rule2': {
'ml_cc_v0': 0.5,
'method_clean': 'login',
'method_clean%0': 'bad_',
'method_clean%1': '_bad',
'method_clean%2': 'fraud'
}
}
```
Then we can loop through the rules, inject the new values into the lambda expression and evaluate it (with the new values) on the dataset:
```
X_rules = {}
for rule_name, rule_lambda in rules.rule_lambdas.items():
new_values_for_rule = new_values[rule_name]
X_rules[rule_name] = eval(rule_lambda(**new_values_for_rule))
X_rules = pd.DataFrame(X_rules, index=X.index)
X_rules.sum()
```
We can also use the `lambda_kwargs` class attribute to inject the original values into the lambda expression and evaluate it on the dataset:
```
X_rules = {}
for rule_name, rule_lambda in rules.rule_lambdas.items():
X_rules[rule_name] = eval(rule_lambda(**rules.lambda_kwargs[rule_name]))
X_rules = pd.DataFrame(X_rules, index=X.index)
X_rules.sum()
```
### with_kwargs = False
Now let's convert the rule set to lambda expressions that receive positional arguments as inputs:
```
rule_lambdas = rules.as_rule_lambdas(
as_numpy=False,
with_kwargs=False
)
```
#### Outputs
The `as_rule_lambdas` method returns a dictionary of the set of rules defined using the standard Iguanas lambda expression format (values) and their names (keys). It also saves this dictionary as the class attribute `rule_lambdas`.
Three useful attributes created by running the `as_rule_lambdas` method are:
- `lambda_kwargs` (dict): For each rule (keys), a dictionary containing the features used in the rule (keys) and the current values (values). Only populates when `as_rule_lambdas` is used with the keyword argument `with_kwargs`=True.
- `lambda_args` (dict): For each rule (keys), a list containing the current values used in the rule. Only populates when `as_rule_lambdas` is used with the keyword argument `with_kwargs`=False.
- `rule_features` (dict): For each rule (keys), a list containing the features used in the rule. Only populates when `as_rule_lambdas` is used with the keyword argument `with_kwargs`=False.
```
rule_lambdas
rules.lambda_args
rules.rule_features
```
Across both rules, we have the following features:
* payer_id_sum_approved_txn_amt_per_paypalid_1day
* payer_id_sum_approved_txn_amt_per_paypalid_7day
* payer_id_sum_approved_txn_amt_per_paypalid_30day
* num_items
* ml_cc_v0
* method_clean
* ip_address
* ip_isp
**Note:** the values of some of these features cannot be changed, since the conditions related to these features do not have values - 'ip_address' is checked for nulls and 'ip_isp' is checked for empty cells. These are omitted from the `lambda_args` class attribute (as seen above).
So we can construct a list for each rule, for the features whose values can be changed. The values of the list are new values that we want to try in the rules. We can use the `rule_features` class attribute to ensure we use the correct order:
```
new_values = {
'Rule1': [100.0, 200.0, 600.0, 2.0],
'Rule2': [0.5, 'login', 'bad_', '_bad', 'fraud']
}
```
Then we can loop through the rules, inject the new values into the lambda expression and evaluate it (with the new values) on the dataset:
```
X_rules = {}
for rule_name, rule_lambda in rules.rule_lambdas.items():
new_values_for_rule = new_values[rule_name]
X_rules[rule_name] = eval(rule_lambda(*new_values_for_rule))
X_rules = pd.DataFrame(X_rules, index=X.index)
X_rules.sum()
```
We can also use the `lambda_args` class attribute to inject the original values into the lambda expression and evaluate it on the dataset:
```
X_rules = {}
for rule_name, rule_lambda in rules.rule_lambdas.items():
X_rules[rule_name] = eval(rule_lambda(*rules.lambda_args[rule_name]))
X_rules = pd.DataFrame(X_rules, index=X.index)
X_rules.sum()
```
----
## Filtering rules
We can use the `filter_rules` method to filter a rule set by rule name. Let's say we define the following rule set, which consists of three rules:
```
rule_strings = {
'Rule1': "(X['payer_id_sum_approved_txn_amt_per_paypalid_1day']>=60.0)",
'Rule2': "(X['payer_id_sum_approved_txn_amt_per_paypalid_7day']>120.0)",
'Rule3': "(X['payer_id_sum_approved_txn_amt_per_paypalid_30day']<=500.0)"
}
rules = Rules(rule_strings=rule_strings)
rules.rule_strings
```
Now we can filter the rule set to include or exclude those rules stated:
```
rules.filter_rules(
include=['Rule1'],
exclude=None
)
```
### Outputs
The `filter_rules` method does not return a value; instead, it filters the rules stored within the class according to the `include` and `exclude` arguments:
```
rules.rule_strings
```
---
## Returning the features in each rule
We can use the `get_rule_features` method to return the unique set of features related to each rule. Let's say we define the following rule set, which consists of three rules:
```
rule_strings = {
'Rule1': "(X['payer_id_sum_approved_txn_amt_per_paypalid_1day']>=60.0)&(X['payer_id_sum_approved_txn_amt_per_paypalid_7day']>120.0)",
'Rule2': "(X['num_order_items']>20)|((X['payer_id_sum_approved_txn_amt_per_paypalid_7day']<100.0)&(X['num_order_items']>10))",
'Rule3': "(X['payer_id_sum_approved_txn_amt_per_paypalid_30day']<=500.0)"
}
rules = Rules(rule_strings=rule_strings)
```
Now we can return the unique set of features related to each rule:
```
rule_features = rules.get_rule_features()
```
### Outputs
The `get_rule_features` method returns a dictionary of the unique set of features (values) related to each rule (keys):
```
rule_features
```
---
## Applying rules to a dataset
Use the `transform` method to apply the rules to a dataset:
```
rule_strings = {
'Rule1': "(X['payer_id_sum_approved_txn_amt_per_paypalid_1day']>=60.0)&(X['payer_id_sum_approved_txn_amt_per_paypalid_7day']>120.0)",
'Rule2': "(X['num_items']>20)|((X['payer_id_sum_approved_txn_amt_per_paypalid_7day']<100.0)&(X['num_items']>10))",
'Rule3': "(X['payer_id_sum_approved_txn_amt_per_paypalid_30day']<=500.0)"
}
rules = Rules(rule_strings=rule_strings)
X_rules = rules.transform(X=X)
```
### Outputs
The `transform` method returns a dataframe giving the binary columns of the rules as applied to the given dataset:
```
X_rules.head()
```
---
```
%load_ext autoreload
import matplotlib.pyplot as plt
def plotit(x,y):
fig, ax = plt.subplots()
ax.plot(x,y, 'o')
plt.show()
```
Controller configuration
-----------------------
Overall configuration is held in a class, `Config`. A `Config` object holds the lists of Linux containers, Corsa switches, and simulated sites. The following is used to generate and save the configuration, and explains the various data types in use. Refer to coord.py for more details on each of the objects.
Switches, DTNs, etc. are Python objects. There are no accessor functions for their values; the attributes can be accessed directly:
```
print wash_sw.ip, wash_sw.vfc, wash_sw.ofport, wash_sw.rtt
print config.switches
print config.dtns
print config.sites
```
Once the config is written into a file, it can be retrieved later:
```
from coord import get_config as gc
new_config = gc(config_file="calibers-denv.config")
new_config.dtns[0].switch
```
Traffic Generator
----------------
We are currently testing the default generator in coord.py (one file per source at a time). The file size distribution is random, and the delivery delay (i.e. time past the minimum deadline) is exponentially distributed.
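For intuition, those two distributions can be sketched directly with NumPy (a hypothetical illustration, not the coord.py implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# File sizes drawn uniformly at random from the bucket list (KB)
buckets = [30 * 1024, 15 * 1024, 10 * 1024, 5 * 1024, 1 * 1024, 512, 128]
sizes = rng.choice(buckets, size=10)

# Extra delivery delay past the minimum deadline, exponentially distributed
extra_delay = rng.exponential(scale=2.0, size=10)  # seconds
```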
```
from coord import SingleFileGen
capacity = 500000 # 500 Mbps
epoch = 5 * 60 # 5 mins
buckets = [30*1024, 15*1024, 10*1024, 5*1024, 1*1024, 512, 128]
gen = SingleFileGen(dtns,capacity,epoch,buckets)
reqs = gen.generate_requests(iterations = 10, scale = 0.1, dst_dtn = scinet_dtn, min_bias=10)
```
The following shows the request for a given DTN
```
print wash_dtn.requests[0]
```
The first graph below shows the additional delay: x is the request index, and y is the percentage of the theoretical minimum transfer time (i.e. going at full line rate without congestion, plus padding) that is added for the deadline.
The second graph shows the file size distribution.
```
x=[]
y_delay=[]
y_size=[]
req_nb = 0
for req_epoch in reqs:
x.append(req_nb)
ys_delay = []
ys_size = []
for req in req_epoch:
ys_delay.append(req.delay_ratio)
ys_size.append(req.size)
y_delay.append(ys_delay)
y_size.append(ys_size)
req_nb += 1
plotit(x,y_delay)
plotit(x,y_size)
```
Generating requests and storing them into a file:
```
gen.save("scenario.data",reqs)
```
A saved scenario can be loaded as follows:
```
reqs = SingleFileGen.load("scenario.data", dtns)
dtns[0].requests[0]
%autoreload 2
from coord import Coordinator
coord = Coordinator(app_ip="192.168.120.119",name="caliber-slice",epoch_time=10,config_file="calibers-denv.config", scenario_file="scenario.data",max_rate_mbps=500)
coord.scheduler.debug = False
coord.debug = False
coord.start()
%%bash
curl -H "Content-Type: application/json" http://localhost:5000/api/stop -X POST
%%bash
curl -H "Content-Type: application/json" http://192.168.120.119:5000/api/configs/ -X GET
coord.config.dtns[0].current_request.completed = True
import requests
import pickle
def get_config():
get_url = "http://192.168.120.119:5000/api/config/"
try:
results = requests.get(get_url)
except requests.exceptions.RequestException:
return None
if results.status_code==200:
return pickle.loads(results.json())
else:
return None
c = get_config()
c.dtns
%%bash
curl -H "Content-Type: application/json" http://192.168.120.119:5000/api/config/ -X GET
nc = pickle.dumps(new_config)
import time
import datetime
import subprocess
import re
import json
from flask import Flask, request
from flask_restful import Resource, Api
app = Flask(__name__)
api = Api(app)
import socket
import fcntl
import struct
class FileTransfer(Resource):
def put(self, size):
print "got",size
print request.json
dest = request.json['dest']
print "got request: " + dest
subprocess.Popen(('globus-url-copy vb -fast -p 4 file:///storage/'+size+'.img ftp://'+dest).split())
time.sleep(.4) # Wait for the connection to establish
output = subprocess.check_output('ss -int'.split())
return output
api.add_resource(FileTransfer, '/start/<string:size>')
app.run(host='localhost')
%%bash
curl -H "Content-Type: application/json" -d "{'dest':'192.168.112.2:9002/data/test'}" http://localhost:5000/start/1024000000 -X PUT
```
# Restaurant Reviews
***Data Description***
> The data consists of 2 columns: Review and Liked
>
> Review: The restaurant reviews are in the Review column
>
> Liked: Good and bad reviews are denoted in the Liked column as 1 and 0, respectively
>
> 0 - Bad review
>
> 1 - Good review
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
```
# Adding Basic Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
```
# Loading the Data
```
df = pd.read_csv('/kaggle/input/restaurant-reviews/Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3)
df.head()
# Getting the shape of data
df.shape
```
* **Setting Parameters**
```
vocab_size = 500
embedding_dim = 16
max_length = 100
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_size = 900
```
***Separating the data columns into sentences and labels***
```
sentences = df['Review'].tolist()
labels = df['Liked'].tolist()
```
# Getting Training and Testing Data
```
training_sentences = sentences[0:training_size]
testing_sentences = sentences[training_size:]
training_labels = labels[0:training_size]
testing_labels = labels[training_size:]
```
# Setting Tokenizer And Padding data
```
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
training_sequences = tokenizer.texts_to_sequences(training_sentences)
training_padded = pad_sequences(training_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
```
***Converting data into arrays***
```
import numpy as np
training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
```
# Creating the Model
# Adding Layers
# Compiling the Model
```
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
# Getting Summary
model.summary()
```
# Fitting the Model
```
num_epochs = 50
history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=2)
```
# Plotting accuracy and loss Graph
```
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
```
* The 1st graph shows the increase in accuracy vs. val_accuracy
* The 2nd graph shows the decrease in loss vs. val_loss
***Decoding Sentences***
```
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_sentence(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
print(decode_sentence(training_padded[0]))
print(training_sentences[0])
print(labels[0])
```
# Prediction on Testing Data
```
for n in range(10):
print(testing_sentences[n],': ',testing_labels[n])
```
> As we can see here, the testing was all perfect!
> Bad reviews are marked as 0
> Good reviews are marked as 1
# Getting Prediction with Randomly Created Reviews
```
# Checking Predictions
sentence = ["Awesome Pizza", "I will come here everytime!!!", "Dont come here ever, Worst Food"]
sequences = tokenizer.texts_to_sequences(sentence)
padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
print(model.predict(padded))
```
* As we can see, the sentences I created randomly were predicted almost perfectly
* The **first 2 reviews** were **good**, so they got scores **almost equal** to **1**
* The **3rd review** was the **bad** one, so its score is **almost equal** to **0**
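To turn the sigmoid scores into hard labels, we can threshold them at 0.5 (a small sketch with hypothetical scores standing in for the output of `model.predict(padded)`):

```python
import numpy as np

# Hypothetical sigmoid outputs for the three sentences above
scores = np.array([0.93, 0.88, 0.07])

# Scores above 0.5 are read as good reviews (1), below as bad reviews (0)
predicted_labels = (scores > 0.5).astype(int)
print(predicted_labels)  # [1 1 0]
```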
# Shape descriptors based on neighborhood graphs
This notebook demonstrates how to determine shape descriptors of cells in case they cannot be segmented exactly but their centers can be detected.
```
import pyclesperanto_prototype as cle
print(cle.get_device())
```
We generate a label image of cells with given sizes in x and y and a size ratio of 1:1.5.
Assume this is the result of some cell segmentation algorithm.
```
cell_size_x = 10
cell_size_y = 15
# generate and show tissue
tissue_labels = cle.artificial_tissue_2d(width=100, height=100, delta_x=cell_size_x, delta_y=cell_size_y, random_sigma_x=1, random_sigma_y=1)
cle.imshow(tissue_labels, labels=True)
```
# Classical shape descriptors: minor and major axis
We can measure the minor and major axis of those cells using scikit-image
```
from skimage.measure import regionprops
import numpy as np
label_image = cle.pull_zyx(tissue_labels).astype(int)
stats = regionprops(label_image)
avg_minor_axis_length = np.mean([s.minor_axis_length for s in stats])
print("Average minor axis length", avg_minor_axis_length)
avg_major_axis_length = np.mean([s.major_axis_length for s in stats])
print("Average major axis length", avg_major_axis_length)
```
## Shape descriptors based on neighbor meshes
In some cases we can't segment the cells properly; we can only do spot detection and visualize the centers of the cells.
Assume this is the result of a cell detection algorithm.
```
centroids = cle.centroids_of_labels(tissue_labels)
spot_image = cle.create_like(tissue_labels)
cle.pointlist_to_labelled_spots(centroids, spot_image)
cle.imshow(spot_image)
```
From such an image of labelled spots, we can make a Voronoi diagram, where we can analyse which cells (expanded spots) are close to each other.
The result is an approximation of cell segmentation.
```
voronoi_diagram = cle.extend_labeling_via_voronoi(spot_image)
cle.imshow(voronoi_diagram, labels=True)
```
From such a pair of spot image and Voronoi diagram, we can determine two matrices: a touch matrix (also known as an adjacency-graph matrix) and a distance matrix.
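For intuition, a pairwise centroid distance matrix can also be computed directly with NumPy (a standalone sketch, independent of pyclesperanto):

```python
import numpy as np

# Hypothetical centroid coordinates (n x 2); entry [i, j] of the distance
# matrix is the Euclidean distance between centroid i and centroid j
centroids = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
distance_matrix = np.linalg.norm(
    centroids[:, None, :] - centroids[None, :, :], axis=-1
)
print(distance_matrix[0, 1])  # 5.0
```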
```
touch_matrix = cle.generate_touch_matrix(voronoi_diagram)
centroids = cle.labelled_spots_to_pointlist(spot_image) # this is substantially faster than centroids_of_labels
distance_matrix = cle.generate_distance_matrix(centroids, centroids)
cle.imshow(touch_matrix)
cle.imshow(distance_matrix)
```
From these two matrices, we can determine the minimum and maximum distance between centroids of touching objects (cells) in the Voronoi image. These are estimates of the minor and major axes of the segmented objects.
```
min_distance = cle.minimum_distance_of_touching_neighbors(distance_matrix, touch_matrix)
max_distance = cle.maximum_distance_of_touching_neighbors(distance_matrix, touch_matrix)
print("minimum distance of touching neighbors", cle.mean_of_all_pixels(min_distance))
print("maximum distance of touching neighbors", cle.mean_of_all_pixels(max_distance))
```
## Distance visualisation
Finally, let's visualize distances between neighbors in a colored mesh.
```
mesh = cle.draw_distance_mesh_between_touching_labels(voronoi_diagram)
cle.imshow(mesh)
```
```
import numpy as np
from Data_Savior_J import load_file
Xz = load_file("./classifier_data/walk1.data")
Xz = np.vstack((Xz,load_file("./classifier_data/walk1U.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk1D.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk2.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk2U.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk2D.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk3.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk3U.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk3D.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk4.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk4U.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk4D.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk5.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk5U.data")))
Xz = np.vstack((Xz,load_file("./classifier_data/walk5D.data")))
```
## Feature vector for classification:
$X_c = [a~| ~av~| ~aa~| ~l\_a~| ~l\_av~| ~l\_aa~| ~pos\_foot\_r~| ~pos\_foot\_l~| ~vz\_r~| ~vz\_l~| ~C]$
with the columns indexed $0$ through $10$ in the order listed above.
#### $a \rightarrow$ right knee angle; $av \rightarrow$ right knee angular velocity; $aa \rightarrow$ right knee angular acceleration;
#### $l\_a \rightarrow$ left knee angle; $l\_av \rightarrow$ left knee angular velocity; $l\_aa \rightarrow$ left knee angular acceleration;
#### $pos\_foot\_r \rightarrow$ right ankle position relative to the sacrum; $pos\_foot\_l \rightarrow$ left ankle position relative to the sacrum;
#### $vz\_r \rightarrow$ right trochanter velocity along the z axis; $vz\_l \rightarrow$ left trochanter velocity along the z axis;
#### $C \rightarrow$ classification index
## Classification index $C$:
#### $C = 0 \rightarrow$ normal gait;
#### $C = 1 \rightarrow$ stair ascent gait;
#### $C = 2 \rightarrow$ stair descent gait.
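The index-to-class mapping above can be written as a small dictionary (a sketch):

```python
# Mapping of the classification index C to the gait type it denotes
gait_classes = {0: 'normal gait', 1: 'stair ascent', 2: 'stair descent'}
print(gait_classes[2])  # stair descent
```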
```
import numpy as np
X = Xz[:,[0,1,2,3,4,5,6,7,8,9]]
yz = Xz[:,[10]]
y = np.array([])
for i in range(len(yz)):
y = np.hstack((y,yz[i]))
X.shape, y.shape
np.unique(y) # possíveis valores de y
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
random_state=10)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
print X_train_std.shape
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
print 'Processing time RBF'
%time rbf_svc = svm.SVC(kernel='rbf', gamma=0.2, C=6, decision_function_shape='ovr').fit(X_train_std, y_train)
print ''
print 'Processing time Polynomial'
#%time poly_svc = svm.SVC(kernel='poly', degree=4, coef0=6.0, C=0.5, decision_function_shape='ovr').fit(X_train_std, y_train)
%time poly_svc = svm.SVC(kernel='poly', degree=2, coef0=4.7, C=48.9, decision_function_shape='ovr').fit(X_train_std, y_train)
def run_svm(svc, X_test_std, y_test):
y_pred = svc.predict(X_test_std)
from sklearn.metrics import accuracy_score
if (svc==rbf_svc):
print ('SVM-RBF accuracy:---------->%.2f %%' % (accuracy_score(y_test, y_pred)*100))
elif(svc==poly_svc):
print ('SVM-Polynomial accuracy:--->%.2f %%' % (accuracy_score(y_test, y_pred)*100))
run_svm(rbf_svc, X_test_std, y_test)
run_svm(poly_svc, X_test_std, y_test)
from sklearn.grid_search import GridSearchCV
param_grid = [
{
'C' : [0.001, 0.01, 0.05, 0.1, 0.5, 1, 1.5, 3, 6, 8,10, 38.5, 38.56, 38.8, 39, 39.01, 39.1, 39.2, 48.5, 48.56, 48.8, 49, 49.01, 49.1, 49.2, 50, 70, 80,
100, 1000],
'gamma' : [1000, 100, 80, 50, 35, 10, 7, 5, 3, 2, 1.5, 1, 0.9, 0.7, 0.8, 0.61, 0.62, 0.63, 0.65, 0.6, 0.59, 0.58, 0.57, 0.56,
0.55, 0.5, 0.48, 0.45, 0.4, 0.35, 0.32, 0.25, 0.2, 0.1, 0.01, 0.001, 0.0001],
'kernel': ['rbf'],
'random_state' : [1,5,10,20,30,40,50,60,70,80,90,100,500,1000,10000]
},
]
clf = GridSearchCV(svm.SVC(C=1), param_grid, cv=15)
%time clf.fit(X_train_std, y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
for params, mean_score, scores in clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print()
%time
def frange(start, stop, step):
i = start
while i < stop:
yield i
i += step
g = []
for i in frange(0.001, 0.015, 0.0005):
g.append(i)
g
```
# Gaussian Naive Bayes Classifier
```
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
from sklearn.metrics import accuracy_score
print ('ClassifyNB accuracy:---------->%.2f %%' % (accuracy_score(y_test, pred)*100))
```
# Random Forest Classifier
```
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=500)
rfc = rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
from sklearn.metrics import accuracy_score
print ('RandomForest accuracy:-------->%.2f %%' % (accuracy_score(y_test, y_pred)*100))
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
tp = float(cm[0][0])/np.sum(cm[0])
tn = float(cm[1][1])/np.sum(cm[1])
print cm
print tp
print tn
from sklearn.metrics import confusion_matrix
import itertools
class_names = np.array(['normal', 'stair ascent', 'stair descent'])
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
cm = confusion_matrix(y_test, y_pred)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
cm
TP = float(cm[0][0])
FP = float(cm[1][0])
FN = float(cm[0][1])
TN = float(cm[1][1])
print(np.array([['TP', 'FN'], ['FP', 'TN']]))
cmm = np.array([[TP, FN], [FP, TN]])
print(cmm)
TP / (TP + FN)  # sensitivity (true positive rate)
TN / (FP + TN)  # specificity (true negative rate)
(TP + TN) / ((TP + FN) + (FP + TN))  # balanced accuracy, since the rows were normalized above
```
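The per-class rates in the cell above (which treats the confusion matrix as 2×2, with rows as true classes) can be wrapped in a small standalone helper. This is an illustrative sketch, not part of the original analysis; the matrix values below are made up:

```python
import numpy as np

def per_class_rates(cm):
    """Row-normalize a 2x2 confusion matrix (rows = true classes) and
    return (sensitivity, specificity)."""
    cm = np.asarray(cm, dtype=float)
    norm = cm / cm.sum(axis=1, keepdims=True)  # each row now sums to 1
    sensitivity = norm[0, 0]   # correct predictions among actual positives
    specificity = norm[1, 1]   # correct predictions among actual negatives
    return sensitivity, specificity

sens, spec = per_class_rates([[40, 10], [5, 45]])
print(sens, spec)  # 0.8 0.9
```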
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import os
import sys
# Install if in Google colab notebook
if 'google.colab' in sys.modules:
os.system('python -m pip install --upgrade --force-reinstall git+https://github.com/manzt/hg')
```
## Synced heatmaps
```
import hg
# Configure remote data sources (tilesets)
tileset1 = hg.remote(
uid="CQMd6V_cRw6iCI_-Unl3PQ",
server="https://higlass.io/api/v1/",
name="Rao et al. (2014) GM12878 MboI (allreps) 1kb",
)
tileset2 = hg.remote(
uid="QvdMEvccQuOxKTEjrVL3wA",
server="https://higlass.io/api/v1/",
name="Rao et al. (2014) K562 MboI (allreps) 1kb",
)
# Create a HeatmapTrack for each tileset
track1 = tileset1.track("heatmap")
track2 = tileset2.track("heatmap")
# Create two independent Views, one for each heatmap
view1 = hg.view(track1, width=6)
view2 = hg.view(track2, width=6)
# Lock zoom & location for each View
view_lock = hg.lock(view1, view2)
# Concatenate Views side-by-side, and apply synchronization lock
(view1 | view2).locks(view_lock)
# Lock zoom only for each view
(view1 | view2).locks(zoom=view_lock)
# Lock location only for each view
(view1 | view2).locks(location=view_lock)
# Create additional views and synchronize (apply a white-to-black color range)
bw_color_range = ['rgba(255,255,255,1)', 'rgba(0,0,0,1)']
view3 = hg.view(track1.opts(colorRange=bw_color_range), width=6)
view4 = hg.view(track2.opts(colorRange=bw_color_range), width=6)
# Create stacked view configuration and lock views by column
((view1 / view2) | (view3 / view4)).locks(
hg.lock(view1, view2),
hg.lock(view3, view4),
)
```
## Value scale syncing
```
# Create Tracks
# creates a hg.Track without a tileset
t1 = hg.track('top-axis')
# Creates a hg.RemoteTileset object
remote_tileset = hg.remote(
uid="CQMd6V_cRw6iCI_-Unl3PQ",
server="https://higlass.io/api/v1/", # optional, "http://higlass.io/api/v1/" default
name="Rao et al. (2014) GM12878 MboI (allreps) 1kb",
)
# Tileset.track() creates a hg.Track object binding the parent Tileset
t2 = remote_tileset.track('heatmap').opts(valueScaleMax=0.5)
# Create Views
# Positional arguments for `hg.view` are overloaded. Keyword arguments are
# forwarded to the creation of the `hg.View`. Layout fields (`x`, `y`,
# `width`, `height`) may also be assigned.
# (1) Track objects (positioning guessed based on track type)
view1 = hg.view(t1, t2, width=6)
# (2) (Track, position) tuple
view2 = hg.view((t1, 'top'), t2, width=6)
# (3) hg.Tracks object
view3 = hg.view(hg.Tracks(top=[t1], center=[t2]), width=6)
# (View, Track) tuples -> ScaleValueLock
scale_value_lock = hg.lock((view1, t2), (view2, t2))
(view1 | view2).locks(scale_value_lock)
```
## Remote heatmaps
```
# Initialize data sources
tset1 = hg.remote(
uid="CQMd6V_cRw6iCI_-Unl3PQ",
name="Rao et al. (2014) GM12878 MboI (allreps) 1kb",
)
tset2 = hg.remote(
uid="QvdMEvccQuOxKTEjrVL3wA",
name="Rao et al. (2014) K562 MboI (allreps) 1kb",
)
# Create a track for each data source
t1 = tset1.track('heatmap', height=300)
t2 = tset2.track('heatmap', height=300)
# Create a custom DividedTrack and modify color scale
t3 = hg.divide(t1, t2).opts(
colorRange=['blue', 'white'],
valueScaleMin=0.1,
valueScaleMax=10,
)
# Set initial view domains
domain = (7e7, 8e7)
v1 = hg.view(t1, width=4).domain(x=domain)
v2 = hg.view(t2, width=4).domain(x=domain)
v3 = hg.view(t3, width=4).domain(x=domain)
(v1 | v3 | v2).locks(hg.lock(v1, v2, v3))
```
## Extract track from another view config
```
import hg
# Load view config from URL
url = 'https://gist.githubusercontent.com/manzt/c2c498dac3ca9804a2b8e4bc1af3b55b/raw/ee8426c9728e875b6f4d65030c61181c6ba29b53/example.json'
config = hg.Viewconf.from_url(url)
# Display viewconfig
config
# Inspect the Viewconf tracks
for position, track in config.views[0].tracks:
print(position) # position in view layout
print(track) # python object from `higlass-schema`
# Extract a specific track from the above Viewconf and modify its properties
gene_annotation_track = config.views[0].tracks.top[0].properties(height=200)
# Display track in isolation
hg.view(gene_annotation_track)
```
## Remote bigWig tiles
The next section requires `fuse` and `simple-httpfs` to be installed
```
!pip install fusepy simple-httpfs clodius
# set up FUSE
import pathlib
mount_dir = pathlib.Path.cwd() / "mnt"
if not mount_dir.is_dir():
mount_dir.mkdir()
# mount directory and start fuse
hg.fuse.start(mount_dir.absolute())
import hg
# get mounted path
bwpath = hg.fuse.path(
f"http://hgdownload.cse.ucsc.edu/goldenpath/hg19/encodeDCC/"
f"wgEncodeSydhTfbs/wgEncodeSydhTfbsGm12878InputStdSig.bigWig"
)
track = hg.bigwig(bwpath).track("horizontal-bar")
hg.view(track)
```
## Local cooler files
This section describes how to load cooler files that are on the same filesystem as the Jupyter notebook.
```
tileset = hg.cooler('../test/data/Dixon2012-J1-NcoI-R1-filtered.100kb.multires.cool')
hg.view(tileset.track("heatmap"))
```
## Local bigWig files (with chromsizes)
### TODO: inline `chromsizes` tilesets are not yet implemented
```
chromsizes = [
('chr1', 249250621),
('chr2', 243199373),
('chr3', 198022430),
('chr4', 191154276),
('chr5', 180915260),
('chr6', 171115067),
('chr7', 159138663),
('chr8', 146364022),
('chr9', 141213431),
('chr10', 135534747),
('chr11', 135006516),
('chr12', 133851895),
('chr13', 115169878),
('chr14', 107349540),
('chr15', 102531392),
('chr16', 90354753),
('chr17', 81195210),
('chr18', 78077248),
('chr20', 63025520),
('chr19', 59128983),
('chr21', 48129895),
('chr22', 51304566),
('chrX', 155270560),
('chrY', 59373566),
('chrM', 16571),
]
bigwig_fp = '../test/data/wgEncodeCaltechRnaSeqHuvecR1x75dTh1014IlnaPlusSignalRep2.bigWig'
ts = hg.bigwig(bigwig_fp, chromsizes=chromsizes)
cs = hg.ChromSizes(chromsizes)
view1 = View([
Track('top-axis'),
Track('horizontal-bar', tileset=ts),
Track('horizontal-chromosome-labels', position='top', tileset=cs)
])
display, server, viewconf = higlass.display([view1])
display
```
## Local bedlike data
### TODO: inline `bedlike` tilesets are not yet implemented
```
import hg
bed = [
['chr1', 1000, 2000, 'item #1', '.', '+'],
['chr2', 3000, 3500, 'item #1', '.', '-'],
]
chroms = [
['chr1', 2100],
['chr2', 4000],
]
data = bedtiles(bed, chroms)
track = Track(track_type='bedlike', position='top',
height=50, data=data, options={"minusStrandColor": "red"})
d,s,v = higlass.display([[track]])
d
```
## Custom data
```
import numpy as np
dim = 2000
I, J = np.indices((dim, dim))
data = (
-(J + 47) * np.sin(np.sqrt(np.abs(I / 2 + (J + 47))))
- I * np.sin(np.sqrt(np.abs(I - (J + 47))))
)
import clodius.tiles.npmatrix
from hg.tilesets import LocalTileset
ts = hg.server.add(
LocalTileset(
uid="my-custom-tileset",
info=lambda: clodius.tiles.npmatrix.tileset_info(data),
tiles=lambda tids: clodius.tiles.npmatrix.tiles_wrapper(data, tids),
)
)
hg.view(
(hg.track("top-axis"), "top"),
(hg.track("left-axis"), "left"),
(ts.track("heatmap", height=250).opts(valueScaleMax=0.5), "center"),
)
```
| github_jupyter |
# Supervised Learning (Classification)
In supervised learning, the task is to infer hidden structure from
labeled data, consisting of training examples $\{(x_n, y_n)\}$.
Classification means the output $y$ takes discrete values.
We demonstrate with an example in Edward. A webpage version is available at
http://edwardlib.org/tutorials/supervised-classification.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Bernoulli, MultivariateNormalTriL, Normal
from edward.util import rbf
```
## Data
Use the
[crabs data set](https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/crabs.html),
which consists of morphological measurements on a crab species. We
are interested in predicting whether a given crab has the color form
blue or orange.
```
ed.set_seed(42)
data = np.loadtxt('data/crabs_train.txt', delimiter=',')
data[data[:, 0] == -1, 0] = 0 # replace -1 label with 0 label
N = data.shape[0] # number of data points
D = data.shape[1] - 1 # number of features
X_train = data[:, 1:]
y_train = data[:, 0]
print("Number of data points: {}".format(N))
print("Number of features: {}".format(D))
```
## Model
A Gaussian process is a powerful object for modeling nonlinear
relationships between pairs of random variables. It defines a distribution over
(possibly nonlinear) functions, which can be applied for representing
our uncertainty around the true functional relationship.
Here we define a Gaussian process model for classification
(Rasmussen & Williams, 2006).
Formally, a distribution over functions $f:\mathbb{R}^D\to\mathbb{R}$ can be specified
by a Gaussian process
$$
\begin{align*}
p(f)
&=
\mathcal{GP}(f\mid \mathbf{0}, k(\mathbf{x}, \mathbf{x}^\prime)),
\end{align*}
$$
whose mean function is the zero function, and whose covariance
function is some kernel which describes dependence between
any set of inputs to the function.
Given a set of input-output pairs
$\{\mathbf{x}_n\in\mathbb{R}^D,y_n\in\mathbb{R}\}$,
the likelihood can be written as a multivariate normal
\begin{align*}
p(\mathbf{y})
&=
\text{Normal}(\mathbf{y} \mid \mathbf{0}, \mathbf{K})
\end{align*}
where $\mathbf{K}$ is a covariance matrix given by evaluating
$k(\mathbf{x}_n, \mathbf{x}_m)$ for each pair of inputs in the data
set.
The above applies directly for regression, where $\mathbf{y}$ is a
real-valued response, but not for (binary) classification, where $\mathbf{y}$
is a label in $\{0,1\}$. To deal with classification, we interpret the
response as a latent variable that is squashed into $[0,1]$. We then
draw from a Bernoulli to determine the label, with probability given
by the squashed value.
Define the likelihood of an observation $(\mathbf{x}_n, y_n)$ as
\begin{align*}
p(y_n \mid \mathbf{z}, x_n)
&=
\text{Bernoulli}(y_n \mid \text{logit}^{-1}(\mathbf{x}_n^\top \mathbf{z})).
\end{align*}
Define the prior to be a multivariate normal
\begin{align*}
p(\mathbf{z})
&=
\text{Normal}(\mathbf{z} \mid \mathbf{0}, \mathbf{K}),
\end{align*}
with covariance matrix given as previously stated.
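The squashing step can be made concrete with a tiny standalone sketch (plain Python, not Edward; the latent values are illustrative):

```python
import math
import random

def inv_logit(z):
    """Squash a real-valued latent into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sample_label(z, rng=None):
    """Draw y ~ Bernoulli(inv_logit(z))."""
    rng = rng or random.Random(0)
    return 1 if rng.random() < inv_logit(z) else 0

print(inv_logit(0.0))      # 0.5: a latent of zero leaves the label maximally uncertain
print(sample_label(3.0))   # a large positive latent yields label 1 with high probability
```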
Let's build the model in Edward. We use a radial basis function (RBF)
kernel, also known as the squared exponential or exponentiated
quadratic. It returns the kernel matrix evaluated over all pairs of
data points; we then Cholesky decompose the matrix to parameterize the
multivariate normal distribution.
```
X = tf.placeholder(tf.float32, [N, D])
f = MultivariateNormalTriL(loc=tf.zeros(N), scale_tril=tf.cholesky(rbf(X)))
y = Bernoulli(logits=f)
```
Here, we define a placeholder `X`. During inference, we pass in
the value for this placeholder according to data.
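The kernel-matrix construction that `rbf` and `tf.cholesky` perform can be sketched in plain NumPy. This is an illustrative stand-in (unit variance and lengthscale assumed, with a small jitter term for numerical stability), not Edward's implementation:

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel: K[i, j] = variance * exp(-||xi - xj||^2 / (2 * lengthscale^2))."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

X = np.array([[0.0], [1.0], [2.0]])
K = rbf_kernel(X)
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(X)))  # jitter keeps the matrix positive definite
print(np.allclose(L @ L.T, K, atol=1e-6))  # True: L @ L.T recovers the kernel matrix
```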
## Inference
Perform variational inference.
Define the variational model to be a fully factorized normal.
```
qf = Normal(loc=tf.Variable(tf.random_normal([N])),
scale=tf.nn.softplus(tf.Variable(tf.random_normal([N]))))
```
Run variational inference for `5000` iterations.
```
inference = ed.KLqp({f: qf}, data={X: X_train, y: y_train})
inference.run(n_iter=5000)
```
In this case
`KLqp` defaults to minimizing the
$\text{KL}(q\|p)$ divergence measure using the reparameterization
gradient.
For more details on inference, see the [$\text{KL}(q\|p)$ tutorial](/tutorials/klqp).
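Concretely, `KLqp` maximizes the evidence lower bound (ELBO); the gap between the ELBO and the intractable log marginal likelihood is exactly the divergence being minimized:

$$
\begin{align*}
\log p(\mathbf{y})
&=
\underbrace{\mathbb{E}_{q(f)}[\log p(\mathbf{y}\mid f)] - \text{KL}(q(f)\,\|\,p(f))}_{\text{ELBO}}
+ \text{KL}(q(f)\,\|\,p(f\mid \mathbf{y})).
\end{align*}
$$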
(This example is slow because evaluating and inverting full
covariance matrices in Gaussian processes is computationally expensive.)
| github_jupyter |
## Transfer learning with ULMFiT script
This script implements transfer learning for the LSTM models using the ULMFiT procedures. The core parameters are specified in `slanted_learning_rate`. The rigidity of the TensorFlow computational graph means that the procedures are hard-coded, with separate learning rates and optimisers specified for each model layer. This is hugely inefficient, and a cleaner implementation may be possible in newer versions of TensorFlow.
Given the size of the legislative training dataset, these inefficiencies don't much matter: one training epoch takes around 80 seconds on an Intel i5 processor with 4GB of memory.
Adapted from <https://github.com/tensorflow/nmt>
```
import tensorflow as tf
from tensorflow.python.ops import lookup_ops
from tensorflow.python.layers import core as layers_core
from tensorflow.contrib.layers import xavier_initializer
import codecs
import numpy as np
import time
# Inputs
data_path = "" # Data path
text_data = [data_path + "/leg_train_text.txt"] # Sentence training data (spacy parsed)
text_whole_data = [data_path + "/leg_train_original.txt"] # Whole sentence (not tokenised)
labels_data = [data_path + "/leg_train_label.txt"] # Labels for sentences
embed_vocab_data = data_path + "/leg_embeddings_vocab.txt" # Embedding vocab: words from training sentences
# for which embeddings exist and have been extracted in embed_file. (If full, this is "embed_vocab.txt")
full_vocab_data = data_path + "/total_vocab.txt" # Full sentence vocab. ("total_vocab.txt")
txt_eos = "</S>" # Special characters
lbl_sos = "<l>"
lbl_eos = "</l>"
embed_file = data_path + "/leg_embeddings.txt" # Embeddings file (full is "embeddings.txt")
restore_path = "./LSTM_base/model.ckpt"
save_model = True # Set to True if you want to save model variables
log_path = "" # Log directory
save_path = "" # Save model path, only used if save_path is True
log_freq = 100 # Show some outputs every log_freq training steps
# Model parameters
num_layers = 3
num_total_layers = 7
num_units = 128 # If uni-directional, then same as enc_units. If bi, make twice as big.
beam_search = False
beam_width = 4 # There are only 3 outputs...
batch_size = 25
forget_bias = 0
dropout = 0.2
max_gradient_norm = 1
learning_rate = 0.002 # This doesn't do anything - see slanted_learning_rate
epochs = 10
# Build a tf dataset: an iterator that returns batched data for training.
def build_dataset(text_data, labels_data, embed_vocab_data, full_vocab_data, txt_eos, lbl_sos, lbl_eos, batch_size):
# Build the word to id lookup table from the text data. OOV words point at 0 = <unk> = random (but all same)
vocab_table = lookup_ops.index_table_from_file(embed_vocab_data, default_value=0)
# Build a residual lookup table for all vocab, so can convert words back at end of process (evaluation only)
full_vocab_table = lookup_ops.index_table_from_file(full_vocab_data, default_value=0)
txt_eos_id = tf.cast(vocab_table.lookup(tf.constant(txt_eos)), tf.int32)
txt_full_eos_id = tf.cast(full_vocab_table.lookup(tf.constant(txt_eos)), tf.int32) # Probably not strictly necessary, since
# eos ends up in the same place in both vocab files.
lbl_sos_id = tf.cast(vocab_table.lookup(tf.constant(lbl_sos)), tf.int32)
lbl_eos_id = tf.cast(vocab_table.lookup(tf.constant(lbl_eos)), tf.int32)
# Read each line of the text file. Each line is a sentence (where text has been tokenised using spacy)
# NB can pass multiple files to TextLineDataset (so can prep data in batches)
sent_data = tf.data.TextLineDataset(text_data)
labels_data = tf.data.TextLineDataset(labels_data)
# For each line, split on white space
sent_data = sent_data.map(lambda string: tf.string_split([string]).values)
labels_data = labels_data.map(lambda label: tf.string_split([label]).values)
labels_data = labels_data.map(lambda label: tf.string_to_number(label, tf.int32))
# Lookup word ids (in the embedding vocab and in the full vocab)
embed_sent_data = sent_data.map(lambda token: tf.cast(vocab_table.lookup(token), tf.int32))
full_sent_data = sent_data.map(lambda token: tf.cast(full_vocab_table.lookup(token), tf.int32))
# Zip datasets together
sent_label_data = tf.data.Dataset.zip((full_sent_data, embed_sent_data, labels_data))
# Create input dataset (labels prefixed by sos) and target dataset (labels suffixed with eos)
sent_label_data = sent_label_data.map(lambda full_words, embed_words, labels: (full_words, embed_words,
tf.concat(([lbl_sos_id], labels), 0),
tf.concat((labels, [lbl_eos_id]), 0),))
# Add sequence length
sent_label_data = sent_label_data.map(lambda full_words, embed_words, labels_in, labels_out: (full_words, embed_words,
tf.size(embed_words),
tf.size(labels_in),
labels_in,
labels_out))
# Random shuffle
sent_label_data = sent_label_data.shuffle(buffer_size=5000)
# Batching the input, padding to the length of the longest sequence in the input. Can also bucket these. Form of dataset
# is: txt_ids_for_full_vocab, txt_ids_for_embed_vocab, text_size, label_size, labels_in, labels_out.
batch_size = tf.constant(batch_size, tf.int64)
batched_input = sent_label_data.padded_batch(batch_size, padded_shapes=(tf.TensorShape([None]),
tf.TensorShape([None]),
tf.TensorShape([]),
tf.TensorShape([]),
tf.TensorShape([None]),
tf.TensorShape([None])),
padding_values=(txt_full_eos_id,
txt_eos_id,
0,
0,
lbl_eos_id,
lbl_eos_id))
iterator = batched_input.make_initializable_iterator()
return iterator
# Preparatory step to create_emb_matrix. Each line of the embedding file is a word followed by space-delimited numbers forming
# the vector. load_embed_txt splits on white space and builds a dictionary where keys are the words in the embedding file
def load_embed_txt(embed_file):
emb_dict = dict()
with codecs.getreader("utf-8")(tf.gfile.GFile(embed_file, 'rb')) as f:
for line in f:
tokens = line.strip().split(" ")
word = tokens[0]
vec = list(map(float, tokens[1:]))
emb_dict[word] = vec
emb_size = len(vec)
return emb_dict, emb_size
# Create an embedding matrix (numpy array of embeddings). Includes an <unk> value for oov words. These are the values that are
# looked-up when the model is run.
def create_emb_matrix(embed_file):
emb_dict, emb_size = load_embed_txt(embed_file)
mat = np.array([emb_dict[token] for token in emb_dict.keys()])
emb_mat = tf.convert_to_tensor(mat, dtype=tf.float32)
return emb_mat
# A hack to help with the input to the decoder. Creates a matrix where keys and values are just integers in single item lists.
def create_dec_matrix(num):
dec_dict = {}
for i in range(num):
dec_dict[i] = [i]
mat = np.array([dec_dict[token] for token in dec_dict.keys()])
dec_mat = tf.convert_to_tensor(mat, dtype=tf.float32)
return dec_mat
# Build the id to vocab dictionary (reverse of the vocab lookup). This is for the "embed vocab" (i.e. where lots of words are
# still mapped to <unk>). This assumes there is both an "embed vocab file" (a file of the vocab for which embeddings exist) and
# an embed file. Recall unk and special characters are included in the vocab file, so no need to manually add to the dictionary.
# The words are just set out on each line of the file, so "strip" / "split" is a bit overkill but works well enough.
def ids_to_embed_vocab(embed_vocab_data):
embed_vocab_dict = {}
with codecs.getreader("utf-8")(tf.gfile.GFile(embed_vocab_data, 'rb')) as f:
count = 0
for line in f:
tokens = line.strip().split(" ")
word = tokens[0]
embed_vocab_dict[count] = word
count += 1
return embed_vocab_dict
# Build the id to vocab dictionary (reverse of the vocab lookup). This is for the full vocab. This is a hack, not really
# necessary for the model, but it allows you to read the outputs more easily (otherwise you would be left with lots of "unks" in
# the final output). We don't compute with these ids; they are just preserved through the batch input so we know what words went in.
def ids_to_full_vocab(full_vocab_data):
full_vocab_dict = {}
with codecs.getreader("utf-8")(tf.gfile.GFile(full_vocab_data, 'rb')) as f:
count = 0
for line in f:
tokens = line.strip().split(" ")
word = tokens[0]
full_vocab_dict[count] = word
count += 1
return full_vocab_dict
# Single LSTM cell instance with dropout option.
def single_cell(num_units, forget_bias, dropout, name):
single_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units, forget_bias=forget_bias, name=name)
if dropout > 0.0:
single_cell = tf.nn.rnn_cell.DropoutWrapper(cell=single_cell, input_keep_prob=(1.0 - dropout))
return single_cell
# Multi-layer RNN definition. The "direction" argument is just to help with naming when using bi-directional model.
def RNN_cell(num_layers, num_units, forget_bias, dropout, direction):
if num_layers == 1:
cell_name = direction + "_LSTM_layer"
rnn_cell = single_cell(num_units, forget_bias, dropout, cell_name)
else:
cell_list = []
for i in range(num_layers):
cell_name = direction + "_LSTM_layer_" + str(i)
cell = single_cell(num_units, forget_bias, dropout, cell_name)
cell_list.append(cell)
rnn_cell = tf.nn.rnn_cell.MultiRNNCell(cell_list)
return rnn_cell
# Build bi-directional LSTM (just add direction name prefixes)
def build_bi_directional_LSTM(num_units, forget_bias, dropout, num_layers):
fw_cell = RNN_cell(num_layers, num_units, forget_bias, dropout, "fw")
bw_cell = RNN_cell(num_layers, num_units, forget_bias, dropout, "bw")
return fw_cell, bw_cell
# Calculate the slanted learning rates to use at each training iteration. Returns a list of learning rates, one per training
# iteration. Note params are hard coded (see in particular the number of training examples).
def slanted_learning_rate(epochs, num_total_layers, batch_size):
    LRmax = 0.01
    ratio = 32
    T = (750 * epochs) / batch_size  # Total number of training iterations (750 training examples per epoch).
    cut_frac = 0.1
    cut = T * cut_frac
    reduce_factor = 1 / 2.6
    iterations = np.arange(int(T))
    s_l_r = np.zeros((num_total_layers, int(T)))
    for l in range(num_total_layers):
        for i in iterations:
            if i < cut:
                p = i / cut
                LR_i = LRmax * ((1 + p * (ratio - 1)) / ratio)
            else:
                p = 1 - ((i - cut) / (cut * (1 / cut_frac - 1)))
                LR_i = LRmax * ((1 + p * (ratio - 1)) / ratio)
            if i > l * (T / epochs):
                s_l_r[l][i] = LR_i * (reduce_factor ** l)
    t_list = []
    for l in range(num_total_layers):
        s_l_r_t = tf.convert_to_tensor(s_l_r[l], dtype=tf.float32)
        t_list.append([s_l_r_t])
    s_l_r_t = tf.concat(t_list, 0)
    return s_l_r_t
class Model():
def __init__(self, dropout, num_units, num_layers, forget_bias,
embed_words, full_words, txt_size, labels_size, labels_in, labels_out,
global_step):
self.global_step = global_step
self.learning_rates_ = slanted_learning_rate(epochs, num_total_layers, batch_size)
self.learning_rate1 = self.learning_rates_[0][global_step]
self.learning_rate2 = self.learning_rates_[1][global_step]
self.learning_rate3 = self.learning_rates_[2][global_step]
self.learning_rate4 = self.learning_rates_[3][global_step]
self.learning_rate5 = self.learning_rates_[4][global_step]
self.learning_rate6 = self.learning_rates_[5][global_step]
self.learning_rate7 = self.learning_rates_[6][global_step]
self.dropout = dropout
self.num_units = num_units
self.forget_bias = forget_bias
self.num_layers = num_layers
self.words_in = embed_words
self.full_words_in = full_words
with tf.variable_scope("main", initializer=xavier_initializer()):
# Inputs
mask_labels = tf.sequence_mask(labels_size, dtype=tf.int32) # To mask the padded input
labels_in = labels_in * mask_labels
self.labels_out = labels_out
self.mask_labels = mask_labels
encoder_emb_inp = tf.nn.embedding_lookup(emb_mat, embed_words) # Encoder embedding lookup
decoder_emb_inp = tf.nn.embedding_lookup(dec_mat, labels_in) # Decoder embedding lookup (easiest way to get it in
# right shape)
# Encoder definition (by default, encoder_state is just the final state). Encoder can be multi-layers and
# bi-directional
encoder_cells = RNN_cell(num_layers, num_units, forget_bias, dropout, "enc_fw")
encoder_outputs, encoder_state = tf.nn.dynamic_rnn(encoder_cells,
encoder_emb_inp,
sequence_length=txt_size,
time_major=False,
dtype = tf.float32)
# Decoder definition. Number of decoder layers is the same as the number of encoder layers, but need not be. The
# helper is defined separately and can be adjusted for greedy / beam decoding at inference.
decoder_cells = RNN_cell(num_layers, num_units, forget_bias, dropout, "dec")
helper = tf.contrib.seq2seq.TrainingHelper(decoder_emb_inp,
labels_size,
time_major=False)
# Output layer which takes decoder output and maps to 3 categories (0,1,2) - these are the same as the target labels.
# Recall 2 just maps to </l>, which is the prediction for </s>
output_layer = layers_core.Dense(3, use_bias=False, name="output_projection")
decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cells, helper, encoder_state, output_layer)
outputs, final_context_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder,
output_time_major=False)
# Decoder just runs until it gets to the end, but could impose a max length (e.g. length of labels)
# Calculate loss: By logits we just mean the outputs of the decoder (after output_layer). crossent takes normalised
# output probability prediction for each class (i.e. the softmax of the logits) and takes cross-entropy with the
# actual labels.
self.logits = outputs[0]
crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.labels_out, logits=self.logits)
self.loss = tf.reduce_sum(crossent * tf.cast(mask_labels, tf.float32)) / tf.cast(batch_size, tf.float32)
########################################################################################################
# Transfer learning regime...
########################################################################################################
epoch_count = tf.floor_div(global_step*batch_size, 750)+1
opt1 = tf.train.GradientDescentOptimizer(self.learning_rates_[0][global_step])
opt2 = tf.train.GradientDescentOptimizer(self.learning_rates_[1][global_step])
opt3 = tf.train.GradientDescentOptimizer(self.learning_rates_[2][global_step])
opt4 = tf.train.GradientDescentOptimizer(self.learning_rates_[3][global_step])
opt5 = tf.train.GradientDescentOptimizer(self.learning_rates_[4][global_step])
opt6 = tf.train.GradientDescentOptimizer(self.learning_rates_[5][global_step])
opt7 = tf.train.GradientDescentOptimizer(self.learning_rates_[6][global_step])
t_variables1 = tf.trainable_variables(scope="main/decoder/output_projection/")
t_variables2 = tf.trainable_variables(scope="main/decoder/multi_rnn_cell/cell_2/dec_LSTM_layer_2/")
t_variables3 = tf.trainable_variables(scope="main/decoder/multi_rnn_cell/cell_1/dec_LSTM_layer_1/")
t_variables4 = tf.trainable_variables(scope="main/decoder/multi_rnn_cell/cell_0/dec_LSTM_layer_0/")
t_variables5 = tf.trainable_variables(scope="main/rnn/multi_rnn_cell/cell_2/enc_fw_LSTM_layer_2/")
t_variables6 = tf.trainable_variables(scope="main/rnn/multi_rnn_cell/cell_1/enc_fw_LSTM_layer_1/")
t_variables7 = tf.trainable_variables(scope="main/rnn/multi_rnn_cell/cell_0/enc_fw_LSTM_layer_0/")
gradients1, variables1 = zip(*opt1.compute_gradients(self.loss, var_list=t_variables1))
gradients2, variables2 = zip(*opt2.compute_gradients(self.loss, var_list=t_variables2))
gradients3, variables3 = zip(*opt3.compute_gradients(self.loss, var_list=t_variables3))
gradients4, variables4 = zip(*opt4.compute_gradients(self.loss, var_list=t_variables4))
gradients5, variables5 = zip(*opt5.compute_gradients(self.loss, var_list=t_variables5))
gradients6, variables6 = zip(*opt6.compute_gradients(self.loss, var_list=t_variables6))
gradients7, variables7 = zip(*opt7.compute_gradients(self.loss, var_list=t_variables7))
train_opt1 = opt1.apply_gradients(zip(gradients1, variables1), global_step=global_step)
train_opt2 = opt2.apply_gradients(zip(gradients2, variables2))
train_opt3 = opt3.apply_gradients(zip(gradients3, variables3))
train_opt4 = opt4.apply_gradients(zip(gradients4, variables4))
train_opt5 = opt5.apply_gradients(zip(gradients5, variables5))
train_opt6 = opt6.apply_gradients(zip(gradients6, variables6))
train_opt7 = opt7.apply_gradients(zip(gradients7, variables7))
grad_opts = [train_opt1, train_opt2, train_opt3, train_opt4, train_opt5, train_opt6, train_opt7]
self.train_ops = tf.group(grad_opts)
self.preds = tf.argmax(self.logits, axis=2)
# Summaries: Tensorflow summaries
self.make_summaries(self.learning_rate1, self.learning_rate2, self.learning_rate3,
self.learning_rate4, self.learning_rate5, self.learning_rate6,
self.learning_rate7, self.loss)
def make_summaries(self, learning_rate1, learning_rate2, learning_rate3, learning_rate4,
learning_rate5, learning_rate6, learning_rate7, loss):
tf.summary.scalar("loss", loss)
tf.summary.scalar("learning_rate_dense_layer", learning_rate1)
tf.summary.scalar("learning_dec_layer2", learning_rate2)
tf.summary.scalar("learning_dec_layer1", learning_rate3)
tf.summary.scalar("learning_dec_layer0", learning_rate4)
tf.summary.scalar("learning_enc_layer2", learning_rate5)
tf.summary.scalar("learning_enc_layer1", learning_rate6)
tf.summary.scalar("learning_enc_layer0", learning_rate7)
self.merged = tf.summary.merge_all()
# Run the graph
with tf.Graph().as_default():
iterator = build_dataset(text_data, labels_data, embed_vocab_data, full_vocab_data, txt_eos, lbl_sos, lbl_eos, batch_size)
emb_mat = create_emb_matrix(embed_file)
dec_mat = create_dec_matrix(4)
random_embeddings = np.random.uniform(low=-1, high=1, size=(4,300)) # A random choice for unk and other special characters
embeddings = tf.Variable(tf.convert_to_tensor(random_embeddings, dtype=tf.float32), name="saved_embeddings")
emb_mat = tf.concat((embeddings, emb_mat), 0)
ids_to_embed_vocab = ids_to_embed_vocab(embed_vocab_data)
ids_to_full_vocab = ids_to_full_vocab(full_vocab_data)
# A call to the iterator for inputs
full_words_, embed_words_, txt_size_, label_size_, labels_in_, labels_out_ = iterator.get_next()
# Model instantiation
global_step = tf.Variable(0, name='global_step',trainable=False)
model = Model(dropout, num_units, num_layers,
forget_bias, embed_words_, full_words_,
txt_size_, label_size_, labels_in_, labels_out_, global_step)
# Initialise variables
init = tf.global_variables_initializer()
t_variables = tf.trainable_variables()
saver = tf.train.Saver(var_list=t_variables) # Saver for variables. Not full graph.
with tf.Session() as sess:
train_writer = tf.summary.FileWriter(log_path, sess.graph)
# Restore variables if present.
if restore_path == None:
sess.run(init)
else:
global_step.initializer.run()
saver.restore(sess, restore_path)
print("Model restored.")
# Initialise the vocab tables
sess.run(tf.tables_initializer())
counter = 0
# Training loop.
for epoch in range(epochs):
losses = []
epoch_start = time.time()
sess.run(iterator.initializer)
while True:
try:
_, summary, loss = sess.run([model.train_ops, model.merged, model.loss])
train_writer.add_summary(summary, counter)
train_writer.flush()
losses.append(loss) # Counter for epoch loss
counter += 1
#print(sess.run(global_step))
if counter % log_freq == 0:
# Get the values from model
preds, full_words_in, labels_out, mask_labels = sess.run([model.preds,
model.full_words_in,
model.labels_out,
model.mask_labels])
# pick one of the entries in the current batch
j = np.random.randint(0, batch_size)
full_sent = []
target_sent = []
predicted_sent = []
for i in range(len(full_words_in[j])):
if mask_labels[j][i] == 1:
full_sent.append(ids_to_full_vocab[full_words_in[j][i]])
if preds[j][i] != 0:
predicted_sent.append(ids_to_full_vocab[full_words_in[j][i]])
if labels_out[j][i] != 0:
target_sent.append(ids_to_full_vocab[full_words_in[j][i]])
print("Input sentence is:")
print(" ".join(full_sent))
print("Target sentence is:")
print(" ".join(target_sent))
print("Predicted sentence is:")
print(" ".join(predicted_sent))
except tf.errors.OutOfRangeError:
average_loss = sum(losses) / len(losses)
elapsed_time = (time.time() - epoch_start)
print("Epoch run time: %s" % elapsed_time)
print("Average epoch loss: %s" % average_loss)
break
if save_model == True:
saver.save(sess, save_path)
```
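The schedule computed by `slanted_learning_rate` is the slanted triangular learning rate from the ULMFiT paper: a short linear warm-up to a peak, then a long linear decay, with each lower layer additionally scaled down by `reduce_factor`. The base (per-iteration) schedule can be sketched on its own; this standalone version reuses the script's `LRmax = 0.01`, `ratio = 32` and `cut_frac = 0.1` constants but takes a configurable iteration count:

```python
def slanted_triangular_lr(t, T, lr_max=0.01, ratio=32, cut_frac=0.1):
    """Learning rate at iteration t of T (ULMFiT's slanted triangular schedule)."""
    cut = T * cut_frac
    if t < cut:
        p = t / cut                                      # linear warm-up
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))   # linear decay
    return lr_max * (1 + p * (ratio - 1)) / ratio

rates = [slanted_triangular_lr(t, T=300) for t in range(300)]
peak = max(range(300), key=lambda t: rates[t])
print(peak)  # 30: the peak sits at the end of the warm-up phase (T * cut_frac)
```

The per-layer scaling by `reduce_factor` (1/2.6 per layer, i.e. discriminative fine-tuning) is applied on top of this base schedule in the script.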
| github_jupyter |
# Setup
## Imports
```
import os.path

import numpy as np  # used below for the sqrt(0.5) residual scaling
```
vai Modules
```
from vai.io import pickle_load
```
Keras Modules
```
from keras.preprocessing.text import Tokenizer
```
PyTorch Modules
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from vai.torch.utils import cuda
```
## Define Useful Features
```
# DIR_DATA (a dict of dataset paths) and DIR_CHECKPOINTS are assumed to be defined elsewhere, e.g. in a project config
DIR_DATA = DIR_DATA['LJSpeech']
tokenizer = pickle_load(os.path.join(DIR_CHECKPOINTS, 'tokenizer.p'))
char_idx = tokenizer.word_index
idx_char = {v: k for k, v in char_idx.items()}
vocab_size = len(char_idx)
```
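As a quick sanity check, the `word_index`/`idx_char` inversion above can be exercised with a toy mapping (the values below are made up and stand in for the pickled tokenizer):

```
# Toy stand-in for tokenizer.word_index (Keras indexes from 1; 0 is reserved for padding)
char_idx = {'a': 1, 'b': 2, 'c': 3, ' ': 4}
idx_char = {v: k for k, v in char_idx.items()}
vocab_size = len(char_idx)

def encode(text):
    return [char_idx[c] for c in text]

def decode(ids):
    return ''.join(idx_char[i] for i in ids)

print(decode(encode('abc a')))  # abc a
```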
## Load Data
```
utterances = pickle_load(os.path.join(DIR_DATA, 'text_data.p'))
```
# Create Model
## Define Hyperparameters
```
embedding_dim = 256
conv_channels = 64
kernel_size = 5
encoder_layers = 7
dropout_probability = 0.95
attention_size = 128
```
## Convolution Block
```
class ConvBlock(nn.Module):
    def __init__(self, causal=False, in_channels=conv_channels, kernel_size=kernel_size, dropout_probability=dropout_probability):
        super().__init__()
        self.dropout = nn.Dropout(dropout_probability, inplace=True)
        # A causal block pads only on the left and trims the tail; a non-causal block pads symmetrically
        self.causal_padding = (kernel_size - 1) if causal else 0
        padding = self.causal_padding if causal else (kernel_size - 1) // 2
        self.conv = nn.Conv1d(in_channels, 2 * in_channels, kernel_size, padding=padding)

    def forward(self, x):
        out = self.conv(self.dropout(x))
        if self.causal_padding:
            out = out[..., :-self.causal_padding]  # drop the look-ahead positions
        a, b = out.split(x.size(1), 1)
        out = a * torch.sigmoid(b)  # gated linear unit
        return (out + x) * np.sqrt(0.5)
```
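The forward pass above is a gated linear unit (GLU) with a scaled residual: the convolution doubles the channels, half act as values and half as a sigmoid gate. A plain-numpy sketch of that math (no learned weights, shapes chosen arbitrarily):

```
import numpy as np

def glu_residual(x, conv_out):
    """conv_out has twice the channels of x; the gate half modulates the value half."""
    a, b = np.split(conv_out, 2, axis=1)      # split along the channel axis
    gated = a * (1.0 / (1.0 + np.exp(-b)))    # a * sigmoid(b)
    return (gated + x) * np.sqrt(0.5)         # residual connection, scaled to preserve variance

x = np.ones((1, 4, 8))          # (batch, channels, time)
conv_out = np.zeros((1, 8, 8))  # pretend convolution output with doubled channels
out = glu_residual(x, conv_out)
print(out.shape)  # (1, 4, 8)
```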
## Encoder
```
class Encoder(nn.Module):
    def __init__(self, embedding_dim=embedding_dim, conv_channels=conv_channels, encoder_layers=encoder_layers):
        super().__init__()
        self.fc_in = nn.Linear(embedding_dim, conv_channels)
        # nn.ModuleList registers the blocks as submodules (and they follow the encoder's
        # .cuda()/.to() calls); `[ConvBlock()] * n` would reuse one block with shared weights
        self.conv_blocks = nn.ModuleList([ConvBlock() for _ in range(encoder_layers)])
        self.fc_out = nn.Linear(conv_channels, embedding_dim)

    def forward(self, x):
        out = self.fc_in(x).transpose(1, 2)
        for conv_block in self.conv_blocks:
            out = conv_block(out)
        keys = self.fc_out(out.transpose(1, 2))
        values = (keys + x) * np.sqrt(0.5)
        return keys, values
```
## Attention Block
**TODO:**
* Add Positional Encodings
* Find dropout probabilities
* Find initialization strategy
* Use context normalization
* Apply windowed attention
```
class Attention(nn.Module):
    def __init__(self, query_dim=embedding_dim, embedding_dim=embedding_dim, hidden_size=attention_size,
                 dropout_probability=dropout_probability):
        super().__init__()
        self.fc_query = nn.Linear(query_dim, hidden_size)
        self.fc_keys = nn.Linear(embedding_dim, hidden_size)
        self.fc_values = nn.Linear(embedding_dim, hidden_size)
        self.fc_context = nn.Linear(hidden_size, embedding_dim)
        self.dropout = nn.Dropout(dropout_probability, inplace=True)

    def forward(self, query, encoder_context):
        keys, values = encoder_context
        query = self.fc_query(query)
        keys = self.fc_keys(keys)
        values = self.fc_values(values)
        context = query.bmm(keys.transpose(1, 2))
        context = F.softmax(context, dim=-1)  # normalize over the encoder time steps
        context = self.dropout(context)
        context = context.bmm(values) / np.sqrt(values.size(1))
        context = self.fc_context(context)
        return context
```
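For the "Add Positional Encodings" item in the TODO list above, one common choice is the sinusoidal encoding (a numpy sketch; convolutional TTS models often also scale the encoding rate per speaker, which is omitted here):

```
import numpy as np

def positional_encoding(length, dim):
    """Sinusoidal position encodings: sin on even dimensions, cos on odd ones."""
    positions = np.arange(length)[:, None]                         # (length, 1)
    rates = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)  # one rate per sin/cos pair
    pe = np.zeros((length, dim))
    pe[:, 0::2] = np.sin(positions * rates)
    pe[:, 1::2] = np.cos(positions * rates)
    return pe

pe = positional_encoding(100, 256)  # e.g. added to (time, embedding_dim) inputs
print(pe.shape)  # (100, 256)
```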
## Decoder
# Data extraction, transformation, and loading into a JSON file
This is part of the project described in <https://github.com/amchagas/OSH_papers_DB>, check the project readme for more details.
This notebook loads data sources and merges them in a single compressed JSON file.
```
import os
import re
import numpy as np
import pandas as pd
import unicodedata
import string
import rispy
import matplotlib.pyplot as plt
from pathlib import Path
from project_definitions import baseDir, dataSourceDir, dataOutDir, figDir, articleDataFile
from project_definitions import store_data, load_data
from pprint import pprint
import html
from urllib.parse import unquote
from jellyfish import damerau_levenshtein_distance as edit_distance
```
## Sources
```
scieloSource = {
    'paths': [dataSourceDir / x for x in ("scielo.ris",)],
    'rispy_args': {},
    'col_rename': {},
    'transforms': [],
}
scopusSource = {
    'paths': [dataSourceDir / x for x in ("scopus.ris",)],
    'rispy_args': {},
    'col_rename': {},
    'transforms': [],
}
wosSource = {
    'paths': [
        dataSourceDir / x for x in "wos1001-1500.ciw wos1-500.ciw wos1501-1689.ciw wos501-1000.ciw".split()
    ],
    'rispy_args': {'implementation': 'wok'},
    'col_rename': {'publication_year': 'year', 'document_title': 'title'},
    'transforms': [],
}

def load_source(dataSource):
    dfs = []
    for path in dataSource['paths']:
        with path.open() as f:
            df = pd.DataFrame(rispy.load(f, **dataSource['rispy_args']))
        df['__source'] = [[path.name] for _ in range(len(df))]
        dfs.append(df)
    cdf = pd.concat(dfs, join='outer', ignore_index=True)
    cdf = cdf.rename(columns=dataSource['col_rename'])
    for trans in dataSource['transforms']:
        cdf = cdf.transform(trans)
    return cdf.sort_index(axis=1)

scieloData = load_source(scieloSource)
scopusData = load_source(scopusSource)
wosData = load_source(wosSource)
allDataList = [scieloData, scopusData, wosData]
allData = pd.concat(allDataList, join='outer', ignore_index=True)

# Keep only article data
article_data = allData.loc[allData["type_of_reference"].eq('JOUR') | allData["publication_type"].eq('J')]

# Normalize DOI
article_data.loc[:, 'doi'] = article_data['doi'].str.translate(
    str.maketrans(string.ascii_lowercase, string.ascii_uppercase)
)

# Remove spurious records
article_data = article_data.loc[article_data['url'].ne(
    "https://www.scopus.com/inward/record.uri?eid=2-s2.0-85052219975&partnerID=40&md5=7b54756675a6d510c9db069b49b634d6"
)]

# Correct faulty records
data_corrections = {
    'doi': {
        r'^(.*)/PDF$': r'\1',
    }
}
corrected_article_data = article_data.replace(data_corrections, regex=True)
article_data.compare(corrected_article_data)
article_data = corrected_article_data
article_data.describe()
def merge_series_keep_longest(sx):
    if sx.isna().all():
        return np.nan
    if sx.name == '__source':
        return sx.sum()
    if sx.name == 'doi':
        if len(sx.dropna().unique()) > 1:
            print('Warning, merging different DOIs:\n', sx)
        return list(sx.dropna().unique())  # Keep a list of all DOIs - must explode before using!
    return sx[sx.map(len, na_action='ignore').idxmax()]

def merge_records_keep_longest(dfx):
    return dfx.agg(merge_series_keep_longest)
# Merge data with same DOI
article_doi = article_data.groupby(article_data['doi'].values).agg(merge_records_keep_longest)
# Reassemble data with and without DOI
article_nodoi = article_data[~article_data.doi.isin(article_doi.index)]
article_data = pd.concat([article_doi, article_nodoi], ignore_index=True)
def remove_diacritics(input_str):
    nfkd_form = unicodedata.normalize('NFKD', input_str)
    return "".join([c for c in nfkd_form if not unicodedata.combining(c)])

def clean_titles(sx):
    return (
        sx
        .str.lower()
        .str.replace(r'[^\s\w]', ' ', regex=True)
        .str.replace(r'\s+', ' ', regex=True)
        .str.strip()
        # .map(remove_diacritics)  # no need as our corpus is in English
    )

class Match:
    """
    Index string values with similar strings under the same index, for use in a `groupby`.
    First normalizes titles. Then, for each value, returns the index of the first previously indexed value
    whose edit_distance is <= threshold, or a new index if none is found.
    """
    def __init__(self, df, threshold=0):
        self.df = df
        assert not df['title'].hasnans
        self.titles = clean_titles(self.df['title'])
        self.threshold = threshold
        self.match_index = {}

    def match(self, x):
        x = self.titles.loc[x]
        if x in self.match_index:
            return self.match_index[x]
        if self.threshold > 0:
            for m, idx in self.match_index.items():
                if edit_distance(x, m) <= self.threshold:
                    self.match_index[x] = idx
                    return self.match_index[x]
        self.match_index[x] = len(self.match_index)
        return self.match_index[x]
articles_g = article_data.groupby(Match(article_data, 5).match)
aa = articles_g.agg(list)[articles_g.size() > 1]
# Test alternative matchers
if False:
    articles_gx = article_data.groupby(Match(article_data, 15).match)
    bb = articles_gx.agg(list)[articles_gx.size() > 1]
    pprint([sorted(x) for x in (
        set(clean_titles(aa.explode('title')['title'])).difference(clean_titles(bb.explode('title')['title'])),
        set(clean_titles(bb.explode('title')['title'])).difference(clean_titles(aa.explode('title')['title'])),
    )])

def clean_name(name):
    return remove_diacritics(name.split(',')[0].split(' ')[-1].lower().replace(' ', '').replace('-', ''))
# Check that matching titles also have matching year
sel = aa['year'].map(lambda x: len(set(x)) > 1)
aa[sel]
# Check that matching titles also have matching author (impl: first author last name)
sel = aa['authors'].map(
    lambda merged_authors: set(
        tuple(  # last name of each author
            clean_name(author)
            for author in authors
        )
        for authors in merged_authors
        if not (isinstance(authors, float) and pd.isna(authors))  # skip NaNs
    )
).map(
    lambda merged_lastnames: sum(
        edit_distance(firstauthor, other_firstauthor)  # sum the edit distances
        for merged_firstauthor in list(zip(*merged_lastnames))[:1]  # first authors
        for i, firstauthor in enumerate(merged_firstauthor)
        for other_firstauthor in merged_firstauthor[i + 1:]  # distinct pairs
    )
) > 0
aa[sel].authors.to_dict()
article_data[['doi', 'title', 'authors']].describe()
article_data = articles_g.agg(merge_records_keep_longest)
article_data
# Store deduplicated data and check the stored version reproduces the data
store_data(article_data, articleDataFile)
assert article_data.equals(load_data(articleDataFile))
```
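The deduplication idea above (normalize titles, then group titles within a small edit distance) can be illustrated end to end on toy titles. The snippet below uses a tiny pure-Python Levenshtein distance as a stand-in for jellyfish's `damerau_levenshtein_distance`, and the titles are invented:

```
import re

def edit_distance(a, b):
    """Plain Levenshtein distance (stand-in for jellyfish's Damerau-Levenshtein)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def clean_title(t):
    t = re.sub(r'[^\s\w]', ' ', t.lower())  # same normalization as clean_titles above
    return re.sub(r'\s+', ' ', t).strip()

titles = ["Open-Source Syringe Pump", "open source syringe  pump!", "A 3D-printed microscope"]
cleaned = [clean_title(t) for t in titles]
print(edit_distance(cleaned[0], cleaned[1]))  # 0 -> the first two titles would be grouped together
```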
# Load article data (if already stored from the code above)
```
article_data = load_data(articleDataFile)
```
## PLOS Collection sources
```
plosData = pd.read_csv('https://raw.githubusercontent.com/amchagas/open-source-toolkit/main/plos-items.csv')
sel_article = plosData[
    "Content Type (URL items only - Research Article, Web Article, Commentary, Video, Poster)"
].eq("Research Article")
sel_hardware = plosData["Hardware or software"].eq("hardware")
plosData = plosData.loc[sel_article & sel_hardware]
```
### DOIs
```
assert plosData["URI (DOI or URL)"].notna().all()
# Normalize DOI
plosData["URI (DOI or URL)"] = plosData["URI (DOI or URL)"].str.translate(
    str.maketrans(string.ascii_lowercase, string.ascii_uppercase)
)
# Get the doi and doi-like, fixing doi-like containing extra stuff
re_doi = r"(10\.[1-9]\d{3,}(?:\.\d+)*/.+)"
re_http_doi_fix = r"HTTPS?://.*/" + re_doi + r"(?:/|/FULL|/ABSTRACT|#\w+)$"
plosData_doi = plosData['URI (DOI or URL)'].str.extract(re_doi)[0]
plosData_doi_http_doi_fixed = (
    plosData['URI (DOI or URL)']
    .str.extract(re_http_doi_fix)[0]
    .map(unquote, na_action='ignore')
)
plosData_doi.loc[plosData_doi_http_doi_fixed.notna()].compare(plosData_doi_http_doi_fixed.dropna())
assert 'doi' not in plosData
plosData['doi'] = plosData_doi_http_doi_fixed.where(plosData_doi_http_doi_fixed.notna(), plosData_doi)
plosData['doi'].dropna()
print(
    len(set(plosData['doi'].dropna()).intersection(article_data['doi'].explode())),
    len(set(plosData['doi'].dropna()).symmetric_difference(article_data['doi'].explode())),
)
```
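The two patterns above can be exercised on synthetic, already upper-cased URI strings (the DOI and URL below are made up for illustration):

```
import re

re_doi = r"(10\.[1-9]\d{3,}(?:\.\d+)*/.+)"
re_http_doi_fix = r"HTTPS?://.*/" + re_doi + r"(?:/|/FULL|/ABSTRACT|#\w+)$"

bare = "10.1234/EXAMPLE.5678"                                  # made-up DOI
wrapped = "HTTPS://DOI.EXAMPLE.ORG/10.1234/EXAMPLE.5678/FULL"  # made-up DOI-like URL

m1 = re.search(re_doi, bare)
m2 = re.search(re_http_doi_fix, wrapped)
print(m1.group(1))  # 10.1234/EXAMPLE.5678
print(m2.group(1))  # 10.1234/EXAMPLE.5678  (the trailing /FULL is stripped)
```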
### Titles
```
plosData['Title (URL items only)'] = plosData['Title (URL items only)'].str.strip()
# How many from the collection have their title in article_data
plosData['Title (URL items only)'].pipe(clean_titles).map(
    lambda x: article_data['title'].pipe(clean_titles).str.contains(rf'(?i){x}', regex=True).any()
).sum()

# How many from the collection have their title in article_data if we require they have DOIs
plosData['Title (URL items only)'].loc[plosData['doi'].notna()].pipe(clean_titles).map(
    lambda x: article_data.loc[article_data['doi'].notna()].title.pipe(clean_titles).str.contains(rf'(?i){x}', regex=True).any()
).sum()
# Give me 10 from the collection having DOIs
z = plosData['doi'].dropna().sample(10)
print(z)
# Get their titles if their titles are not in article_data
for i, title in plosData.loc[z.index]['Title (URL items only)'].pipe(clean_titles).items():
    if not clean_titles(article_data['title']).str.contains(rf'(?i){title}', regex=True).any():
        print(i, title)
# Selector for DOIs only in the collection
sel_new_doi = ~plosData["doi"].dropna().isin(article_data['doi'].explode().values)
sel_new_doi.sum()
# Selector for Titles only in the collection
sel_new_title = ~clean_titles(plosData["Title (URL items only)"]).isin(clean_titles(article_data['title']))
sel_new_title.sum()
# Same title, different DOIs
x = plosData[["doi", "Title (URL items only)"]].loc[sel_new_doi & ~sel_new_title]
for i, y in x["Title (URL items only)"].str.lower().items():
    print(
        y,
        article_data["doi"].loc[
            article_data['title'].str.lower().eq(y)
        ].squeeze(),
        plosData.loc[i, 'doi'],
    )
# Same DOI, different titles
x = plosData.loc[~sel_new_doi & sel_new_title, 'doi']
for y in x:
    print(
        plosData.loc[plosData['doi'].eq(y), "Title (URL items only)"].squeeze(),
        article_data.loc[article_data['doi'].explode().eq(y), 'title'].squeeze(),
    )
```
# All done, now just mess around
```
article_data.shape
article_data.issn.str.replace(r'[^\d]', '', regex=True).value_counts()
article_data.issn.str.replace(r'[^\d]', '', regex=True).value_counts().reset_index().plot(loglog=True)
article_data.groupby('year').size().plot.bar()
```
## Play with our 10 article sample
```
dois = pd.Series("""
10.1371/journal.pone.0187219
10.1371/journal.pone.0059840
10.1371/journal.pone.0030837
10.1371/journal.pone.0118545
10.1371/journal.pone.0206678
10.1371/journal.pone.0143547
10.1371/journal.pone.0220751
10.1371/journal.pone.0107216
10.1371/journal.pone.0226761
10.1371/journal.pone.0193744
""".split()).str.translate(
    str.maketrans(string.ascii_lowercase, string.ascii_uppercase)
)
dois[dois.isin(article_data.doi.explode())]
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Reducer/using_weights.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Reducer/using_weights.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Reducer/using_weights.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except ImportError:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Load an input Landsat 8 image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_186059_20130419')
# Compute cloud score and reverse it such that the highest
# weight (100) is for the least cloudy pixels.
cloudWeight = ee.Image(100).subtract(
    ee.Algorithms.Landsat.simpleCloudScore(image).select(['cloud']))
# Compute NDVI and add the cloud weight band.
ndvi = image.normalizedDifference(['B5', 'B4']).addBands(cloudWeight)
# Define an arbitrary region in a cloudy area.
region = ee.Geometry.Rectangle(9.9069, 0.5981, 10.5, 0.9757)
# Use a mean reducer.
reducer = ee.Reducer.mean()
# Compute the unweighted mean.
unweighted = ndvi.select(['nd']).reduceRegion(reducer, region, 30)
# Compute the mean weighted by cloudiness.
weighted = ndvi.reduceRegion(reducer.splitWeights(), region, 30)
# Observe the difference as a result of weighting by cloudiness.
print('unweighted:', unweighted.getInfo())
print('weighted:', weighted.getInfo())
```
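Conceptually, `splitWeights()` makes each pixel's NDVI contribute in proportion to its weight band (here, 100 minus the cloud score). A plain numpy illustration with made-up pixel values shows why the two results differ:

```
import numpy as np

ndvi = np.array([0.8, 0.7, 0.1, 0.0])  # last two pixels are cloud-contaminated
weights = np.array([100, 90, 5, 0])    # 100 - cloud score, per pixel

unweighted = ndvi.mean()                      # every pixel counts equally
weighted = np.average(ndvi, weights=weights)  # cloudy pixels barely count
print(round(unweighted, 3), round(weighted, 3))  # 0.4 0.736
```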
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
import pandas as pd
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("../")
from src.utility.Summary import Summary
side_info_params = {
    "CollectiveMF_Item": {"use_user_info": False, "use_item_info": True},
    "CollectiveMF_User": {"use_user_info": True, "use_item_info": False},
    "CollectiveMF_Both": {"use_user_info": True, "use_item_info": True},
    "CollectiveMF_No": {"use_user_info": False, "use_item_info": False},
}

model_with_params = []
for db_path in ["default_server.db", "new_server.db"]:
    summary = Summary(db_path="sqlite:///results/" + db_path)
    for model in list(side_info_params.keys()) + ["surprise_SVD", "surprise_Baseline", "FMItem", "FMNone", "BPR", "PureRandom"]:
        opt = summary.get_optimal_params(dataset_name="user_10_item_1_exp", model_name=model, metric="ndcg@10")
        if not opt:
            continue
        params_dict = dict(eval(opt))
        model_with_params.append((model, params_dict))

df_list = []
for db_path in ["default_server.db", "new_server.db"]:
    summary = Summary(db_path="sqlite:///results/" + db_path)
    for model, opt_param in model_with_params:
        for metric in ["ndcg@{}".format(k + 1) for k in range(10)]:
            str_param = str(sorted(opt_param.items(), key=lambda x: x[0]))
            df = summary.get_result_for_params(dataset_name="user_10_item_1_exp", model_name=model,
                                               hyper=str_param, metric=metric, verbose=False)
            if len(df) == 0:
                continue
            df_list.append(df)

topk_results = pd.concat(df_list).reset_index(drop=True)
topk_results["k"] = topk_results.metric.apply(lambda x: int(x.split("@")[1]))
topk_results
topk_results.to_csv("./results/topk_results.csv")
```
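For reference, the ndcg@k metric being collected above can be computed as follows. This is a minimal sketch with binary relevance; the project's Summary class may define the metric slightly differently:

```
import numpy as np

def ndcg_at_k(relevance, k):
    """NDCG@k for a ranked list of binary relevance labels (best-first ideal ranking)."""
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))  # 1 / log2(rank + 1)
    dcg = (rel * discounts).sum()
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = (ideal * discounts[:ideal.size]).sum()
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([1, 0, 1, 1], k=3))  # relevant items at ranks 1 and 3 of the top 3
```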
# Find another group of best params using rmse
```
regression_models = list(side_info_params.keys()) + ["surprise_SVD", "surprise_Baseline", "FMItem", "FMNone"]
rmse_model_with_params = []
for db_path in ["default_server.db", "new_server.db"]:
    summary = Summary(db_path="sqlite:///results/" + db_path)
    for model in regression_models:
        opt = summary.get_optimal_params(dataset_name="user_10_item_1_exp", model_name=model, metric="rmse")
        if not opt:
            continue
        params_dict = dict(eval(opt))
        rmse_model_with_params.append((model, params_dict))

rmse_list = []
for db_path in ["default_server.db", "new_server.db"]:
    summary = Summary(db_path="sqlite:///results/" + db_path)
    for model, opt_param in model_with_params:
        if model in regression_models:
            str_param = str(sorted(opt_param.items(), key=lambda x: x[0]))
            df = summary.get_result_for_params(dataset_name="user_10_item_1_exp", model_name=model,
                                               hyper=str_param, metric="rmse", verbose=False)
            if len(df) == 0:
                continue
            df["is_rmse_opt"] = False
            rmse_list.append(df)

for db_path in ["default_server.db", "new_server.db"]:
    summary = Summary(db_path="sqlite:///results/" + db_path)
    for model, opt_param in rmse_model_with_params:
        str_param = str(sorted(opt_param.items(), key=lambda x: x[0]))
        df = summary.get_result_for_params(dataset_name="user_10_item_1_exp", model_name=model,
                                           hyper=str_param, metric="rmse", verbose=False)
        if len(df) == 0:
            continue
        df["is_rmse_opt"] = True
        rmse_list.append(df)

rmse_results = pd.concat(rmse_list).reset_index(drop=True)
rmse_results
rmse_results.to_csv("./results/rmse_results.csv")
```
created by Ignacio Oguiza - email: timeseriesAI@gmail.com
## How to efficiently work with (very large) Numpy Arrays? 👷♀️
Sometimes we need to work with some very large numpy arrays that don't fit in memory. I'd like to share with you a way that works well for me.
## Import libraries 📚
```
# ## NOTE: UNCOMMENT AND RUN THIS CELL IF YOU NEED TO INSTALL/ UPGRADE TSAI
# stable = False # True: stable version in pip, False: latest version from github
# if stable:
# !pip install tsai -U >> /dev/null
# else:
# !pip install git+https://github.com/timeseriesAI/tsai.git -U >> /dev/null
# ## NOTE: REMEMBER TO RESTART (NOT RECONNECT/ RESET) THE KERNEL/ RUNTIME ONCE THE INSTALLATION IS FINISHED
from tsai.all import *
import numba
computer_setup(numba)
```
## Introduction 🤝
I normally work with time series data. I decided to use numpy arrays to store my data since they can easily handle multiple dimensions and are very efficient.
But sometimes datasets are really big (many GBs) and don't fit in memory. So I started looking around and found something that works very well: [**np.memmap**](https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html). Conceptually they work as arrays on disk, and that's how I often call them.
np.memmap creates a map to a numpy array you have previously saved on disk, so that you can efficiently access small segments of those (small or large) files without reading the entire file into memory. And that's exactly what we need in deep learning: the ability to quickly create a batch in memory without reading the entire file (which is stored on disk).
The best analogy I've found are image files. You may have a very large dataset on disk (that far exceeds your RAM). In order to create your DL datasets, what you pass are the paths to each individual file, so that you can then load a few images and create a batch on demand.
You can view np.memmap as the path collection that can be used to load numpy data on demand when you need to create a batch.
So let's see how you can work with larger than RAM arrays on disk.
On my laptop I have only 8GB of RAM.
I will try to demonstrate how you can handle a 10 GB numpy array dataset in an efficient way.
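Before creating the big file, the basic mechanics can be seen on a tiny array written to a temporary directory (nothing large touches the disk here):

```
import os
import tempfile
import numpy as np

fn = os.path.join(tempfile.mkdtemp(), 'tiny.npy')
np.save(fn, np.arange(12, dtype=np.float32).reshape(3, 4))

m = np.load(fn, mmap_mode='r')   # maps the file; nothing is read yet
kind = type(m).__name__          # 'memmap'
row = np.asarray(m[1]).tolist()  # only this row is actually read from disk
del m
os.remove(fn)
print(kind, row)  # memmap [4.0, 5.0, 6.0, 7.0]
```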
## Create and save a larger-than-memory array 🥴
I will now create a large numpy array that doesn't fit in memory.
Since I don't have enough RAM, I'll create an empty array on disk, and then load data in chunks that fit in memory.
⚠️ If you want to to experiment with large datasets, you may uncomment and run this code. **It will create a ~10GB file on your disk** (we'll delete it at the end of this notebook).
In my laptop it took me around **2 mins to create the data.**
```
# path = Path('./data')
# X = create_empty_array((100_000, 50, 512), fname='X_on_disk', path=path, mode='r+')
# chunksize = 10_000
# pbar = progress_bar(range(math.ceil(len(X) / chunksize)))
# start = 0
# for i in pbar:
# end = start + chunksize
# X[start:end] = np.random.rand(chunksize, X.shape[-2], X.shape[-1])
# start = end
# # I will create a smaller array. Sinc this fits in memory, I don't need to use a memmap
# y_fn = path/'y_on_disk.npy'
# y = np.random.randint(0, 10, X.shape[0])
# labels = np.array(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'])
# np.save(y_fn, labels[y])
# del X, y
```
Ok. So let's check the size of these files on disk.
```
print(f'X array: {os.path.getsize("./data/X_on_disk.npy"):12} bytes ({bytes2GB(os.path.getsize("./data/X_on_disk.npy")):3.2f} GB)')
print(f'y array: {os.path.getsize("./data/y_on_disk.npy"):12} bytes ({bytes2GB(os.path.getsize("./data/y_on_disk.npy")):3.2f} GB)')
```
## Load an array on disk (np.memmap) 🧠
Remember I only have an 8 GB RAM on this laptop, so I couldn't load these datasets in memory.
☣️ Actually, I once accidentally loaded the "X_on_disk.npy" file, and my laptop crashed so I had to reboot it!
So let's now load the data as arrays on disk (np.memmap). The way to do it is super simple and very efficient: you just do it as you would with a normal array, but add an mmap_mode argument.
There are 4 modes:
- ‘r’ Open existing file for reading only.
- ‘r+’ Open existing file for reading and writing.
- ‘w+’ Create or overwrite existing file for reading and writing.
- ‘c’ Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only.
I normally use mode 'c' since I want to be able to make changes to the data in memory (transforms, for example) without affecting the data on disk (the same approach as with image data). This is the same thing you do with image files on disk: they are just read and then modified in memory, without changing the file on disk.
But if you also want to be able to modify data on disk, you can load the array with mmap_mode='r+'.
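A quick check of the copy-on-write behaviour of mode 'c': in-memory edits are visible through the map but never reach the file (a small temporary file, so it's safe to run):

```
import os
import tempfile
import numpy as np

fn = os.path.join(tempfile.mkdtemp(), 'cow.npy')
np.save(fn, np.zeros(4))

mc = np.load(fn, mmap_mode='c')
mc[0] = 99.0                     # allowed: modifies only the in-memory copy
seen = float(mc[0])              # 99.0 through the map
on_disk = float(np.load(fn)[0])  # still 0.0; the file was not touched
del mc
os.remove(fn)
print(seen, on_disk)  # 99.0 0.0
```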
```
X_on_disk = np.load('./data/X_on_disk.npy', mmap_mode='c')
y_on_disk = np.load('./data/y_on_disk.npy', mmap_mode='c')
```
**Fast load**: it only takes a few ms to "load" a memory map to a 10 GB array on disk.
In fact, the only thing that is loaded is a map to the array stored on disk. That's why it's so fast.
## Arrays on disk: main features 📀
### Very limited RAM usage
```
print(X_on_disk.shape, y_on_disk.shape)
print(f'X array on disk: {sys.getsizeof(X_on_disk):12} bytes ({bytes2GB(sys.getsizeof(X_on_disk)):3.3f} GB)')
print(f'y array on disk: {sys.getsizeof(y_on_disk):12} bytes ({bytes2GB(sys.getsizeof(y_on_disk)):3.3f} GB)')
```
**152 bytes of RAM for a 10GB array**. This is the great benefit of arrays on disk.
Arrays on disk barely use any RAM until they are sliced and an element is converted into a np.array or a tensor.
This is equivalent to the size of file paths in images (very limited) compared to the files themselves (actual images).
### Types
np.memmap is a subclass of np.ndarray
```
isinstance(X_on_disk, np.ndarray)
type(X_on_disk)
```
### Operations
With np.memmap you can perform the same operations you would with a normal numpy array.
The most common operations you will perform in deep learning are:
- slicing
- calculating stats: mean and std
- scaling (using normalize or standardize)
- transformation into a tensor
Once you get the array-on-disk slice, you'll convert it into a tensor, move it to a GPU and perform operations there.
⚠️ You need to be careful though not to convert the entire np.memmap into an array/tensor if it's larger than your RAM. This will crash your computer unless you have enough RAM, so you would have to reboot!
**DON'T DO THIS: torch.from_numpy(X) or np.array(X)** unless you have enough RAM.
To avoid issues during testing, I created a smaller array on disk (one that I can store in memory). When I want to test something, I test it with that array first. It's important to always verify that the output type of your operations is np.memmap, which means the data is still on disk.
#### Slicing
To ensure you don't bring the entire array into memory (which may crash your computer), you can always work with slices of data, which is, by the way, how fastai works.
If you use mode 'c' you can grab a sample and make changes to it, but this won't modify data on disk.
```
x = X_on_disk[0]
x
```
It's important to note that **when we perform a math operation on a np.memmap (add, subtract, ...), the output is a np.array, and no longer a np.memmap.**
⚠️ Remember you don't want to run this type of operations with a memmap larger than your RAM!! That's why I do it with a slice.
```
x = X_on_disk[0] + 1
x
x = torch.from_numpy(X_on_disk[0])
x2 = x + 1
x2
```
As you can see, this doesn't affect the original np.memmap
```
X_on_disk[0]
```
You can slice an array on disk by any axis, and it'll return a memmap. Slicing by any axis is very fast.
```
X_on_disk[0]
X_on_disk[:, 0]
```
However, bear in mind that if you use multiple indices, the output will be a regular numpy array. This is important as it will use more RAM.
```
X_on_disk[[0,1]]
```
Unless you use a slice with consecutive indices like this:
```
X_on_disk[:2]
```
This continues to be a memmap
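These indexing rules are easy to verify directly: basic slices stay memmaps, while fancy indexing materializes a regular array in RAM (again on a tiny temporary file):

```
import os
import tempfile
import numpy as np

fn = os.path.join(tempfile.mkdtemp(), 'idx.npy')
np.save(fn, np.zeros((10, 3)))
m = np.load(fn, mmap_mode='c')

basic = type(m[:2]).__name__      # 'memmap'  : a view, data stays on disk
fancy = type(m[[0, 1]]).__name__  # 'ndarray' : a copy, brought into RAM
del m
os.remove(fn)
print(basic, fancy)  # memmap ndarray
```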
There's a trick we can use to avoid this, making use of the excellent new L class in fastai: **itemify** the np.memmap/s.
```
def itemify(*x): return L(*x).zip()
```
To itemify one or several np.memmap/s is very fast. Let's see how long it takes with a 10 GB array.
```
X_on_disk_as_items = itemify(X_on_disk)
```
About 5 seconds to itemify the 100,000 records on disk! Bear in mind you only need to perform this once!
So now, you can select multiple items at the same time, and they will all still be on disk:
```
X_on_disk_as_items[0,1]
```
You can also itemify several items at once: X and y for example. When you slice the list, you'll get tuples.
```
Xy_on_disk_as_items = itemify(X_on_disk, y_on_disk)
Xy_on_disk_as_items[0, 1]
```
Slicing is very fast, even if there are 100,000 samples.
```
# axis 0
%timeit X_on_disk[0]
# axis 1
%timeit X_on_disk[:, 0]
# axis 2
%timeit X_on_disk[..., 0]
# aixs 0,1
%timeit X_on_disk[0, 0]
```
To compare how fast you can slice a np.memmap, let's create a smaller array that fits in memory (X_in_memory_small). This is 10 times smaller than the one on disk.
```
X_in_memory_small = np.random.rand(10000, 50, 512)
%timeit X_in_memory_small[0]
```
Let's create the same array on disk. It's super simple:
```
np.save('./data/X_on_disk_small.npy', X_in_memory_small)
X_on_disk_small = np.load('./data/X_on_disk_small.npy', mmap_mode='c')
%timeit X_on_disk_small[0]
```
Slicing the array on disk is approx. 10x slower than slicing the same array in memory, although it's still pretty fast.
However, if we use the itemified version, it's much faster:
```
%timeit X_on_disk_as_items[0]
```
This is much better! So now you can access 1 or multiple items on disk with pretty good performance.
#### Calculating stats: mean and std
Another benefit of using arrays on disk is that you can calculate the mean and std deviation of the entire dataset.
It takes a considerable time since the array is very big (10GB), but it's feasible:
- mean (0.4999966): 1 min 45 s
- std (0.2886839): 11 min 43 s
in my laptop.
If you need them, you could calculate these stats once, and store the results (similar to ImageNet stats).
However, you usually need to calculate these metrics for labeled (train) datasets, which tend to be smaller.
```
# X_mean = X_on_disk.mean()
# X_mean
# X_std = X_on_disk.std()
# X_std
```
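If the full-array .mean()/.std() calls are too slow, the same stats can be accumulated chunk by chunk, which keeps RAM usage bounded and works on a np.memmap as well as on a regular array (a sketch using the identity Var = E[x^2] - E[x]^2):

```
import numpy as np

def chunked_mean_std(x, chunksize=10_000):
    """Single pass over `x` in chunks; accumulates count, sum and sum of squares."""
    n = s = s2 = 0.0
    for start in range(0, len(x), chunksize):
        chunk = np.asarray(x[start:start + chunksize], dtype=np.float64)
        n += chunk.size
        s += chunk.sum()
        s2 += np.square(chunk).sum()
    mean = s / n
    return mean, np.sqrt(s2 / n - mean ** 2)

x = np.random.rand(100_000)
mean, std = chunked_mean_std(x)
print(abs(mean - x.mean()) < 1e-8, abs(std - x.std()) < 1e-8)  # True True
```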
#### Conversion into a tensor
Conversion from an array on disk slice into a tensor is also very fast:
```
torch.from_numpy(X_on_disk[0])
X_on_disk_small_0 = X_on_disk_small[0]
X_in_memory_small_0 = X_in_memory_small[0]
%timeit torch.from_numpy(X_on_disk_small_0)
%timeit torch.from_numpy(X_in_memory_small_0)
```
So conversion to a tensor takes the same time for a np.memmap slice as for a np.array in memory.
#### Combined operations: slicing plus conversion to tensor
Let's now check the performance of the combined process: slicing plus conversion to a tensor. Based on what we've seen, there are 3 options:
- slice a np.array in memory + conversion to tensor
- slice a np.memmap on disk + conversion to tensor
- slice an itemified np.memmap + conversion to tensor
```
%timeit torch.from_numpy(X_in_memory_small[0])
%timeit torch.from_numpy(X_on_disk_small[0])
X_on_disk_small_as_items = itemify(X_on_disk_small)
%timeit torch.from_numpy(X_on_disk_small_as_items[0][0])
```
So this last method is **almost as fast as having the array in memory**!! This is an excellent outcome, since slicing arrays in memory is a highly optimized operation.
And we have the benefit of having access to very large datasets if needed.
## Remove the arrays on disk
Don't forget to remove the arrays you have created on disk.
```
os.remove('./data/X_on_disk.npy')
os.remove('./data/X_on_disk_small.npy')
os.remove('./data/y_on_disk.npy')
```
## Summary ✅
We now have a very efficient way to work with very large numpy arrays.
The process is very simple:
- create and save the array on disk (as described before)
- load it with `mmap_mode='c'` if you want to be able to modify data in memory but not on disk, or `mmap_mode='r+'` if you want to modify data both in memory and on disk.
So my recommendation would be:
- use numpy arrays in memory when possible (if your data fits in memory)
- use numpy memmap (arrays on disk) when data doesn't fit. You will still have a great performance.
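A minimal end-to-end sketch of that workflow (the array shape and file name here are arbitrary):

```python
import os
import numpy as np

# 1. Create and save the array on disk.
X = np.random.rand(100, 10)
np.save('X_demo.npy', X)

# 2. Load it as a memmap. 'c' is copy-on-write: changes stay in memory;
#    'r+' would write changes back to disk as well.
X_disk = np.load('X_demo.npy', mmap_mode='c')

# 3. Slice it exactly like a regular array.
row = X_disk[0]
assert np.array_equal(row, X[0])

# 4. Clean up when done.
del X_disk
os.remove('X_demo.npy')
```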
| github_jupyter |
```
%matplotlib inline
```
# Biclustering documents with the Spectral Co-clustering algorithm
This example demonstrates the Spectral Co-clustering algorithm on the
twenty newsgroups dataset. The 'comp.os.ms-windows.misc' category is
excluded because it contains many posts containing nothing but data.
The TF-IDF vectorized posts form a word frequency matrix, which is
then biclustered using Dhillon's Spectral Co-Clustering algorithm. The
resulting document-word biclusters indicate subsets of words used more
often in those subsets of documents.
For a few of the best biclusters, their most common document categories
and their ten most important words are printed. The best biclusters are
determined by their normalized cut. The best words are determined by
comparing their sums inside and outside the bicluster.
For comparison, the documents are also clustered using
MiniBatchKMeans. The document clusters derived from the biclusters
achieve a better V-measure than clusters found by MiniBatchKMeans.
```
from __future__ import print_function
from collections import defaultdict
import operator
from time import time
import numpy as np
from sklearn.cluster.bicluster import SpectralCoclustering
from sklearn.cluster import MiniBatchKMeans
from sklearn.externals.six import iteritems
from sklearn.datasets.twenty_newsgroups import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.cluster import v_measure_score
print(__doc__)
def number_normalizer(tokens):
""" Map all numeric tokens to a placeholder.
For many applications, tokens that begin with a number are not directly
useful, but the fact that such a token exists can be relevant. By applying
this form of dimensionality reduction, some methods may perform better.
"""
return ("#NUMBER" if token[0].isdigit() else token for token in tokens)
class NumberNormalizingVectorizer(TfidfVectorizer):
def build_tokenizer(self):
tokenize = super(NumberNormalizingVectorizer, self).build_tokenizer()
return lambda doc: list(number_normalizer(tokenize(doc)))
# exclude 'comp.os.ms-windows.misc'
categories = ['alt.atheism', 'comp.graphics',
'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware',
'comp.windows.x', 'misc.forsale', 'rec.autos',
'rec.motorcycles', 'rec.sport.baseball',
'rec.sport.hockey', 'sci.crypt', 'sci.electronics',
'sci.med', 'sci.space', 'soc.religion.christian',
'talk.politics.guns', 'talk.politics.mideast',
'talk.politics.misc', 'talk.religion.misc']
newsgroups = fetch_20newsgroups(categories=categories)
y_true = newsgroups.target
vectorizer = NumberNormalizingVectorizer(stop_words='english', min_df=5)
cocluster = SpectralCoclustering(n_clusters=len(categories),
svd_method='arpack', random_state=0)
kmeans = MiniBatchKMeans(n_clusters=len(categories), batch_size=20000,
random_state=0)
print("Vectorizing...")
X = vectorizer.fit_transform(newsgroups.data)
print("Coclustering...")
start_time = time()
cocluster.fit(X)
y_cocluster = cocluster.row_labels_
print("Done in {:.2f}s. V-measure: {:.4f}".format(
time() - start_time,
v_measure_score(y_cocluster, y_true)))
print("MiniBatchKMeans...")
start_time = time()
y_kmeans = kmeans.fit_predict(X)
print("Done in {:.2f}s. V-measure: {:.4f}".format(
time() - start_time,
v_measure_score(y_kmeans, y_true)))
feature_names = vectorizer.get_feature_names()
document_names = list(newsgroups.target_names[i] for i in newsgroups.target)
def bicluster_ncut(i):
rows, cols = cocluster.get_indices(i)
if not (np.any(rows) and np.any(cols)):
import sys
return sys.float_info.max
row_complement = np.nonzero(np.logical_not(cocluster.rows_[i]))[0]
col_complement = np.nonzero(np.logical_not(cocluster.columns_[i]))[0]
# Note: the following is identical to X[rows[:, np.newaxis],
# cols].sum() but much faster in scipy <= 0.16
weight = X[rows][:, cols].sum()
cut = (X[row_complement][:, cols].sum() +
X[rows][:, col_complement].sum())
return cut / weight
def most_common(d):
"""Items of a defaultdict(int) with the highest values.
Like Counter.most_common in Python >=2.7.
"""
return sorted(iteritems(d), key=operator.itemgetter(1), reverse=True)
bicluster_ncuts = list(bicluster_ncut(i)
for i in range(len(newsgroups.target_names)))
best_idx = np.argsort(bicluster_ncuts)[:5]
print()
print("Best biclusters:")
print("----------------")
for idx, cluster in enumerate(best_idx):
n_rows, n_cols = cocluster.get_shape(cluster)
cluster_docs, cluster_words = cocluster.get_indices(cluster)
if not len(cluster_docs) or not len(cluster_words):
continue
# categories
counter = defaultdict(int)
for i in cluster_docs:
counter[document_names[i]] += 1
cat_string = ", ".join("{:.0f}% {}".format(float(c) / n_rows * 100, name)
for name, c in most_common(counter)[:3])
# words
out_of_cluster_docs = cocluster.row_labels_ != cluster
out_of_cluster_docs = np.where(out_of_cluster_docs)[0]
word_col = X[:, cluster_words]
word_scores = np.array(word_col[cluster_docs, :].sum(axis=0) -
word_col[out_of_cluster_docs, :].sum(axis=0))
word_scores = word_scores.ravel()
important_words = list(feature_names[cluster_words[i]]
for i in word_scores.argsort()[:-11:-1])
print("bicluster {} : {} documents, {} words".format(
idx, n_rows, n_cols))
print("categories : {}".format(cat_string))
print("words : {}\n".format(', '.join(important_words)))
```
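As a quick illustration of what the number-normalization step in the vectorizer above does (the tokens here are made up):

```python
def number_normalizer(tokens):
    # Map every token that starts with a digit to a single placeholder.
    return ("#NUMBER" if token[0].isdigit() else token for token in tokens)

print(list(number_normalizer(['windows', '3.1', 'released', '1992'])))
# ['windows', '#NUMBER', 'released', '#NUMBER']
```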
| github_jupyter |
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
### Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as its input. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
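Since the hint above leans on activation derivatives, here is a small sketch (independent of the class below) checking the sigmoid's analytic derivative, $\sigma'(x) = \sigma(x)(1-\sigma(x))$, against a finite difference:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Analytic derivative of the sigmoid, used in the hidden-layer gradient.
def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1 - s)

# Check against a central finite difference at a few points.
x = np.array([-2.0, 0.0, 1.5])
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.allclose(numeric, sigmoid_prime(x), atol=1e-6))  # True
```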
```
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs
# TODO: Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
hidden_grad = hidden_outputs * (1 - hidden_outputs)
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
self.weights_input_to_hidden += self.lr * np.dot((hidden_errors * hidden_grad), inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
```
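A quick sanity check of the mean-squared-error helper defined above (redefined here so the snippet is self-contained, with made-up values):

```python
import numpy as np

def MSE(y, Y):
    return np.mean((y - Y)**2)

print(MSE(np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # 0.0
print(MSE(np.array([0.0, 0.0]), np.array([3.0, 4.0])))  # (9 + 16) / 2 = 12.5
```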
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
The more hidden nodes you have, the more capacity the model has to fit the training data. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
```
import sys
### Set the hyperparameters here ###
epochs = 500
learning_rate = 0.08
hidden_nodes = 25
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=0.5)
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(16,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#### Your answer below
It seems like the model's predictions are pretty close to the data. The spikes are sometimes lower or higher than the actual values, but not by much. The real problem starts around Dec 22, where the data has more spikes than the prediction, and where the model does predict spikes, it predicts lower numbers than the actual ones.
I'm not sure why that is. Maybe it's because late December has holidays and marks the start of winter, so ridership patterns there differ from the rest of the training data.
## Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
```
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
| github_jupyter |
```
from PoisDenoiser.nnLayers.functional import PoisProx
import numpy as np
import torch as th
from torch.autograd import Variable
from pydl.nnLayers.functional import functional
poisProx = PoisProx.apply
PoisProx()
def l2Prox(epsilon=1e-4,dtype='torch.DoubleTensor',GPU=False):
l2ProxF = functional.L2Prox.apply
x = th.randn(4,3,40,40).type(dtype)
x -= x.view(x.size(0),-1).min().view(-1,1,1,1)
x /= x.view(x.size(0),-1).max().view(-1,1,1,1)
x = x*255
z = th.randn(4,3,40,40).type(dtype)
z -= z.view(z.size(0),-1).min().view(-1,1,1,1)
z /= z.view(z.size(0),-1).max().view(-1,1,1,1)
z = z*255
alpha = th.Tensor(np.random.randint(0,3,(1,))).type(dtype)
stdn = th.Tensor(np.random.randint(5,20,(4,1))).type(dtype)
if GPU and th.cuda.is_available():
x = x.cuda()
z = z.cuda()
alpha = alpha.cuda()
stdn = stdn.cuda()
sz_x = x.size()
grad_output = th.randn_like(x)
x_numgrad = th.zeros_like(x).view(-1)
perturb = x_numgrad.clone()
cost = lambda input: cost_l2Prox(input,z,alpha,stdn,grad_output)
for k in range(0,x.numel()):
perturb[k] = epsilon
loss1 = cost(x.view(-1).add(perturb).view(sz_x))
loss2 = cost(x.view(-1).add(-perturb).view(sz_x))
x_numgrad[k] = (loss1-loss2)/(2*perturb[k])
perturb[k] = 0
x_numgrad = x_numgrad.view(sz_x)
sz_alpha = alpha.size()
alpha_numgrad = th.zeros_like(alpha).view(-1)
perturb = alpha_numgrad.clone()
cost = lambda input : cost_l2Prox(x,z,input,stdn,grad_output)
for k in range(0,alpha.numel()):
perturb[k] = epsilon
loss1 = cost(alpha.view(-1).add(perturb).view(sz_alpha))
loss2 = cost(alpha.view(-1).add(-perturb).view(sz_alpha))
alpha_numgrad[k] = (loss1-loss2)/(2*perturb[k])
perturb[k] = 0
alpha_numgrad = alpha_numgrad.view(sz_alpha)
x_var = Variable(x,requires_grad = True)
alpha_var = Variable(alpha,requires_grad = True)
y = l2ProxF(x_var,z,alpha_var,stdn)
y.backward(grad_output)
err_x = th.norm(x_var.grad.data.view(-1) - x_numgrad.view(-1))/\
th.norm(x_var.grad.data.view(-1) + x_numgrad.view(-1))
err_a = th.norm(alpha_var.grad.data.view(-1) - alpha_numgrad.view(-1))/\
th.norm(alpha_var.grad.data.view(-1) + alpha_numgrad.view(-1))
return err_x, x_var.grad.data, x_numgrad, err_a, alpha_var.grad.data, alpha_numgrad
res = l2Prox()
len(res)
```
| github_jupyter |
```
%matplotlib inline
```
# Regularized OT with generic solver
Illustrates the use of the generic solver for regularized OT with
user-designed regularization term. It uses Conditional gradient as in [6] and
generalized Conditional Gradient as proposed in [5][7].
[5] N. Courty; R. Flamary; D. Tuia; A. Rakotomamonjy, Optimal Transport for
Domain Adaptation, in IEEE Transactions on Pattern Analysis and Machine
Intelligence , vol.PP, no.99, pp.1-1.
[6] Ferradans, S., Papadakis, N., Peyré, G., & Aujol, J. F. (2014).
Regularized discrete optimal transport. SIAM Journal on Imaging Sciences,
7(3), 1853-1882.
[7] Rakotomamonjy, A., Flamary, R., & Courty, N. (2015). Generalized
conditional gradient: analysis of convergence and applications.
arXiv preprint arXiv:1510.06567.
```
import numpy as np
import matplotlib.pylab as pl
import ot
import ot.plot
```
Generate data
-------------
```
#%% parameters
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
a = ot.datasets.make_1D_gauss(n, m=20, s=5) # m= mean, s= std
b = ot.datasets.make_1D_gauss(n, m=60, s=10)
# loss matrix
M = ot.dist(x.reshape((n, 1)), x.reshape((n, 1)))
M /= M.max()
```
Solve EMD
---------
```
#%% EMD
G0 = ot.emd(a, b, M)
pl.figure(3, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, G0, 'OT matrix G0')
```
Solve EMD with Frobenius norm regularization
--------------------------------------------
```
#%% Example with Frobenius norm regularization
def f(G):
return 0.5 * np.sum(G**2)
def df(G):
return G
reg = 1e-1
Gl2 = ot.optim.cg(a, b, M, reg, f, df, verbose=True)
pl.figure(3)
ot.plot.plot1D_mat(a, b, Gl2, 'OT matrix Frob. reg')
```
Solve EMD with entropic regularization
--------------------------------------
```
#%% Example with entropic regularization
def f(G):
return np.sum(G * np.log(G))
def df(G):
return np.log(G) + 1.
reg = 1e-3
Ge = ot.optim.cg(a, b, M, reg, f, df, verbose=True)
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, Ge, 'OT matrix Entrop. reg')
```
Solve EMD with Frobenius norm + entropic regularization
-------------------------------------------------------
```
#%% Example with Frobenius norm + entropic regularization with gcg
def f(G):
return 0.5 * np.sum(G**2)
def df(G):
return G
reg1 = 1e-3
reg2 = 1e-1
Gel2 = ot.optim.gcg(a, b, M, reg1, reg2, f, df, verbose=True)
pl.figure(5, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, Gel2, 'OT entropic + matrix Frob. reg')
pl.show()
```
| github_jupyter |