# Rare Labels
## Labels that occur rarely
Categorical variables are those whose values are selected from a group of categories, also called labels. Different labels appear in a dataset with different frequencies: some categories are very common, whereas others appear in only a small number of observations.
For example, consider a dataset of loan applicants in which one of the variables is the "city" where the applicant lives. Cities like 'New York' may appear frequently in the data because New York has a huge population, whereas a small town like 'Leavenworth' (population under 2,000) will appear only on a few occasions. A borrower is simply far more likely to live in New York.
In fact, categorical variables often contain a few dominant labels that account for the majority of the observations, and a large number of labels that appear only rarely.
### Are Rare Labels in a categorical variable a problem?
Rare values can add a lot of information or none at all. For example, consider a stockholder meeting where each person can vote in proportion to their number of shares. One of the shareholders owns 50% of the stock, and the other 999 shareholders own the remaining 50%. The outcome of the vote is largely influenced by the shareholder who holds the majority of the stock. The remaining shareholders may have an impact collectively, but they have almost no impact individually.
The same occurs in real life datasets. The label that is over-represented in the dataset tends to dominate the outcome, and those that are under-represented may have no impact individually, but could have an impact if considered collectively.
More specifically,
- Rare values in categorical variables tend to cause over-fitting, particularly in tree-based methods.
- A large number of infrequent labels adds noise with little information, therefore causing over-fitting.
- Rare labels may be present in the training set but not in the test set, therefore causing over-fitting to the training set.
- Rare labels may appear in the test set but not in the training set. In that case, the machine learning model will not know how to handle them.
**Note:** Sometimes rare values are indeed important. For example, if we are building a model to predict fraudulent loan applications, which are by nature rare, then a rare value in a certain variable may be very predictive. This rare value could be telling us that the observation is most likely a fraudulent application, and therefore we would choose not to ignore it.
## In this Demo:
We will:
- Learn to identify rare labels in a dataset
- Understand how difficult it is to derive reliable information from them.
- Visualise the uneven distribution of rare labels between train and test sets
We will use the House Prices dataset.
- To download the dataset please visit the lecture **Datasets** in **Section 1** of the course.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# to separate data into train and test sets
from sklearn.model_selection import train_test_split
# let's load the dataset with the variables
# we need for this demo
# Variable definitions:
# Neighborhood: Physical locations within Ames city limits
# Exterior1st: Exterior covering on house
# Exterior2nd: Exterior covering on house (if more than one material)
use_cols = ['Neighborhood', 'Exterior1st', 'Exterior2nd', 'SalePrice']
data = pd.read_csv('../houseprice.csv', usecols=use_cols)
data.head()
# let's look at the different number of labels
# in each variable (cardinality)
# these are the loaded categorical variables
cat_cols = ['Neighborhood', 'Exterior1st', 'Exterior2nd']
for col in cat_cols:
    print('variable: ', col, ' number of labels: ', data[col].nunique())
print('total houses: ', len(data))
```
The variable 'Neighborhood' shows 25 different values, 'Exterior1st' shows 15 different categories, and 'Exterior2nd' shows 16 different categories.
```
# let's plot how frequently each label
# appears in the dataset
# in other words, the percentage of houses in the data
# with each label
total_houses = len(data)
# for each categorical variable
for col in cat_cols:

    # count the number of houses per category
    # and divide by total houses
    # aka percentage of houses per category
    temp_df = pd.Series(data[col].value_counts() / total_houses)

    # make plot with the above percentages
    fig = temp_df.sort_values(ascending=False).plot.bar()
    fig.set_xlabel(col)

    # add a line at 5% to flag the threshold for rare categories
    fig.axhline(y=0.05, color='red')
    fig.set_ylabel('Percentage of houses')
    plt.show()
```
For each of the categorical variables, some labels appear in more than 10% of the houses and many appear in less than 10% or even 5% of the houses. These are infrequent labels or **Rare Values** and could cause over-fitting.
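These infrequent labels can also be extracted programmatically rather than read off the plot. A minimal sketch with a toy Series (the helper name and the 5% threshold are illustrative, not part of the original demo):

```python
import pandas as pd

def find_rare_labels(series: pd.Series, threshold: float = 0.05) -> list:
    # fraction of rows taken by each label
    freqs = series.value_counts(normalize=True)
    # labels whose frequency falls below the threshold
    return freqs[freqs < threshold].index.tolist()

# toy example: 'C' appears in only 1 of 21 rows (~4.8%)
s = pd.Series(['A'] * 10 + ['B'] * 10 + ['C'])
print(find_rare_labels(s))  # ['C']
```

In the demo, the same helper could be applied to each column of `data[cat_cols]`.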
### How is the target, "SalePrice", related to these categories?
In the following cells, I want to understand the mean SalePrice for the group of houses that displays each category.
Keep reading; it will become clearer.
```
# the following function calculates:
# 1) the percentage of houses per category
# 2) the mean SalePrice per category
def calculate_mean_target_per_category(df, var):

    # total number of houses
    total_houses = len(df)

    # percentage of houses per category
    temp_df = pd.Series(df[var].value_counts() / total_houses).reset_index()
    temp_df.columns = [var, 'perc_houses']

    # add the mean SalePrice
    temp_df = temp_df.merge(df.groupby([var])['SalePrice'].mean().reset_index(),
                            on=var,
                            how='left')

    return temp_df
# now we use the function for the variable 'Neighborhood'
temp_df = calculate_mean_target_per_category(data, 'Neighborhood')
temp_df
```
The above dataframe contains the percentage of houses that show each one of the labels in Neighborhood, and the mean SalePrice for that group of houses. In other words, ~15% of houses are in NAmes, and their mean SalePrice is ~145,847.
```
# Now I create a function to plot of the
# category frequency and mean SalePrice.
# This will help us visualise the relationship between the
# target and the labels of the categorical variable
def plot_categories(df, var):

    fig, ax = plt.subplots(figsize=(8, 4))
    plt.xticks(df.index, df[var], rotation=90)

    ax2 = ax.twinx()
    ax.bar(df.index, df["perc_houses"], color='lightgrey')
    ax2.plot(df.index, df["SalePrice"], color='green', label='Mean SalePrice')
    ax.axhline(y=0.05, color='red')
    ax.set_ylabel('percentage of houses per category')
    ax.set_xlabel(var)
    ax2.set_ylabel('Average Sale Price per category')
    plt.show()
plot_categories(temp_df, 'Neighborhood')
```
Houses in the 'Neighborhood' of 'NridgHt' sell at a high price, whereas houses in 'Sawyer' tend to be cheaper.
Houses in the 'Neighborhood' of StoneBr have on average a high SalePrice, above 300k. However, StoneBr is present in less than 5% of the houses. Or in other words, less than 5% of the houses in the dataset are located in StoneBr.
Why is this important? Because if we do not have a lot of houses to learn from, we could be under or over-estimating the effect of StoneBr on the SalePrice.
In other words, how confident are we to generalise that most houses in StoneBr will sell for around 300k, when we only have a few houses to learn from?
```
# let's plot the remaining categorical variables
for col in cat_cols:

    # we plotted this variable already
    if col != 'Neighborhood':

        # re-using the functions I created
        temp_df = calculate_mean_target_per_category(data, col)
        plot_categories(temp_df, col)
```
Let's look at the variable Exterior2nd: most of its categories are present in less than 5% of houses. In addition, "SalePrice" varies a lot across those rare categories: the mean value goes up and down over the infrequent categories, and in fact it looks quite noisy. These rare labels could indeed be very predictive, or they could be introducing noise rather than information. Because the labels are under-represented, we can't be sure whether they have a true impact on the house price; we could be under- or over-estimating their effect, since we have information for only a few houses.
**Note:** This plot would bring more value if we also plotted the dispersion of SalePrice. It would give us an idea of how much the target varies within each label. Why don't you go ahead and add the standard deviation to the plot?
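As a starting point for that exercise, here is one hedged sketch of computing the mean and standard deviation of the target per category (the helper name and the toy data are illustrative; in the demo you would pass the house-prices dataframe instead):

```python
import pandas as pd

def mean_and_std_target_per_category(df, var, target='SalePrice'):
    # mean and standard deviation of the target per category
    stats = df.groupby(var)[target].agg(['mean', 'std'])
    # fraction of rows per category, aligned on the category index
    stats['perc_houses'] = df[var].value_counts(normalize=True)
    return stats.reset_index()

# toy data (illustrative values, not the real dataset)
toy = pd.DataFrame({'Neighborhood': ['A', 'A', 'B', 'B', 'B'],
                    'SalePrice': [100, 120, 200, 210, 190]})
print(mean_and_std_target_per_category(toy, 'Neighborhood'))
```

The `std` column could then be passed to `ax2.errorbar` instead of `ax2.plot` to show the spread around each mean.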
### Rare labels: grouping under a new label
One common way of working with rare or infrequent values is to group them under an umbrella category called 'Rare' or 'Other'. In this way, we are able to understand the "collective" effect of the infrequent labels on the target. See below.
```
# I will replace all the labels that appear in less than 5%
# of the houses by the label 'rare'
def group_rare_labels(df, var):

    total_houses = len(df)

    # first I calculate the % of houses for each category
    temp_df = pd.Series(df[var].value_counts() / total_houses)

    # now I create a dictionary to replace the rare labels with the
    # string 'rare' if they are present in less than 5% of houses
    grouping_dict = {
        k: ('rare' if k not in temp_df[temp_df >= 0.05].index else k)
        for k in temp_df.index
    }

    # now I replace the rare categories
    tmp = df[var].map(grouping_dict)

    return tmp
# group rare labels in Neighborhood
data['Neighborhood_grouped'] = group_rare_labels(data, 'Neighborhood')
data[['Neighborhood', 'Neighborhood_grouped']].head(10)
# let's plot Neighborhood with the grouped categories
# re-using the functions I created above
temp_df = calculate_mean_target_per_category(data, 'Neighborhood_grouped')
plot_categories(temp_df, 'Neighborhood_grouped')
```
"Rare" now contains the overall influence of all the infrequent categories on the SalePrice.
```
# let's plot the original Neighborhood for comparison
temp_df = calculate_mean_target_per_category(data, 'Neighborhood')
plot_categories(temp_df, 'Neighborhood')
```
Only 9 categories of Neighborhood are relatively common in the dataset. The remaining ones are now grouped into 'rare' which captures the average SalePrice for all the infrequent labels.
```
# let's group and plot the remaining categorical variables
for col in cat_cols[1:]:

    # re-using the functions I created
    data[col + '_grouped'] = group_rare_labels(data, col)

    temp_df = calculate_mean_target_per_category(data, col + '_grouped')
    plot_categories(temp_df, col + '_grouped')
```
Here is something interesting: in the variable Exterior1st, look at how the houses with rare values are on average more expensive than the rest, except for those with 'VinylSd'.
The same is true for Exterior2nd. The rare categories seem to have something in common.
**Note:** Ideally, we would also like to have the standard deviation / inter-quantile range for the SalePrice, to get an idea of how variable the house price is for each category.
### Rare labels lead to uneven distribution of categories in train and test sets
Similarly to highly cardinal variables, rare or infrequent labels often appear only in the training set, or only in the test set. If present only in the training set, they may lead to over-fitting. If present only in the test set, the machine learning algorithm will not know how to handle them, as it has not seen them during training. Let's explore this further.
```
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data[cat_cols],
data['SalePrice'],
test_size=0.3,
random_state=2910)
X_train.shape, X_test.shape
# Let's find labels present only in the training set
# I will use Exterior1st as an example
unique_to_train_set = [
x for x in X_train['Exterior1st'].unique() if x not in X_test['Exterior1st'].unique()
]
print(unique_to_train_set)
```
There are 4 categories present in the train set that are not present in the test set.
```
# Let's find labels present only in the test set
unique_to_test_set = [
x for x in X_test['Exterior1st'].unique() if x not in X_train['Exterior1st'].unique()
]
print(unique_to_test_set)
```
In this case, there is 1 rare label present only in the test set.
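One defensive option, not shown in this demo, is to fold any label unseen during training into the same 'rare' bucket used when grouping infrequent categories. A minimal sketch (the helper name and the example labels are illustrative):

```python
import pandas as pd

def harmonise_labels(train_col: pd.Series, test_col: pd.Series) -> pd.Series:
    # labels the model saw during training
    known = set(train_col.unique())
    # anything unseen gets folded into the 'rare' bucket
    return test_col.where(test_col.isin(known), 'rare')

train = pd.Series(['VinylSd', 'Wd Sdng', 'VinylSd'])
test = pd.Series(['VinylSd', 'AsphShn'])  # 'AsphShn' never seen in train
print(harmonise_labels(train, test).tolist())  # ['VinylSd', 'rare']
```

This way, the model always receives a category it was trained on.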
**That is all for this demonstration. I hope you enjoyed the notebook, and see you in the next one.**
---
<a href="https://colab.research.google.com/github/cccadet/Tensorflow/blob/master/Celsius_to_Fahrenheit.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# The Basics: Training Your First Model
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/courses/udacity_intro_to_tensorflow_for_deep_learning/l02c01_celsius_to_fahrenheit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l02c01_celsius_to_fahrenheit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Welcome to this Colab where you will train your first Machine Learning model!
We'll try to keep things simple here, and only introduce basic concepts. Later Colabs will cover more advanced problems.
The problem we will solve is to convert from Celsius to Fahrenheit, where the approximate formula is:
$$ f = c \times 1.8 + 32 $$
Of course, it would be simple enough to create a conventional Python function that directly performs this calculation, but that wouldn't be machine learning.
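For contrast, such a conventional function would be a one-liner (a sketch, not part of the training exercise):

```python
def celsius_to_fahrenheit(c):
    # direct application of the known conversion formula
    return c * 1.8 + 32

print(celsius_to_fahrenheit(100))  # 212.0
```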
Instead, we will give TensorFlow some sample Celsius values (0, 8, 15, 22, 38) and their corresponding Fahrenheit values (32, 46, 59, 72, 100).
Then, we will train a model that figures out the above formula through the training process.
## Import dependencies
First, import TensorFlow. Here, we're calling it `tf` for ease of use. We also tell it to only display errors.
Next, import [NumPy](http://www.numpy.org/) as `np`. Numpy helps us to represent our data as highly performant lists.
```
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
import numpy as np
```
## Set up training data
As we saw before, supervised Machine Learning is all about figuring out an algorithm given a set of inputs and outputs. Since the task in this Codelab is to create a model that can give the temperature in Fahrenheit when given the degrees in Celsius, we create two lists `celsius_q` and `fahrenheit_a` that we can use to train our model.
```
celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
for i, c in enumerate(celsius_q):
    print("{} degrees Celsius = {} degrees Fahrenheit".format(c, fahrenheit_a[i]))
```
### Some Machine Learning terminology
- **Feature** — The input(s) to our model. In this case, a single value — the degrees in Celsius.
- **Labels** — The output our model predicts. In this case, a single value — the degrees in Fahrenheit.
- **Example** — A pair of inputs/outputs used during training. In our case a pair of values from `celsius_q` and `fahrenheit_a` at a specific index, such as `(22,72)`.
## Create the model
Next, create the model. We will use the simplest possible model we can, a Dense network. Since the problem is straightforward, this network will require only a single layer, with a single neuron.
### Build a layer
We'll call the layer `l0` and create it by instantiating `tf.keras.layers.Dense` with the following configuration:
* `input_shape=[1]` — This specifies that the input to this layer is a single value. That is, the shape is a one-dimensional array with one member. Since this is the first (and only) layer, that input shape is the input shape of the entire model. The single value is a floating point number, representing degrees Celsius.
* `units=1` — This specifies the number of neurons in the layer. The number of neurons defines how many internal variables the layer has to try to learn how to solve the problem (more later). Since this is the final layer, it is also the size of the model's output — a single float value representing degrees Fahrenheit. (In a multi-layered network, the size and shape of the layer would need to match the `input_shape` of the next layer.)
```
l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
```
### Assemble layers into the model
Once layers are defined, they need to be assembled into a model. The Sequential model definition takes a list of layers as argument, specifying the calculation order from the input to the output.
This model has just a single layer, l0.
```
model = tf.keras.Sequential([l0])
```
**Note**
You will often see the layers defined inside the model definition, rather than beforehand:
```python
model = tf.keras.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
```
## Compile the model, with loss and optimizer functions
Before training, the model has to be compiled. When compiled for training, the model is given:
- **Loss function** — A way of measuring how far off predictions are from the desired outcome. (The measured difference is called the "loss".)
- **Optimizer function** — A way of adjusting internal values in order to reduce the loss.
```
model.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(0.1))
```
These are used during training (`model.fit()`, below) to first calculate the loss at each point, and then improve it. In fact, the act of calculating the current loss of a model and then improving it is precisely what training is.
During training, the optimizer function is used to calculate adjustments to the model's internal variables. The goal is to adjust the internal variables until the model (which is really a math function) mirrors the actual equation for converting Celsius to Fahrenheit.
TensorFlow uses numerical analysis to perform this tuning, and all this complexity is hidden from you, so we will not go into the details here. What is useful to know about these parameters is:
The loss function ([mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error)) and the optimizer ([Adam](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/)) used here are standard for simple models like this one, but many others are available. It is not important to know how these specific functions work at this point.
One part of the Optimizer you may need to think about when building your own models is the learning rate (`0.1` in the code above). This is the step size taken when adjusting values in the model. If the value is too small, it will take too many iterations to train the model. Too large, and accuracy goes down. Finding a good value often involves some trial and error, but it is usually between 0.001 (the default) and 0.1.
## Train the model
Train the model by calling the `fit` method.
During training, the model takes in Celsius values, performs a calculation using the current internal variables (called "weights") and outputs values which are meant to be the Fahrenheit equivalent. Since the weights are initially set randomly, the output will not be close to the correct value. The difference between the actual output and the desired output is calculated using the loss function, and the optimizer function directs how the weights should be adjusted.
This cycle of calculate, compare, adjust is controlled by the `fit` method. The first argument is the inputs, the second argument is the desired outputs. The `epochs` argument specifies how many times this cycle should be run, and the `verbose` argument controls how much output the method produces.
```
history = model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print("Finished training the model")
```
In later videos, we will go into more details on what actually happens here and how a Dense layer actually works internally.
## Display training statistics
The `fit` method returns a history object. We can use this object to plot how the loss of our model goes down after each training epoch. A high loss means that the Fahrenheit degrees the model predicts are far from the corresponding values in `fahrenheit_a`.
We'll use [Matplotlib](https://matplotlib.org/) to visualize this (you could use another tool). As you can see, our model improves very quickly at first, and then has a steady, slow improvement until it is very near "perfect" towards the end.
```
import matplotlib.pyplot as plt
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
```
## Use the model to predict values
Now you have a model that has been trained to learn the relationship between `celsius_q` and `fahrenheit_a`. You can use the predict method to have it calculate the Fahrenheit degrees for a previously unknown Celsius value.
So, for example, if the Celsius value is 100, what do you think the Fahrenheit result will be? Take a guess before you run this code.
```
print(model.predict([100.0]))
```
The correct answer is $100 \times 1.8 + 32 = 212$, so our model is doing really well.
### To review
* We created a model with a Dense layer
* We trained it with 3500 examples (7 pairs, over 500 epochs).
Our model tuned the variables (weights) in the Dense layer until it was able to return the correct Fahrenheit value for any Celsius value. (Remember, 100 Celsius was not part of our training data.)
## Looking at the layer weights
Finally, let's print the internal variables of the Dense layer.
```
print("These are the layer variables: {}".format(l0.get_weights()))
```
The first variable is close to ~1.8 and the second to ~32. These are really close to the actual values (1.8 and 32) in the real conversion formula.
We'll explain this in an upcoming video where we show how a Dense layer works, but for a single neuron with a single input and a single output, the internal math looks the same as [the equation for a line](https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form), $y = mx + b$, which has the same form as the conversion equation, $f = 1.8c + 32$.
Since the form is the same, the variables should converge on the standard values of 1.8 and 32, which is exactly what happened.
With additional neurons, additional inputs, and additional outputs, the formula becomes much more complex, but the idea is the same.
### A little experiment
Just for fun, what if we created more Dense layers with different numbers of units, which therefore also have more variables?
```
l0 = tf.keras.layers.Dense(units=4, input_shape=[1])
l1 = tf.keras.layers.Dense(units=4)
l2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([l0, l1, l2])
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print("Finished training the model")
print(model.predict([100.0]))
print("Model predicts that 100 degrees Celsius is: {} degrees Fahrenheit".format(model.predict([100.0])))
print("These are the l0 variables: {}".format(l0.get_weights()))
print("These are the l1 variables: {}".format(l1.get_weights()))
print("These are the l2 variables: {}".format(l2.get_weights()))
```
As you can see, this model is also able to predict the corresponding Fahrenheit value really well. But when you look at the variables (weights) in the `l0` and `l1` layers, they are nowhere close to ~1.8 and ~32. The added complexity hides the "simple" form of the conversion equation.
Stay tuned for the upcoming video on how Dense layers work for the explanation.
---
<a href="https://colab.research.google.com/github/julianox5/Desafios-Resolvidos-do-curso-machine-learning-crash-course-google/blob/master/numpy_para_machine_learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Importing NumPy
```
import numpy as np
```
## Filling arrays with specific numbers
Creating an array with `numpy.array()`
```
myArray = np.array([1,2,3,4,5,6,7,8,9,0])
print(myArray)
```
Creating a 3 x 2 two-dimensional array
```
matriz_bi = np.array([[6 , 5], [11 , 4], [5 , 9] ])
print(matriz_bi)
```
Filling an array with a sequence of numbers using `numpy.arange()`
```
metodArange = np.arange(5, 12)
print(metodArange)
```
## Filling arrays with sequences of numbers
NumPy has several functions for filling arrays with random numbers within given ranges.
***numpy.random.randint*** generates random integers between a low and a high value.
```
aleatorio_randint = np.random.randint(low = 10, high=100, size=(10))
print(aleatorio_randint)
```
To create random floating-point values between 0.0 and 1.0, use **numpy.random.random()**.
```
float_random = np.random.random([10])
print(float_random)
```
NumPy has a trick called broadcasting that virtually expands the smaller operand to dimensions compatible with the linear-algebra operation.
```
random_floats_2_e_3 = float_random + 2.0
print (random_floats_2_e_3)
```
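Broadcasting is not limited to scalars. As an additional illustrative sketch (not part of the original exercise), a one-dimensional row vector can be added to every row of a two-dimensional array:

```python
import numpy as np

# a (3, 2) matrix plus a length-2 row vector:
# broadcasting adds the vector to every row
matrix = np.array([[1, 2], [3, 4], [5, 6]])
row = np.array([10, 20])
print(matrix + row)  # each row gets [10, 20] added
```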
## Task 1: Create a linear dataset
Your goal is to create a simple dataset consisting of a single feature and a label, as follows:
1. Assign a sequence of integers from 6 to 20 (inclusive) to a NumPy array named `feature`.
2. Assign 15 values to a NumPy array named `label` such that:
```
label = (3)(feature) + 4
```
For example, the first value of `label` should be:
```
label = (3)(6) + 4 = 22
```
```
feature = np.arange(6, 21)
print(feature)
label = (feature * 3) + 4
print(label)
```
## Task 2: Add some noise to the dataset
To make your dataset a little more realistic, insert some random noise into each element of the `label` array you have already created. More precisely, modify each value assigned to `label` by adding a different random floating-point value between -2 and +2.
Don't rely on broadcasting. Instead, create a `noise` array with the same dimension as `label`.
```
noise = (np.random.random([15]) * 4) -2
print(noise)
label += noise
print(label)
```
---
<a href="https://colab.research.google.com/github/boangri/uai-thesis-notebooks/blob/main/notebooks/Pong_PyTorch_DQN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Solving Pong with DQN in PyTorch
```
import os
import torch as T
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import pandas as pd
import collections
import cv2
import matplotlib.pyplot as plt
import gym
import time
```
Classes for the network and the experience replay buffer.
```
class DeepQNetwork(nn.Module):
    def __init__(self, lr, n_actions, name, input_dims, chkpt_dir):
        super(DeepQNetwork, self).__init__()
        self.checkpoint_dir = chkpt_dir
        self.checkpoint_file = os.path.join(self.checkpoint_dir, name)

        self.conv1 = nn.Conv2d(input_dims[0], 32, 8, stride=4)
        self.conv2 = nn.Conv2d(32, 64, 4, stride=2)
        self.conv3 = nn.Conv2d(64, 64, 3, stride=1)

        fc_input_dims = self.calculate_conv_output_dims(input_dims)

        self.fc1 = nn.Linear(fc_input_dims, 512)
        self.fc2 = nn.Linear(512, n_actions)

        self.optimizer = optim.RMSprop(self.parameters(), lr=lr)
        self.loss = nn.MSELoss()
        self.device = T.device('cuda:0' if T.cuda.is_available() else 'cpu')
        self.to(self.device)

    def calculate_conv_output_dims(self, input_dims):
        state = T.zeros(1, *input_dims)
        dims = self.conv1(state)
        dims = self.conv2(dims)
        dims = self.conv3(dims)
        return int(np.prod(dims.size()))

    def forward(self, state):
        conv1 = F.relu(self.conv1(state))
        conv2 = F.relu(self.conv2(conv1))
        conv3 = F.relu(self.conv3(conv2))
        # conv3 shape is BS x n_filters x H x W
        conv_state = conv3.view(conv3.size()[0], -1)
        # conv_state shape is BS x (n_filters * H * W)
        flat1 = F.relu(self.fc1(conv_state))
        actions = self.fc2(flat1)

        return actions

    def save_checkpoint(self):
        print('... saving checkpoint ...')
        T.save(self.state_dict(), self.checkpoint_file)

    def load_checkpoint(self):
        print('... loading checkpoint ...')
        self.load_state_dict(T.load(self.checkpoint_file))


class ReplayBuffer(object):
    def __init__(self, max_size, input_shape, n_actions):
        self.mem_size = max_size
        self.mem_cntr = 0
        self.state_memory = np.zeros((self.mem_size, *input_shape),
                                     dtype=np.float32)
        self.new_state_memory = np.zeros((self.mem_size, *input_shape),
                                         dtype=np.float32)
        self.action_memory = np.zeros(self.mem_size, dtype=np.int64)
        self.reward_memory = np.zeros(self.mem_size, dtype=np.float32)
        self.terminal_memory = np.zeros(self.mem_size, dtype=np.bool)

    def store_transition(self, state, action, reward, state_, done):
        index = self.mem_cntr % self.mem_size
        self.state_memory[index] = state
        self.new_state_memory[index] = state_
        self.action_memory[index] = action
        self.reward_memory[index] = reward
        self.terminal_memory[index] = done
        self.mem_cntr += 1

    def sample_buffer(self, batch_size):
        max_mem = min(self.mem_cntr, self.mem_size)
        batch = np.random.choice(max_mem, batch_size, replace=False)

        states = self.state_memory[batch]
        actions = self.action_memory[batch]
        rewards = self.reward_memory[batch]
        states_ = self.new_state_memory[batch]
        terminal = self.terminal_memory[batch]

        return states, actions, rewards, states_, terminal
```
Wrapper classes
```
class RepeatActionAndMaxFrame(gym.Wrapper):
    def __init__(self, env=None, repeat=4, clip_reward=False, no_ops=0,
                 fire_first=False):
        super(RepeatActionAndMaxFrame, self).__init__(env)
        self.repeat = repeat
        self.shape = env.observation_space.low.shape
        self.frame_buffer = np.zeros_like((2, self.shape))
        self.clip_reward = clip_reward
        self.no_ops = no_ops
        self.fire_first = fire_first

    def step(self, action):
        t_reward = 0.0
        done = False
        for i in range(self.repeat):
            obs, reward, done, info = self.env.step(action)
            if self.clip_reward:
                reward = np.clip(np.array([reward]), -1, 1)[0]
            t_reward += reward
            idx = i % 2
            self.frame_buffer[idx] = obs
            if done:
                break

        max_frame = np.maximum(self.frame_buffer[0], self.frame_buffer[1])
        return max_frame, t_reward, done, info

    def reset(self):
        obs = self.env.reset()
        no_ops = np.random.randint(self.no_ops) + 1 if self.no_ops > 0 else 0
        for _ in range(no_ops):
            _, _, done, _ = self.env.step(0)
            if done:
                self.env.reset()
        if self.fire_first:
            assert self.env.unwrapped.get_action_meanings()[1] == 'FIRE'
            obs, _, _, _ = self.env.step(1)

        self.frame_buffer = np.zeros_like((2, self.shape))
        self.frame_buffer[0] = obs

        return obs


class PreprocessFrame(gym.ObservationWrapper):
    def __init__(self, shape, env=None):
        super(PreprocessFrame, self).__init__(env)
        self.shape = (shape[2], shape[0], shape[1])
        self.observation_space = gym.spaces.Box(low=0.0, high=1.0,
                                                shape=self.shape,
                                                dtype=np.float32)

    def observation(self, obs):
        new_frame = cv2.cvtColor(obs, cv2.COLOR_RGB2GRAY)
        resized_screen = cv2.resize(new_frame, self.shape[1:],
                                    interpolation=cv2.INTER_AREA)
        new_obs = np.array(resized_screen, dtype=np.uint8).reshape(self.shape)
        new_obs = new_obs / 255.0

        return new_obs


class StackFrames(gym.ObservationWrapper):
    def __init__(self, env, repeat):
        super(StackFrames, self).__init__(env)
        self.observation_space = gym.spaces.Box(
            env.observation_space.low.repeat(repeat, axis=0),
            env.observation_space.high.repeat(repeat, axis=0),
            dtype=np.float32)
        self.stack = collections.deque(maxlen=repeat)

    def reset(self):
        self.stack.clear()
        observation = self.env.reset()
        for _ in range(self.stack.maxlen):
            self.stack.append(observation)

        return np.array(self.stack).reshape(self.observation_space.low.shape)

    def observation(self, observation):
        self.stack.append(observation)

        return np.array(self.stack).reshape(self.observation_space.low.shape)


def make_env(env_name, shape=(84, 84, 1), repeat=4, clip_rewards=False,
             no_ops=0, fire_first=False):
    env = gym.make(env_name)
    env = RepeatActionAndMaxFrame(env, repeat, clip_rewards, no_ops, fire_first)
    env = PreprocessFrame(shape, env)
    env = StackFrames(env, repeat)

    return env
```
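Atari sprites can flicker between alternate frames, so `RepeatActionAndMaxFrame` keeps the last two observations and returns their element-wise maximum — a sprite visible in either frame survives. A toy sketch of that max-pooling step:

```python
import numpy as np

frame_a = np.array([[0, 255], [0, 0]], dtype=np.uint8)  # sprite visible top-right only
frame_b = np.array([[0, 0], [255, 0]], dtype=np.uint8)  # sprite visible bottom-left only
max_frame = np.maximum(frame_a, frame_b)                # both sprites survive the merge
print(max_frame)
```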
A universal DQN agent class
```
class DQNAgent(object):
def __init__(self, gamma, epsilon, lr, n_actions, input_dims,
mem_size, batch_size, eps_min=0.01, eps_dec=5e-7,
replace=1000, algo=None, env_name=None, chkpt_dir='tmp/dqn'):
self.gamma = gamma
self.epsilon = epsilon
self.lr = lr
self.n_actions = n_actions
self.input_dims = input_dims
self.batch_size = batch_size
self.eps_min = eps_min
self.eps_dec = eps_dec
self.replace_target_cnt = replace
self.algo = algo
self.env_name = env_name
self.chkpt_dir = chkpt_dir
self.action_space = [i for i in range(n_actions)]
self.learn_step_counter = 0
self.memory = ReplayBuffer(mem_size, input_dims, n_actions)
self.q_eval = DeepQNetwork(self.lr, self.n_actions,
input_dims=self.input_dims,
name=self.env_name+'_'+self.algo+'_q_eval',
chkpt_dir=self.chkpt_dir)
self.q_next = DeepQNetwork(self.lr, self.n_actions,
input_dims=self.input_dims,
name=self.env_name+'_'+self.algo+'_q_next',
chkpt_dir=self.chkpt_dir)
def choose_action(self, observation):
if np.random.random() > self.epsilon:
state = T.tensor([observation],dtype=T.float).to(self.q_eval.device)
actions = self.q_eval.forward(state)
action = T.argmax(actions).item()
else:
action = np.random.choice(self.action_space)
return action
def store_transition(self, state, action, reward, state_, done):
self.memory.store_transition(state, action, reward, state_, done)
def sample_memory(self):
state, action, reward, new_state, done = \
self.memory.sample_buffer(self.batch_size)
states = T.tensor(state).to(self.q_eval.device)
rewards = T.tensor(reward).to(self.q_eval.device)
dones = T.tensor(done).to(self.q_eval.device)
actions = T.tensor(action).to(self.q_eval.device)
states_ = T.tensor(new_state).to(self.q_eval.device)
return states, actions, rewards, states_, dones
def replace_target_network(self):
if self.learn_step_counter % self.replace_target_cnt == 0:
self.q_next.load_state_dict(self.q_eval.state_dict())
def decrement_epsilon(self):
self.epsilon = self.epsilon - self.eps_dec \
if self.epsilon > self.eps_min else self.eps_min
def save_models(self):
self.q_eval.save_checkpoint()
self.q_next.save_checkpoint()
def load_models(self):
self.q_eval.load_checkpoint()
self.q_next.load_checkpoint()
def learn(self):
if self.memory.mem_cntr < self.batch_size:
return
self.q_eval.optimizer.zero_grad()
self.replace_target_network()
states, actions, rewards, states_, dones = self.sample_memory()
indices = np.arange(self.batch_size)
q_pred = self.q_eval.forward(states)[indices, actions]
q_next = self.q_next.forward(states_).max(dim=1)[0]
q_next[dones] = 0.0
q_target = rewards + self.gamma*q_next
loss = self.q_eval.loss(q_target, q_pred).to(self.q_eval.device)
loss.backward()
self.q_eval.optimizer.step()
self.learn_step_counter += 1
self.decrement_epsilon()
```
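`decrement_epsilon` above anneals the exploration rate linearly by `eps_dec` per learning step until it reaches the floor `eps_min`. Isolated as a sketch (step count is illustrative):

```python
epsilon, eps_dec, eps_min = 1.0, 1e-5, 0.05
for _ in range(200_000):
    # same update rule as DQNAgent.decrement_epsilon
    epsilon = epsilon - eps_dec if epsilon > eps_min else eps_min
# after 200k steps the linear schedule has long since hit the floor
assert epsilon == eps_min
```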
Training loop
```
%%time
path = '/content/drive/My Drive/weights/Pong/'
env = make_env('PongNoFrameskip-v4')
best_score = -np.inf
load_checkpoint = False
n_games = 120
agent = DQNAgent(gamma=0.99, epsilon=1.0, lr=0.0001,
input_dims=(env.observation_space.shape),
n_actions=env.action_space.n, mem_size=50000, eps_min=0.05,
batch_size=32, replace=1000, eps_dec=1e-5,
chkpt_dir=path, algo='DQNAgent',
env_name='PongNoFrameskip-v4')
if load_checkpoint:
agent.load_models()
n_steps = 0
scores, eps_history, steps_array = [], [], []
startTime = time.time()
for i in range(n_games):
done = False
observation = env.reset()
score = 0
while not done:
action = agent.choose_action(observation)
observation_, reward, done, info = env.step(action)
score += reward
if not load_checkpoint:
agent.store_transition(observation, action, reward, observation_, int(done))
agent.learn()
observation = observation_
n_steps += 1
scores.append(score)
steps_array.append(n_steps)
avg_score = np.mean(scores[-10:])
print("ep:%d score:%.0f avg_score:%.2f best_score:%.2f epsilon:%.4f steps:%d time:%.1f" % (i+1, score, avg_score, best_score, agent.epsilon, n_steps, time.time() - startTime))
with open(path + 'torch_dqn_history4.csv', 'a') as h:
h.write("%d,%.0f,%.2f,%.6f,%d,%.1f\n" % (i+1, score, avg_score, agent.epsilon, n_steps, time.time() - startTime))
if avg_score > best_score:
if not load_checkpoint:
agent.save_models()
best_score = avg_score
eps_history.append(agent.epsilon)
```
Training plot
```
path = '/content/drive/My Drive/weights/Pong/'
df = pd.read_csv(path + 'torch_dqn_history.csv', header=None, names=('episode', 'score', 'avg_score', 'epsilon', 'steps', 'time'))
x = df.episode
y = df.score
y1 = np.zeros_like(x)
for i in range(len(y1)):
imin = i - 10 if i > 10 else 0
y1[i] = y[imin:i+1].mean()
# y1 = df.avg_score
plt.figure(figsize=(12,6))
plt.scatter(x, y, label='score')
plt.plot(x, y1, color='C1', label='mean over 10 games')
plt.ylabel('Score')
plt.xlabel('Games')
plt.legend()
plt.grid()
plt.title('Training history - PongNoFrameskip-v4 - Deep Q Learning')
plt.show()
```
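The hand-rolled trailing-average loop above can also be expressed with pandas' built-in rolling mean, which handles the short windows at the start via `min_periods`. A sketch with toy scores (window of 2 for readability):

```python
import pandas as pd

scores = pd.Series([1.0, 3.0, 5.0, 7.0])
# trailing mean: each point averages the current and previous score
avg = scores.rolling(window=2, min_periods=1).mean()
print(avg.tolist())  # [1.0, 2.0, 4.0, 6.0]
```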
Game demonstration
```
%%time
path = '/content/drive/My Drive/weights/Pong/'
env = make_env('PongNoFrameskip-v4')
env.seed(3)
env = wrap_env(env)
load_checkpoint = True
n_games = 1
agent = DQNAgent(gamma=0.99, epsilon=0.0, lr=0.0001,
input_dims=(env.observation_space.shape),
n_actions=env.action_space.n, mem_size=1, eps_min=0.0,
batch_size=32, replace=1000, eps_dec=1e-5,
chkpt_dir=path, algo='DQNAgent',
env_name='PongNoFrameskip-v4')
if load_checkpoint:
agent.load_models()
n_steps = 0
scores, eps_history, steps_array = [], [], []
startTime = time.time()
for i in range(n_games):
done = False
observation = env.reset()
score = 0
while not done:
action = agent.choose_action(observation)
observation, reward, done, info = env.step(action)
score += reward
n_steps += 1
scores.append(score)
steps_array.append(n_steps)
avg_score = np.mean(scores[-10:])
print(i+1, score)
print("avg_score=%.1f" % avg_score)
# print("ep:%d score:%.0f avg_score:%.2f best_score:%.2f epsilon:%.4f steps:%d time:%.1f" % (i+1, score, avg_score, best_score, agent.epsilon, n_steps, time.time - startTime))
env.close()
show_video()
```
```
import torch
import torch.nn as nn
class ResNetBlock(nn.Module): # <1>
def __init__(self, dim):
super(ResNetBlock, self).__init__()
self.conv_block = self.build_conv_block(dim)
def build_conv_block(self, dim):
conv_block = []
conv_block += [nn.ReflectionPad2d(1)]
conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=0, bias=True),
nn.InstanceNorm2d(dim),
nn.ReLU(True)]
conv_block += [nn.ReflectionPad2d(1)]
conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=0, bias=True),
nn.InstanceNorm2d(dim)]
return nn.Sequential(*conv_block)
def forward(self, x):
out = x + self.conv_block(x) # <2>
return out
class ResNetGenerator(nn.Module):
def __init__(self, input_nc=3, output_nc=3, ngf=64, n_blocks=9): # <3>
assert(n_blocks >= 0)
super(ResNetGenerator, self).__init__()
self.input_nc = input_nc
self.output_nc = output_nc
self.ngf = ngf
model = [nn.ReflectionPad2d(3),
nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=True),
nn.InstanceNorm2d(ngf),
nn.ReLU(True)]
n_downsampling = 2
for i in range(n_downsampling):
mult = 2**i
model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3,
stride=2, padding=1, bias=True),
nn.InstanceNorm2d(ngf * mult * 2),
nn.ReLU(True)]
mult = 2**n_downsampling
for i in range(n_blocks):
model += [ResNetBlock(ngf * mult)]
for i in range(n_downsampling):
mult = 2**(n_downsampling - i)
model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
kernel_size=3, stride=2,
padding=1, output_padding=1,
bias=True),
nn.InstanceNorm2d(int(ngf * mult / 2)),
nn.ReLU(True)]
model += [nn.ReflectionPad2d(3)]
model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
model += [nn.Tanh()]
self.model = nn.Sequential(*model)
def forward(self, input): # <3>
return self.model(input)
```
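The generator downsamples twice with stride-2 convolutions and upsamples back with `ConvTranspose2d`; the standard output-size formulas confirm that the spatial size round-trips. A pure-arithmetic sketch (the 256-pixel input size is illustrative):

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    # standard convolution output size: floor((n + 2p - k) / s) + 1
    return (size + 2 * pad - kernel) // stride + 1

def convT_out(size, kernel=3, stride=2, pad=1, out_pad=1):
    # transposed convolution output size: (n - 1)s - 2p + k + output_padding
    return (size - 1) * stride - 2 * pad + kernel + out_pad

size = 256
down = conv_out(conv_out(size))   # two stride-2 convolutions: 256 -> 128 -> 64
up = convT_out(convT_out(down))   # two transposed convolutions: 64 -> 128 -> 256
print(down, up)  # 64 256
```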
```
from torchvision import models
import torch
netG = ResNetGenerator()
weight_path = '/Users/mnctty/Desktop/DL with Pytorch/1_lessons/horse2zebra_0.4.0.pth'
model_data = torch.load(weight_path)
#model_data.keys()
netG.load_state_dict(model_data)
### а что еще можно загрузить?
netG.eval()
from PIL import Image
from torchvision import transforms
preprocess = transforms.Compose([transforms.Resize(256),
transforms.ToTensor()])
img_path = '/Users/mnctty/Desktop/DL with Pytorch/1_lessons/horses.jpeg'
img = Image.open(img_path)
img.show()
img_transformed = preprocess(img)
batch_tensor = torch.unsqueeze(img_transformed, 0)
res = netG(batch_tensor)
res.show()
out_t = (res.data.squeeze() + 1.0) / 2.0
out_img = transforms.ToPILImage()(out_t)
out_img
out_img.save('/Users/mnctty/Desktop/DL with Pytorch/1_lessons/zebhorses.jpg')
```
# Doom Deadly Corridor with Dqn
The purpose of this scenario is to teach the agent to navigate toward its fundamental goal (the vest) while making sure it survives at the same time.
### Environment
The map is a corridor with shooting monsters on both sides (6 monsters in total). A green vest is placed at the opposite end of the corridor. The reward is proportional (negative or positive) to the change in distance between the player and the vest. If the player ignores the monsters on the sides and runs straight for the vest, he will be killed somewhere along the way.
### Actions
- MOVE_LEFT
- MOVE_RIGHT
- ATTACK
- MOVE_FORWARD
- MOVE_BACKWARD
- TURN_LEFT
- TURN_RIGHT
### Rewards
- +dX for getting closer to the vest.
- -dX for getting further from the vest.
- -100 death penalty
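The shaping reward is just the signed decrease in distance to the vest, plus a flat death penalty. A minimal sketch of that scheme (the helper name and signature are hypothetical, for illustration only):

```python
def shaped_reward(prev_dist, new_dist, died=False):
    # hypothetical helper: +dX when closer, -dX when further, -100 on death
    reward = prev_dist - new_dist
    if died:
        reward -= 100.0
    return reward

assert shaped_reward(10.0, 7.5) == 2.5                 # moved 2.5 units closer
assert shaped_reward(10.0, 12.0) == -2.0               # moved 2.0 units away
assert shaped_reward(10.0, 10.0, died=True) == -100.0  # death penalty dominates
```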
## Step 1: Import the libraries
```
import numpy as np
import random # Handling random number generation
import time # Handling time calculation
import cv2
import torch
from vizdoom import * # Doom Environment
import matplotlib.pyplot as plt
from IPython.display import clear_output
from collections import namedtuple, deque
import math
%matplotlib inline
import sys
sys.path.append('../../')
from algos.agents import DQNAgent
from algos.models import DQNCnn
from algos.preprocessing.stack_frame import preprocess_frame, stack_frame
```
## Step 2: Create our environment
Initialize the environment in the code cell below.
```
def create_environment():
game = DoomGame()
# Load the correct configuration
game.load_config("doom_files/deadly_corridor.cfg")
# Load the correct scenario (in our case the deadly_corridor scenario)
game.set_doom_scenario_path("doom_files/deadly_corridor.wad")
# Here our possible actions
possible_actions = np.identity(7, dtype=int).tolist()
return game, possible_actions
game, possible_actions = create_environment()
# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device: ", device)
```
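VizDoom expects each action as a vector of button states, so the seven actions are encoded as the rows of a 7×7 identity matrix. A quick sketch (the index-to-action mapping assumes the button order in the config matches the list above):

```python
import numpy as np

possible_actions = np.identity(7, dtype=int).tolist()
# each row activates exactly one button, e.g. index 2 would be ATTACK
# if the config lists buttons in the order shown earlier
print(possible_actions[2])  # [0, 0, 1, 0, 0, 0, 0]
assert len(possible_actions) == 7
assert all(sum(a) == 1 for a in possible_actions)
```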
## Step 3: Viewing our Environment
```
print("The size of frame is: (", game.get_screen_height(), ", ", game.get_screen_width(), ")")
print("No. of Actions: ", possible_actions)
game.init()
plt.figure()
plt.imshow(game.get_state().screen_buffer.transpose(1, 2, 0))
plt.title('Original Frame')
plt.show()
game.close()
```
### Execute the code cell below to play the scenario with a random policy.
```
def random_play():
game.init()
game.new_episode()
score = 0
while True:
reward = game.make_action(possible_actions[np.random.randint(len(possible_actions))])
done = game.is_episode_finished()
score += reward
time.sleep(0.01)
if done:
print("Your total score is: ", score)
game.close()
break
random_play()
```
## Step 4: Preprocessing Frame
```
game.init()
plt.figure()
plt.imshow(preprocess_frame(game.get_state().screen_buffer.transpose(1, 2, 0), (0, -60, -40, 60), 84), cmap="gray")
game.close()
plt.title('Pre Processed image')
plt.show()
```
## Step 5: Stacking Frame
```
def stack_frames(frames, state, is_new=False):
frame = preprocess_frame(state, (0, -60, -40, 60), 84)
frames = stack_frame(frames, frame, is_new)
return frames
```
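The imported `stack_frame` helper is project-specific, but the underlying idea is a fixed-length deque: on a new episode it is filled with copies of the first frame, and afterwards each new frame pushes the oldest one out. A self-contained sketch of that idea:

```python
import collections
import numpy as np

def stack(frames, frame, is_new, maxlen=4):
    # sketch of frame stacking: a deque of the last `maxlen` frames
    if is_new or frames is None:
        frames = collections.deque([frame] * maxlen, maxlen=maxlen)
    else:
        frames.append(frame)  # oldest frame falls off the other end
    return frames

f0 = np.zeros((84, 84))
f1 = np.ones((84, 84))
s = stack(None, f0, True)    # new episode: [f0, f0, f0, f0]
s = stack(s, f1, False)      # one step later: [f0, f0, f0, f1]
assert len(s) == 4 and s[0] is f0 and s[-1] is f1
```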
## Step 6: Creating our Agent
```
INPUT_SHAPE = (4, 84, 84)
ACTION_SIZE = len(possible_actions)
SEED = 0
GAMMA = 0.99 # discount factor
BUFFER_SIZE = 100000 # replay buffer size
BATCH_SIZE = 32 # Update batch size
LR = 0.0001 # learning rate
TAU = .1 # for soft update of target parameters
UPDATE_EVERY = 100 # how often to update the network
UPDATE_TARGET = 10000 # threshold of steps after which the target network is updated
EPS_START = 0.99 # starting value of epsilon
EPS_END = 0.01 # Ending value of epsilon
EPS_DECAY = 100 # Rate by which epsilon to be decayed
agent = DQNAgent(INPUT_SHAPE, ACTION_SIZE, SEED, device, BUFFER_SIZE, BATCH_SIZE, GAMMA, LR, TAU, UPDATE_EVERY, UPDATE_TARGET, DQNCnn)
```
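`TAU` above controls a soft (Polyak) update of the target network: target ← τ·local + (1−τ)·target, so the target tracks the online network slowly instead of being copied wholesale. A NumPy sketch of one such update (weight values are illustrative):

```python
import numpy as np

tau = 0.1
local_w = np.array([1.0, 2.0])   # online network parameters
target_w = np.array([0.0, 0.0])  # target network parameters
# soft update: target moves a fraction tau toward the online weights
target_w = tau * local_w + (1.0 - tau) * target_w
print(target_w)  # [0.1 0.2]
```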
## Step 7: Watching untrained agent play
```
# watch an untrained agent
game.init()
score = 0
state = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True)
while True:
action = agent.act(state, 0.01)
score += game.make_action(possible_actions[action])
done = game.is_episode_finished()
if done:
print("Your total score is: ", score)
break
else:
state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False)
game.close()
```
## Step 8: Loading Agent
Uncomment line to load a pretrained agent
```
start_epoch = 0
scores = []
scores_window = deque(maxlen=20)
```
## Step 9: Train the Agent with DQN
```
epsilon_by_epsiode = lambda frame_idx: EPS_END + (EPS_START - EPS_END) * math.exp(-1. * frame_idx /EPS_DECAY)
plt.plot([epsilon_by_epsiode(i) for i in range(1000)])
def train(n_episodes=1000):
"""
Params
======
n_episodes (int): maximum number of training episodes
"""
game.init()
for i_episode in range(start_epoch + 1, n_episodes+1):
game.new_episode()
state = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True)
score = 0
eps = epsilon_by_epsiode(i_episode)
while True:
action = agent.act(state, eps)
reward = game.make_action(possible_actions[action])
done = game.is_episode_finished()
score += reward
if done:
agent.step(state, action, reward, state, done)
break
else:
next_state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False)
agent.step(state, action, reward, next_state, done)
state = next_state
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
clear_output(True)
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
print('\rEpisode {}\tAverage Score: {:.2f}\tEpsilon: {:.2f}'.format(i_episode, np.mean(scores_window), eps), end="")
game.close()
return scores
scores = train(5000)
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
## Step 10: Watch a Smart Agent!
```
game.init()
score = 0
state = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True)
while True:
action = agent.act(state, 0.01)
score += game.make_action(possible_actions[action])
done = game.is_episode_finished()
if done:
print("Your total score is: ", score)
break
else:
state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False)
game.close()
```
# TRTR Dataset D
```
#import libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import os
print('Libraries imported!!')
#define directory of functions and actual directory
HOME_PATH = '' #home path of the project
FUNCTIONS_DIR = 'EVALUATION FUNCTIONS/UTILITY'
ACTUAL_DIR = os.getcwd()
#change directory to functions directory
os.chdir(HOME_PATH + FUNCTIONS_DIR)
#import functions for data labelling analisys
from utility_evaluation import DataPreProcessor
from utility_evaluation import train_evaluate_model
#change directory to actual directory
os.chdir(ACTUAL_DIR)
print('Functions imported!!')
```
## 1. Read data
```
#read real dataset
train_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/D_ContraceptiveMethod_Real_Train.csv')
categorical_columns = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation',
'standard_of_living_index','media_exposure','contraceptive_method_used']
for col in categorical_columns :
train_data[col] = train_data[col].astype('category')
train_data
#read test data
test_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TEST DATASETS/D_ContraceptiveMethod_Real_Test.csv')
for col in categorical_columns :
test_data[col] = test_data[col].astype('category')
test_data
target = 'contraceptive_method_used'
#quick look at the breakdown of class values
print('Train data')
print(train_data.shape)
print(train_data.groupby(target).size())
print('#####################################')
print('Test data')
print(test_data.shape)
print(test_data.groupby(target).size())
```
## 2. Pre-process training data
```
target = 'contraceptive_method_used'
categorical_columns = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation',
'standard_of_living_index','media_exposure']
numerical_columns = train_data.select_dtypes(include=['int64','float64']).columns.tolist()
categories = [np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3]), np.array([0, 1]), np.array([0, 1]),
np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3]), np.array([0, 1])]
data_preprocessor = DataPreProcessor(categorical_columns, numerical_columns, categories)
x_train = data_preprocessor.preprocess_train_data(train_data.loc[:, train_data.columns != target])
y_train = train_data.loc[:, target]
x_train.shape, y_train.shape
```
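The project's `DataPreProcessor` is external, but the role of the fixed `categories` list above is to guarantee that the train and test encodings share one column layout even when a category is absent from a split. A sketch of the same idea using a pandas categorical dtype:

```python
import pandas as pd

# a fixed category list keeps the one-hot layout stable across splits,
# even for categories that never appear in this particular sample
cat = pd.CategoricalDtype(categories=[0, 1, 2, 3])
train = pd.Series([0, 2, 3], dtype=cat)  # category 1 is missing here
dummies = pd.get_dummies(train)
print(dummies.shape)  # (3, 4): one column per declared category
```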
## 3. Preprocess test data
```
x_test = data_preprocessor.preprocess_test_data(test_data.loc[:, test_data.columns != target])
y_test = test_data.loc[:, target]
x_test.shape, y_test.shape
```
## 4. Create a dataset to save the results
```
results = pd.DataFrame(columns = ['model','accuracy','precision','recall','f1'])
results
```
## 4. Train and evaluate Random Forest Classifier
```
rf_results = train_evaluate_model('RF', x_train, y_train, x_test, y_test)
results = results.append(rf_results, ignore_index=True)
rf_results
```
## 5. Train and Evaluate KNeighbors Classifier
```
knn_results = train_evaluate_model('KNN', x_train, y_train, x_test, y_test)
results = results.append(knn_results, ignore_index=True)
knn_results
```
## 6. Train and evaluate Decision Tree Classifier
```
dt_results = train_evaluate_model('DT', x_train, y_train, x_test, y_test)
results = results.append(dt_results, ignore_index=True)
dt_results
```
## 7. Train and evaluate Support Vector Machines Classifier
```
svm_results = train_evaluate_model('SVM', x_train, y_train, x_test, y_test)
results = results.append(svm_results, ignore_index=True)
svm_results
```
## 8. Train and evaluate Multilayer Perceptron Classifier
```
mlp_results = train_evaluate_model('MLP', x_train, y_train, x_test, y_test)
results = results.append(mlp_results, ignore_index=True)
mlp_results
```
## 9. Save results file
```
results.to_csv('RESULTS/models_results_real.csv', index=False)
results
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed training in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/distribute/keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Overview
The `tf.distribute.Strategy` API provides an abstraction for distributing your training
across multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.
This tutorial uses the `tf.distribute.MirroredStrategy`, which
does in-graph replication with synchronous training on many GPUs on one machine.
Essentially, it copies all of the model's variables to each processor.
Then, it uses [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) to combine the gradients from all processors and applies the combined value to all copies of the model.
`MirroredStrategy` is one of several distribution strategies available in TensorFlow core. You can read about more strategies at [distribution strategy guide](../../guide/distribute_strategy.ipynb).
### Keras API
This example uses the `tf.keras` API to build the model and training loop. For custom training loops, see [this tutorial](training_loops.ipynb).
## Import Dependencies
```
from __future__ import absolute_import, division, print_function
# Import TensorFlow
!pip install tensorflow-gpu==2.0.0-alpha0
import tensorflow_datasets as tfds
import tensorflow as tf
import os
```
## Download the dataset
Download the MNIST dataset and load it from [TensorFlow Datasets](https://www.tensorflow.org/datasets). This returns a dataset in `tf.data` format.
Setting `with_info` to `True` includes the metadata for the entire dataset, which is being saved here to `ds_info`.
Among other things, this metadata object includes the number of train and test examples.
```
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
```
## Define Distribution Strategy
Create a `MirroredStrategy` object. This will handle distribution, and provides a context manager (`tf.distribute.MirroredStrategy.scope`) to build your model inside.
```
strategy = tf.distribute.MirroredStrategy()
print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```
## Setup Input pipeline
If a model is trained on multiple GPUs, the batch size should be increased accordingly so as to make effective use of the extra computing power. Moreover, the learning rate should be tuned accordingly.
```
# You can also do ds_info.splits.total_num_examples to get the total
# number of examples in the dataset.
num_train_examples = ds_info.splits['train'].num_examples
num_test_examples = ds_info.splits['test'].num_examples
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
```
Pixel values, which are 0-255, [have to be normalized to the 0-1 range](https://en.wikipedia.org/wiki/Feature_scaling). Define this scale in a function.
```
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
```
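The `scale` function above casts each image to float and divides by 255 so pixel values land in [0, 1]. The same arithmetic in plain NumPy, for a quick sanity check:

```python
import numpy as np

image = np.array([0, 127, 255], dtype=np.uint8)
scaled = image.astype(np.float32) / 255  # uint8 [0, 255] -> float32 [0, 1]
print(scaled.min(), scaled.max())
assert scaled.dtype == np.float32
assert scaled.min() == 0.0 and scaled.max() == 1.0
```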
Apply this function to the training and test data, shuffle the training data, and [batch it for training](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch).
```
train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
```
## Create the model
Create and compile the Keras model in the context of `strategy.scope`.
```
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
```
## Define the callbacks.
The callbacks used here are:
* *Tensorboard*: This callback writes a log for Tensorboard which allows you to visualize the graphs.
* *Model Checkpoint*: This callback saves the model after every epoch.
* *Learning Rate Scheduler*: Using this callback, you can schedule the learning rate to change after every epoch/batch.
For illustrative purposes, add a print callback to display the *learning rate* in the notebook.
```
# Define the checkpoint directory to store the checkpoints
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
if epoch < 3:
return 1e-3
elif epoch >= 3 and epoch < 7:
return 1e-4
else:
return 1e-5
# Callback for printing the LR at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print ('\nLearning rate for epoch {} is {}'.format(epoch + 1,
model.optimizer.lr.numpy()))
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
```
## Train and evaluate
Now, train the model in the usual way, calling `fit` on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.
```
model.fit(train_dataset, epochs=10, callbacks=callbacks)
```
As you can see below, the checkpoints are getting saved.
```
# check the checkpoint directory
!ls {checkpoint_dir}
```
To see how the model performs, load the latest checkpoint and call `evaluate` on the test data.
Call `evaluate` as before using appropriate datasets.
```
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss, eval_acc = model.evaluate(eval_dataset)
print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```
To see the output, you can download and view the TensorBoard logs at the terminal.
```
$ tensorboard --logdir=path/to/log-directory
```
```
!ls -sh ./logs
```
## Export to SavedModel
If you want to export the graph and the variables, SavedModel is the best way of doing this. The model can be loaded back with or without the scope. Moreover, SavedModel is platform agnostic.
```
path = 'saved_model/'
tf.keras.experimental.export_saved_model(model, path)
```
Load the model without `strategy.scope`.
```
unreplicated_model = tf.keras.experimental.load_from_saved_model(path)
unreplicated_model.compile(
loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)
print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```
Load the model with `strategy.scope`.
```
with strategy.scope():
replicated_model = tf.keras.experimental.load_from_saved_model(path)
replicated_model.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```
## What's next?
Read the [distribution strategy guide](../../guide/distribute_strategy.ipynb).
Try the [Distributed Training with Custom Training Loops](training_loops.ipynb) tutorial.
Note: `tf.distribute.Strategy` is actively under development and we will be adding more examples and tutorials in the near future. Please give it a try. We welcome your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
# Create a cell (function for calculating the DGSLR index from the input data)
```
import numpy as np
from scipy import interp
import matplotlib.pyplot as plt
from itertools import cycle
# the module for the roc_curve value
from sklearn.metrics import *
# Configure the matplotlib backend for inline plotting in IPython
%matplotlib inline
# function for DGLSR index calculation
def calculateDGLSR(data):
'''
function for calculating the DGLSR index
@Params
data = the numpy array for the input data
@Return
calculated DGLSR index for the input data
'''
# The following weights array was derived by using the AHP process for determining
# the influencing parameters
weights_array = np.array([
3.00, # drainage density Very high
2.40, # --===---- High
1.80, # --===---- Moderate
1.20, # --===---- Low
0.60, # --===---- Very Low
6.60, # Geology Diveghat Formation
5.40, # --===---- Purandargarh formation
4.75, # Slope Very Steep
4.07, # --===---- Mod. Steep
3.39, # --===---- Strong
2.72, # --===---- Mod. Strong
2.03, # --===---- Gentle
1.36, # --===---- Very Gentle
0.68, # --===---- Nearly level
4.40, # Landform classi Plateau surface remnants
3.30, # --===---- Plateau fringe surface
2.20, # --===---- Buried Pediment
1.10, # --===---- Rolling Piedmont Plain
4.67, # Landuse/land cov Waste Land
3.73, # --===---- Forest/vegetation
2.80, # --===---- Agriculture Land
1.87, # --===---- Water Bodies
0.93, # --===---- Built-up land
8.33, # Rainfall < 900mm
6.67, # --===---- 900mm - 975mm
5.00, # --===---- 975mm - 1050mm
3.33, # --===---- 1050mm - 1100mm
1.67, # --===---- > 1100mm
3.33, # Runoff Very High
2.67, # --===---- High
2.00, # --===---- Moderate
1.33, # --===---- Low
0.67 # --===---- Very Low
])
print(data.shape, weights_array.shape)
return np.sum(data * weights_array) / 100 # formula for calculating the DGSLR index
# test for the Subwater shed 1
calculateDGLSR(np.array([
81.05,
11.17,
6.33,
1.44,
0.00,
48.26,
51.74,
1.81,
9.30,
15.31,
21.88,
11.10,
25.74,
14.87,
16.52,
59.98,
23.50,
0.00,
39.44,
51.41,
6.44,
0.06,
2.65,
0.00,
0.00,
20.07,
56.83,
23.10,
5.28,
31.65,
18.57,
4.40,
40.10
]))
```
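The index itself is just an AHP-weighted sum of class percentages, divided by 100 so the weights act as per-unit contributions. A toy sketch with the same structure (the three weights and percentages below are illustrative, not from the AHP table above):

```python
import numpy as np

# toy weighted-sum index with the same structure as calculateDGLSR:
# class percentages times AHP-derived weights, divided by 100
weights = np.array([3.0, 2.0, 1.0])
percentages = np.array([50.0, 30.0, 20.0])  # class shares of one factor, summing to 100
index = np.sum(percentages * weights) / 100
print(index)  # (150 + 60 + 20) / 100 = 2.3
```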
# The CSV file contains historical ground-water-level fluctuation records, which can be used to validate the DGSLR model with ROC-based validation.
```
# read the csv file into a numpy array.
validation_data = np.genfromtxt('../data/validation.csv', delimiter='\t', skip_header=1)
print(validation_data.shape) # print the shape of the numpy array
validation_data[:10] # print the first 10 rows of the data
```
# Now we transform this array into one-hot encoded values, such that the first array has the predicted class and the second array has the ground-truth/actual class
```
# function to produce a class for the water-level fluctuation (aka. the actual class)
def actual_class(value):
'''
function to give the priority class for the water-level fluctuation
@Param:
value = the water level fluctuation value
@Return
the numerical class for the value.
'''
# the implementation is a simple conditional ladder over the thresholds given in the Excel file.
if(value <= 3.07):
return 0 # priority is low
elif(value > 3.07 and value <= 5.20):
return 1 # priority is moderate
elif(value > 5.20 and value <= 7.77):
return 2 # priority is high
else:
return 3 # priority is very high
# function to produce a class for the DGSLR index value (aka. the predicted class)
def predicted_class(index):
'''
function to give the priority class for the DGSLR index value
@Param:
index = the DGSLR index as calculated
@Return
the numerical class for the value.
'''
# the implementation is a simple conditional ladder over the thresholds given in the Excel file.
if(index <= 28.02):
return 0 # priority is low
elif(index > 28.02 and index <= 28.72):
return 1 # priority is moderate
elif(index > 28.72 and index <= 29.42):
return 2 # priority is high
else:
return 3 # priority is very high
# number of classes is 4, so:
n_classes = 4
# initialize the two arrays to zero values
predictions = np.zeros(shape=(validation_data.shape[0], n_classes))
actual_values = np.zeros(shape=(validation_data.shape[0], n_classes))
(predictions[:3], actual_values[:3])
# loop through the validation_data and populate the predictions and the actual_values
for i in range(validation_data.shape[0]):
predictions[i, predicted_class(validation_data[i, 4])] = 1
actual_values[i, actual_class(validation_data[i, 3])] = 1
# print the predictions
predictions
# print the actual classes:
actual_values
# define the reverse label mappings for better visual representation:
reverse_labels_mappings = {
0: "Low priority",
1: "Moderate priority",
2: "High priority",
3: "Very high priority"
}
# now time to calculate the ROC_auc and generate the curve plots.
# first generate the curves as follows
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(actual_values[:, i], predictions[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# now plot the 4 roc curves using the calculations
# plot for all the labels
for i in range(n_classes):
plt.figure()
lw = 2
plt.plot(fpr[i], tpr[i], color='green',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[i])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristics for label: ' + reverse_labels_mappings[i])
plt.legend(loc="lower right")
plt.savefig("../ROC_plots/" + reverse_labels_mappings[i] + ".png")
plt.show()
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(actual_values.ravel(), predictions.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(10, 10))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue', 'green'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of {0} (area = {1:0.2f})'.format(reverse_labels_mappings[i], roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC plot containing all the curves')
plt.legend(loc="lower right")
plt.savefig("../ROC_plots/all_curves.png")
plt.show()
# Plot all ROC curves
plt.figure(figsize=(10, 10))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Micro and macro average plots for the earlier plots')
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
plt.plot([0, 1], [0, 1], 'k--', lw=lw, label='Random model line')
plt.legend(loc="lower right")
plt.savefig("../ROC_plots/micro_and_macro_average.png")
plt.show()
```
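A quick sanity check on the micro-averaging step: raveling the one-hot `actual_values` and `predictions` matrices reduces the multi-class problem to a single binary one, whose AUC equals the Mann-Whitney probability that a positive outranks a negative. A minimal numpy-only sketch of that equivalence, using made-up toy arrays:

```python
import numpy as np

def binary_auc(y_true, y_score):
    # AUC = P(score of a random positive > score of a random negative),
    # counting ties as one half (the Mann-Whitney U statistic).
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy one-hot matrices: 3 samples, 2 classes; two correct, one wrong.
actual = np.array([[1, 0], [0, 1], [1, 0]])
pred = np.array([[1, 0], [0, 1], [0, 1]])
print(binary_auc(actual.ravel().astype(int), pred.ravel().astype(float)))  # → 0.6666666666666666
```

This is the same quantity `roc_auc["micro"]` estimates above when the predictions are hard 0/1 labels.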
# matplotlib基础
- [API path](https://matplotlib.org/api/path_api.html)
- [Path Tutorial](https://matplotlib.org/tutorials/advanced/path_tutorial.html#sphx-glr-tutorials-advanced-path-tutorial-py)
As is well known, matplotlib figures are produced by artists drawing on a canvas through a renderer.
The API naturally splits into three layers:
- The canvas is the area the figure is drawn onto: matplotlib.backend_bases.FigureCanvas
- The renderer is the object that knows how to draw on the canvas: matplotlib.backend_bases.Renderer
- The artist is the object that knows how to use a renderer to paint onto the canvas: matplotlib.artist.Artist
FigureCanvas and Renderer handle all the details of talking to user-interface toolkits such as wxPython, or to drawing languages such as PostScript®, while the Artist handles all the high-level constructs such as representing and laying out the figure, text, and lines.
There are two kinds of artists: primitives and containers. Primitives are the standard graphical objects drawn onto the canvas, such as Line2D, Rectangle, Text, and AxesImage; containers are the places primitives live, such as Axis, Axes, and Figure. The standard usage is to create a Figure instance, use it to create one or more Axes or Subplot instances, and use the Axes helper methods to create the primitives.
Many people treat the Figure as the canvas; it is actually an artist that merely looks like one.
Since this is about the basics, let's start from the simplest place.
The path module handles all the polylines in matplotlib,
and the base class for handling polylines is Path.
Like MarkerStyle, Path derives from object rather than Artist.
Why do I know about MarkerStyle? While writing [Python可视化实践-手机篇], I wanted to turn Figure 1 from a scatter plot into a line plot, but a few things were unclear, so I decided to properly learn how plot draws line charts. We know that the plot method essentially configures Line2D instances; Line2D is a subclass of Artist, consisting of vertices plus the segments connecting them. The vertex markers, in turn, are drawn by the MarkerStyle class via Path.
Since Path is not a subclass of Artist, it cannot be drawn onto the canvas by the renderer, so matplotlib needs Artist subclasses to handle Paths; PathPatch and PathCollection are exactly such subclasses.
In fact, the Path object underlies all matplotlib.patches objects.
Besides a set of vertices serving as waypoints, a Path object also carries a set of commands drawn from 6 standard codes.
```python
import numpy as np

code_type = np.uint8
# Path codes
STOP = code_type(0) # 1 vertex
MOVETO = code_type(1) # 1 vertex
LINETO = code_type(2) # 1 vertex
CURVE3 = code_type(3) # 2 vertices
CURVE4 = code_type(4) # 3 vertices
CLOSEPOLY = code_type(79) # 1 vertex
#: A dictionary mapping Path codes to the number of vertices that the
#: code expects.
NUM_VERTICES_FOR_CODE = {STOP: 1,
MOVETO: 1,
LINETO: 1,
CURVE3: 2,
CURVE4: 3,
CLOSEPOLY: 1}
```
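These constants can be cross-checked against matplotlib's own Path class (a small sketch, assuming matplotlib is importable):

```python
from matplotlib.path import Path

# The code values and expected vertex counts match the excerpt above.
print(int(Path.MOVETO), int(Path.LINETO), int(Path.CLOSEPOLY))  # → 1 2 79
print(Path.NUM_VERTICES_FOR_CODE[Path.CURVE4])                  # → 3
```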
So instantiating a Path requires an (N, 2) array of vertices and an N-length array of path commands.
Words only go so far; let's illustrate with an example.
```
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches
%matplotlib inline
verts = [
(-0.5, -0.5), # left, bottom
(-0.5, 0.5), # left, top
( 0.5, 0.5), # right, top
( 0.5, -0.5), # right, bottom
(-0.5, -0.5), # ignored
]
codes = [
Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
path = Path(verts, codes)
patch = patches.PathPatch(path)
fig = plt.figure()
fig.add_artist(patch)
```
As you might expect, only one corner of the rectangle is visible, because the Figure's coordinate range is [(0, 1), (0, 1)].
Some say this looks like turtle graphics. The differences are actually substantial: the turtle has many more commands, and its style is "crawl forward 10 steps; turn left, crawl forward 10 steps; turn left, crawl forward 10 steps; turn left, crawl forward 10 steps!"
Back to the point: the simplest way to see the whole rectangle is to add an Axes coordinate space to the Figure and then draw inside that Axes, where the coordinate transform is handled automatically. Let's try again.
```
fig.clf()
ax = fig.add_subplot(111)
patch2 = patches.PathPatch(path)
ax.add_patch(patch2)
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
```
What happens if, instead of creating a new patch2, you add patch directly? Try it yourself if you are curious, and think about why.
We'll leave that question open for now.
```
verts = [
(0., 0.), # P0
(0.2, 1.), # P1
(1., 0.8), # P2
(0.8, 0.), # P3
]
codes = [
Path.MOVETO,
Path.CURVE4,
Path.CURVE4,
Path.CURVE4,
]
path = Path(verts, codes)
patch = patches.PathPatch(path)
fig, ax = plt.subplots()
ax.add_patch(patch)
fig
```
What if we supply fewer points?
```
verts = [
(0., 0.), # P0
(0.2, 1.), # P1
(1., 0.8), # P2
]
codes = [
Path.MOVETO,
Path.CURVE3,
Path.CURVE3,
]
path2 = Path(verts, codes)
patch2 = patches.PathPatch(path2, facecolor='none')
ax.cla()
ax.add_patch(patch2)
fig
```
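A Path is pure geometry, so it is useful beyond drawing; for instance, you can hit-test points against the rectangle from the earlier example without ever rendering anything (a small sketch):

```python
from matplotlib.path import Path

verts = [(-0.5, -0.5), (-0.5, 0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, -0.5)]
codes = [Path.MOVETO, Path.LINETO, Path.LINETO, Path.LINETO, Path.CLOSEPOLY]
square = Path(verts, codes)

# Point-in-path test: the center is inside, a far point is not.
print(square.contains_point((0.0, 0.0)))  # → True
print(square.contains_point((2.0, 2.0)))  # → False
```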
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Train and hyperparameter tune on Iris Dataset with Scikit-learn
In this tutorial, we demonstrate how to use the Azure ML Python SDK to train a support vector machine (SVM) on a single-node CPU with Scikit-learn to perform classification on the popular [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). We will also demonstrate how to perform hyperparameter tuning of the model using Azure ML's HyperDrive service.
## Prerequisites
* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML Workspace
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Diagnostics
Opt-in diagnostics for better experience, quality, and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
```
## Initialize workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
## Create AmlCompute
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget
# choose a name for your cluster
cluster_name = "cpu-cluster"
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
```
The above code retrieves an existing CPU compute target. Scikit-learn does not support GPU computing.
## Train model on the remote compute
Now that you have your data and training script prepared, you are ready to train on your remote compute. You can take advantage of Azure compute to leverage a CPU cluster.
### Create a project directory
Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on.
```
import os
project_folder = './sklearn-iris'
os.makedirs(project_folder, exist_ok=True)
```
### Prepare training script
Now you will need to create your training script. In this tutorial, the training script is already provided for you at `train_iris.py`. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.
However, if you would like to use Azure ML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of Azure ML code inside your training script.
In `train_iris.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML Run object within the script:
```python
from azureml.core.run import Run
run = Run.get_context()
```
Further within `train_iris.py`, we log the kernel and penalty parameters, and the highest accuracy the model achieves:
```python
run.log('Kernel type', str(args.kernel))
run.log('Penalty', float(args.penalty))
run.log('Accuracy', float(accuracy))
```
These run metrics will become particularly important when we begin hyperparameter tuning our model in the "Tune model hyperparameters" section.
Once your script is ready, copy the training script `train_iris.py` into your project directory.
```
import shutil
shutil.copy('train_iris.py', project_folder)
```
### Create an experiment
Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this Scikit-learn tutorial.
```
from azureml.core import Experiment
experiment_name = 'train_iris'
experiment = Experiment(ws, name=experiment_name)
```
### Create a Scikit-learn estimator
The Azure ML SDK's Scikit-learn estimator enables you to easily submit Scikit-learn training jobs for single-node runs. The following code will define a single-node Scikit-learn job.
```
from azureml.train.sklearn import SKLearn
script_params = {
'--kernel': 'linear',
'--penalty': 1.0,
}
estimator = SKLearn(source_directory=project_folder,
script_params=script_params,
compute_target=compute_target,
entry_script='train_iris.py',
pip_packages=['joblib']
)
```
The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`.
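On the script side, `entry_script` typically receives these values through `argparse`; a minimal sketch of the parsing `train_iris.py` would need (the argument names mirror `script_params`; the defaults are assumptions):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--kernel', type=str, default='rbf',
                    help='kernel type passed to the SVM')
parser.add_argument('--penalty', type=float, default=1.0,
                    help='penalty parameter C of the error term')
# Passing an explicit list stands in for the command line Azure ML builds.
args = parser.parse_args(['--kernel', 'linear', '--penalty', '1.0'])
print(args.kernel, args.penalty)  # → linear 1.0
```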
### Submit job
Run your experiment by submitting your estimator object. Note that this call is asynchronous.
```
run = experiment.submit(estimator)
```
## Monitor your run
You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
run.cancel()
```
## Tune model hyperparameters
Now that we've seen how to do a simple Scikit-learn training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities.
### Start a hyperparameter sweep
First, we will define the hyperparameter space to sweep over. Let's tune the `kernel` and `penalty` parameters. In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, `Accuracy`.
```
from azureml.train.hyperdrive.runconfig import HyperDriveRunConfig
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.parameter_expressions import choice
param_sampling = RandomParameterSampling( {
"--kernel": choice('linear', 'rbf', 'poly', 'sigmoid'),
"--penalty": choice(0.5, 1, 1.5)
}
)
hyperdrive_run_config = HyperDriveRunConfig(estimator=estimator,
hyperparameter_sampling=param_sampling,
primary_metric_name='Accuracy',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=12,
max_concurrent_runs=4)
```
Finally, launch the hyperparameter tuning job.
```
# start the HyperDrive run
hyperdrive_run = experiment.submit(hyperdrive_run_config)
```
## Monitor HyperDrive runs
You can monitor the progress of the runs with the following Jupyter widget.
```
RunDetails(hyperdrive_run).show()
hyperdrive_run.wait_for_completion(show_output=True)
```
### Find and register best model
When all jobs finish, we can find the one that achieved the highest accuracy.
```
best_run = hyperdrive_run.get_best_run_by_primary_metric()
print(best_run.get_details()['runDefinition']['arguments'])
```
Now, let's list the model files uploaded during the run.
```
print(best_run.get_file_names())
```
We can then register the folder (and all files in it) as a model named `sklearn-iris` under the workspace for deployment
```
model = best_run.register_model(model_name='sklearn-iris', model_path='model.joblib')
```
<h2> ======================================================</h2>
<h1>MA477 - Theory and Applications of Data Science</h1>
<h1>Lesson 12: Lab </h1>
<h4>Dr. Valmir Bucaj</h4>
United States Military Academy, West Point
AY20-2
<h2>======================================================</h2>
<h2>House Voting Dataset</h2>
In today's lecture we will be exploring the 1984 United States Congressional Voting Records via machine learning to obtain useful insights.
<h3>Description</h3>
This data set includes votes for each of the U.S. House of Representatives Congressmen on the 16 key votes identified by the CQA. The CQA lists nine different types of votes: voted for, paired for, and announced for (these three simplified to yea), voted against, paired against, and announced against (these three simplified to nay), voted present, voted present to avoid conflict of interest, and did not vote or otherwise make a position known (these three simplified to an unknown disposition).
Attribute Information:
Class Name: 2 (democrat, republican)
handicapped-infants: 2 (y,n)
water-project-cost-sharing: 2 (y,n)
adoption-of-the-budget-resolution: 2 (y,n)
physician-fee-freeze: 2 (y,n)
el-salvador-aid: 2 (y,n)
religious-groups-in-schools: 2 (y,n)
anti-satellite-test-ban: 2 (y,n)
aid-to-nicaraguan-contras: 2 (y,n)
mx-missile: 2 (y,n)
immigration: 2 (y,n)
synfuels-corporation-cutback: 2 (y,n)
education-spending: 2 (y,n)
superfund-right-to-sue: 2 (y,n)
crime: 2 (y,n)
duty-free-exports: 2 (y,n)
export-administration-act-south-africa: 2 (y,n)
Source
Origin:
Congressional Quarterly Almanac, 98th Congress, 2nd session 1984, Volume XL: Congressional Quarterly Inc. Washington, D.C., 1985.
https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records
Citation:
Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
<h2>Tasks</h2>
Use the following tasks to guide your work, but don't get limited by them. You are strongly encouraged to pursue any additional avenue that you feel is valuable.
<ul>
<li>Build a machine-learning classification model that predicts whether a member of Congress is a Democrat or a Republican based on how they voted on these 16 issues.
<ul>
<li>Describe how you are dealing with missing values.</li>
<li> Describe the choice of the model and the reasons for that choice.</li>
<li> What metric are you measuring? Accuracy? Recall? Precision? Why?</li>
<li> Build the ROC curve and compute AUC</li>
</ul>
</li>
<li>Explore the voting patterns of Democrats vs. Republicans. Explain what stands out.</li>
</ul>
```
import pandas as pd
df=pd.read_csv('house-votes.csv')
df.head()
df.shape
df.columns
```
Variable assignment:
0=n
1=y
0.5=?
```
for col in df.columns[1:]:
df[col]=df[col].apply(lambda x: 0 if x=='n' else 1 if x=='y' else 0.5)
df.head()
df[df.columns[0]]=df[df.columns[0]].apply(lambda x: 1 if x=='republican' else 0)
df.head()
```
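For the exploration task, a per-party mean of each 0/0.5/1 column already exposes the voting pattern; a minimal sketch on a tiny made-up frame (the real `df` has the same shape after the mapping above):

```python
import pandas as pd

# Synthetic stand-in for the voting data: 1 = republican, 0 = democrat,
# issue columns already mapped to 0 / 0.5 / 1 as above.
toy = pd.DataFrame({
    'party':      [1, 1, 0, 0, 0],
    'crime':      [1, 1, 0, 0.5, 0],
    'mx-missile': [0, 0, 1, 1, 0.5],
})
# Mean vote per party: values near 1 mean the party mostly voted 'yea'.
pattern = toy.groupby('party').mean()
print(pattern)
```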
# Combine DESI Imaging ccds for DR9
The eboss ccd files did not all have the same dtype, so we could not easily combine them. We have to enforce a common dtype on all of them.
```
# import modules
import fitsio as ft
import numpy as np
from glob import glob
# read files
ccdsn = glob('/home/mehdi/data/templates/ccds/dr9/ccds-annotated-*.fits')
print(ccdsn) # ccdfiles names
prt_keep = ['camera', 'filter', 'fwhm', 'mjd_obs', 'exptime',
'ra', 'dec', 'ra0','ra1','ra2','ra3','dec0','dec1','dec2','dec3',
'galdepth', 'ebv', 'airmass', 'ccdskycounts', 'pixscale_mean', 'ccdzpt']
# read one file to check the columns
d = ft.read(ccdsn[0], columns=prt_keep)
print(d.dtype)
# attrs for the general quicksip
# 'crval1', 'crval2', 'crpix1', 'crpix2', 'cd1_1',
# 'cd1_2', 'cd2_1', 'cd2_2', 'width', 'height'
# dtype = np.dtype([('filter', 'S1'), ('exptime', '>f4'), ('mjd_obs', '>f8'), ('airmass', '>f4'),\
# ('fwhm', '>f4'), ('width', '>i2'), ('height', '>i2'), ('crpix1', '>f4'), ('crpix2', '>f4'),\
# ('crval1', '>f8'), ('crval2', '>f8'), ('cd1_1', '>f4'), ('cd1_2', '>f4'), ('cd2_1', '>f4'),\
# ('cd2_2', '>f4'), ('ra', '>f8'), ('dec', '>f8'), ('ccdzpt', '>f4'), ('ccdskycounts', '>f4'),
# ('pixscale_mean', '>f4'), ('ebv', '>f4'), ('galdepth', '>f4')])
#
# only read & combine the following columns
# this is what the pipeline needs to make the MJD maps
prt_keep = ['camera', 'filter', 'fwhm', 'mjd_obs', 'exptime',
'ra', 'dec', 'ra0','ra1','ra2','ra3','dec0','dec1','dec2','dec3',
'galdepth', 'ebv', 'airmass', 'ccdskycounts', 'pixscale_mean', 'ccdzpt']
# camera could be different for 90prime, decam, mosaic -- we pick S7
dtype = np.dtype([('camera', '<U7'),('filter', '<U1'), ('exptime', '>f4'), ('mjd_obs', '>f8'),
('airmass', '>f4'), ('fwhm', '>f4'), ('ra', '>f8'), ('dec', '>f8'), ('ccdzpt', '>f4'),
('ccdskycounts', '>f4'), ('ra0', '>f8'), ('dec0', '>f8'), ('ra1', '>f8'),
('dec1', '>f8'), ('ra2', '>f8'), ('dec2', '>f8'), ('ra3', '>f8'), ('dec3', '>f8'),
('pixscale_mean', '>f4'), ('ebv', '>f4'), ('galdepth', '>f4')])
def fixdtype(data_in, indtype=dtype):
m = data_in.size
data_out = np.zeros(m, dtype=indtype)
for name in indtype.names:
data_out[name] = data_in[name].astype(indtype[name])
return data_out
#
# read each ccd file > fix its dtype > move on to the next
ccds_data = []
for ccd_i in ccdsn:
print('working on .... %s'%ccd_i)
data_in = ft.FITS(ccd_i)[1].read(columns=prt_keep)
#print(data_in.dtype)
data_out = fixdtype(data_in)
print('number of ccds in this file : %d'%data_in.size)
print('number of different dtypes (before) : %d'%len(np.setdiff1d(dtype.descr, data_in.dtype.descr)), np.setdiff1d(dtype.descr, data_in.dtype.descr))
print('number of different dtypes (after) : %d'%len(np.setdiff1d(dtype.descr, data_out.dtype.descr)), np.setdiff1d(dtype.descr, data_out.dtype.descr))
ccds_data.append(data_out)
ccds_data_c = np.concatenate(ccds_data)
print('Total number of combined ccds : %d'%ccds_data_c.size)
ft.write('/home/mehdi/data/templates/ccds/dr9/ccds-annotated-dr9-combined.fits',
ccds_data_c, header=dict(NOTE='dr9 combined'), clobber=True)
```
# Brazilian Newspaper analysis
In this project, we'll use a dataset from a Brazilian Newspaper called "Folha de São Paulo".
We're going to use word embeddings, TensorBoard, and RNNs to search for political opinions and positions.
You can find the dataset at [kaggle](https://www.kaggle.com/marlesson/news-of-the-site-folhauol).
I want to find in this study case:
+ Political opinions
+ Check if this newspaper is impartial or biased
## Skip-gram model
Let's use a word embedding model to find the relationships between words in the articles. Our model will learn how words relate to one another, and we'll inspect these relationships in TensorBoard and in a t-SNE chart (projecting the embeddings into 2D).
We have two options to use: CBOW (Continuous Bag-Of-Words) and Skip-gram.
In our case we'll use Skip-gram because it performs better than CBOW.
The models work like this:

In CBOW we get some words around another word and try to predict the "middle" word.
In Skip-gram we do the opposite, we get one word and try to predict the words around it.
## Loading the data
After downloading the dataset, put it on a directory `data/` and let's load it using pandas.
**Using python 3.6 and tensorflow 1.3**
```
# Import dependencies
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib
import os
import pickle
import random
import time
import math
from collections import Counter
dataset = pd.read_csv('data/articles.csv')
dataset.head()
```
## Preprocessing the data
### Removing unnecessary articles
We are trying to find political opinions. So, let's take only the articles in category 'poder' (power).
```
political_dataset = dataset.loc[dataset.category == 'poder']
political_dataset.head()
```
### Merging title and text
To keep each article's title and body text related, let's merge them together and use the merged text as our input
```
# Merges the title and text with a separator (---)
merged_text = [str(title) + ' ---- ' + str(text) for title, text in zip(political_dataset.title, political_dataset.text)]
print(merged_text[0])
```
### Tokenizing punctuation
We need to tokenize all text punctuation; otherwise the network will see punctuated words as different words (e.g. hello != hello!)
```
def token_lookup():
tokens = {
'.' : 'period',
',' : 'comma',
'"' : 'quote',
'\'' : 'single-quote',
';' : 'semicolon',
':' : 'colon',
'!' : 'exclamation-mark',
'?' : 'question-mark',
'(' : 'parentheses-left',
')' : 'parentheses-right',
'[' : 'brackets-left',
']' : 'brackets-right',
'{' : 'braces-left',
'}' : 'braces-right',
'_' : 'underscore',
'--' : 'dash',
'\n' : 'return'
}
return {token: '||{0}||'.format(value) for token, value in tokens.items()}
token_dict = token_lookup()
tokenized_text = []
for text in merged_text:
for key, token in token_dict.items():
text = text.replace(key, ' {} '.format(token))
tokenized_text.append(text)
print(tokenized_text[0])
```
### Lookup tables
We need to create two dicts: `vocab_to_int` and `int_to_vocab`.
```
def lookup_tables(tokenized_text):
vocab = set()
for text in tokenized_text:
text = text.lower()
vocab = vocab.union(set(text.split()))
vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
return vocab, vocab_to_int, int_to_vocab
vocab, vocab_to_int, int_to_vocab = lookup_tables(tokenized_text)
print('First ten vocab words: ')
print(list(vocab_to_int.items())[0:10])
print('\nVocab length:')
print(len(vocab_to_int))
pickle.dump((tokenized_text, vocab, vocab_to_int, int_to_vocab, token_dict), open('preprocess/preprocess.p', 'wb'))
```
### Convert all text to integers
Let's convert all articles to integer using the `vocab_to_int` variable.
```
tokenized_text, vocab, vocab_to_int, int_to_vocab, token_dict = pickle.load(open('preprocess/preprocess.p', mode='rb'))
def text_to_int(text):
int_text = []
for word in text.split():
if word in vocab_to_int.keys():
int_text.append(vocab_to_int[word])
return np.asarray(int_text, dtype=np.int32)
def convert_articles_to_int(tokenized_text):
all_int_text = []
for text in tokenized_text:
all_int_text.append(text_to_int(text))
return np.asarray(all_int_text)
converted_text = convert_articles_to_int(tokenized_text)
pickle.dump((converted_text, vocab, vocab_to_int, int_to_vocab, token_dict), open('preprocess/preprocess2.p', 'wb'))
converted_text, vocab, vocab_to_int, int_to_vocab, token_dict = pickle.load(open('preprocess/preprocess2.p', mode='rb'))
converted_text[3]
```
### Subsampling text
We need to subsample our text and remove the words that do not provide meaningful information, like: 'the', 'of', 'for'.
Let's use Mikolov's subsampling formula, which gives the probability that a word is discarded:
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
Where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
```
# Converts all articles to one big text
all_converted_text = np.concatenate(converted_text)
def subsampling(int_words, threshold=1e-5):
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
return np.asarray(train_words)
subsampled_text = subsampling(all_converted_text)
print('Length before subsampling: {0}'.format(len(all_converted_text)))
print('Length after subsampling: {0}'.format(len(subsampled_text)))
pickle.dump((subsampled_text, vocab, vocab_to_int, int_to_vocab, token_dict), open('preprocess/preprocess3.p', 'wb'))
subsampled_text, vocab, vocab_to_int, int_to_vocab, token_dict = pickle.load(open('preprocess/preprocess3.p', mode='rb'))
```
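Plugging numbers into the formula shows why this helps: at the default threshold $t = 10^{-5}$, a word making up 1% of all tokens is dropped about 97% of the time, while a word at the threshold frequency is never dropped. A quick check:

```python
import math

def drop_prob(freq, threshold=1e-5):
    # Mikolov subsampling: probability that a token of this word is discarded.
    return 1 - math.sqrt(threshold / freq)

print(round(drop_prob(0.01), 3))  # → 0.968 (very frequent word, mostly dropped)
print(drop_prob(1e-5))            # → 0.0 (word at threshold frequency, always kept)
```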
### Save vocab to TSV
Let's save our vocab to a TSV file so that we can use it as embedding metadata in TensorBoard.
```
subsampled_ints = set(subsampled_text)
subsampled_vocab = []
for word in subsampled_ints:
subsampled_vocab.append(int_to_vocab[word])
vocab_df = pd.DataFrame.from_dict(int_to_vocab, orient='index')
vocab_df.head()
vocab_df.to_csv('preprocess/vocab.tsv', header=False, index=False)
```
### Generate batches
Now we need a batch generator that turns the integer-encoded text into `(inputs, targets)` skip-gram pairs.
```
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
words = words.flat
words = list(words)
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
```
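To see what one skip-gram window looks like, here is the windowing logic exercised on a toy sequence (re-declared inline on a plain list so it runs standalone; the random window width R is the same trick as in `get_target` above):

```python
import numpy as np

def get_target(words, idx, window_size=5):
    # Random-width window around idx, as above, but on a plain list.
    R = np.random.randint(1, window_size + 1)
    start = max(idx - R, 0)
    return list(set(words[start:idx] + words[idx + 1:idx + R + 1]))

np.random.seed(42)
words = list(range(10))
targets = get_target(words, idx=5, window_size=3)
print(sorted(targets))  # some neighbors of word 5, never word 5 itself
```

Each target word is paired with the center word, so frequent center words get a variable number of training pairs per batch.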
## Building the Embedding Graph
```
def get_embed_placeholders(graph, reuse=False):
with graph.as_default():
with tf.variable_scope('placeholder', reuse=reuse):
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
learning_rate = tf.placeholder(tf.float32, [], name='learning_rate')
return inputs, labels, learning_rate
def get_embed_embeddings(graph, vocab_size, embedding_size, inputs, reuse=False):
with graph.as_default():
with tf.variable_scope('embedding', reuse=reuse):
embedding = tf.Variable(tf.random_uniform((vocab_size, embedding_size),
-0.5 / embedding_size,
0.5 / embedding_size))
embed = tf.nn.embedding_lookup(embedding, inputs)
return embed
def get_nce_weights_biases(graph, vocab_size, embedding_size, reuse=False):
with graph.as_default():
with tf.variable_scope('nce', reuse=reuse):
nce_weights = tf.Variable(tf.truncated_normal((vocab_size, embedding_size),
stddev=1.0/math.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros(vocab_size))
# Histogram for TensorBoard
tf.summary.histogram('weights', nce_weights)
tf.summary.histogram('biases', nce_biases)
return nce_weights, nce_biases
def get_embed_loss(graph, num_sampled, nce_weights, nce_biases, labels, embed, vocab_size, reuse=False):
with graph.as_default():
with tf.variable_scope('nce', reuse=reuse):
loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(weights=nce_weights,
biases=nce_biases,
labels=labels,
inputs=embed,
num_sampled=num_sampled,
num_classes=vocab_size))
# Scalar for tensorboard
tf.summary.scalar('loss', loss)
return loss
def get_embed_opt(graph, learning_rate, loss, reuse=False):
with graph.as_default():
with tf.variable_scope('optimizer'):
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
return optimizer
def train_embed(graph,
batch_size,
learning_rate,
epochs,
window_size,
train_words,
num_sampled,
embedding_size,
vocab_size,
save_dir,
print_every):
with tf.Session(graph=graph) as sess:
inputs, labels, lr = get_embed_placeholders(graph)
embed = get_embed_embeddings(graph, vocab_size, embedding_size, inputs)
nce_weights, nce_biases = get_nce_weights_biases(graph, vocab_size, embedding_size, reuse=True)
loss = get_embed_loss(graph, num_sampled, nce_weights, nce_biases, labels, embed, vocab_size, reuse=True)
optimizer = get_embed_opt(graph, learning_rate, loss, reuse=True)
merged_summary = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(save_dir)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
avg_loss = 0
iteration = 1
for e in range(1, epochs + 1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {
inputs: x,
labels: np.array(y)[:, None]
}
summary, _, train_loss = sess.run([merged_summary, optimizer, loss], feed_dict=feed)
avg_loss += train_loss
train_writer.add_summary(summary, iteration)
if iteration % print_every == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Batch: {}".format(iteration),
"Training loss: {:.4f}".format(avg_loss/print_every),
"Speed: {:.4f} sec/batch".format((end-start)/print_every))
avg_loss = 0
start = time.time()
#break
iteration += 1
save_path = saver.save(sess, save_dir + '/embed.ckpt')
epochs = 10
learning_rate = 0.01
window_size = 10
batch_size = 1024
num_sampled = 100
embedding_size = 200
vocab_size = len(vocab_to_int)
save_dir = 'checkpoints/embed/train'
print_every = 1000
tf.reset_default_graph()
embed_train_graph = tf.Graph()
train_embed(embed_train_graph,
batch_size,
learning_rate,
epochs,
window_size,
subsampled_text,
num_sampled,
embedding_size,
vocab_size,
save_dir,
print_every
)
```

# Join
Copyright (c) Microsoft Corporation. All rights reserved.<br>
Licensed under the MIT License.<br>
In Data Prep you can easily join two Dataflows.
```
import azureml.dataprep as dprep
```
First, get the left side of the data into a shape that is ready for the join.
```
# get the first Dataflow and derive desired key column
dflow_left = dprep.read_csv(path='https://dpreptestfiles.blob.core.windows.net/testfiles/BostonWeather.csv')
dflow_left = dflow_left.derive_column_by_example(source_columns='DATE', new_column_name='date_timerange',
example_data=[('11/11/2015 0:54', 'Nov 11, 2015 | 12AM-2AM'),
('2/1/2015 0:54', 'Feb 1, 2015 | 12AM-2AM'),
('1/29/2015 20:54', 'Jan 29, 2015 | 8PM-10PM')])
dflow_left = dflow_left.drop_columns(['DATE'])
# convert types and summarize data
dflow_left = dflow_left.set_column_types(type_conversions={'HOURLYDRYBULBTEMPF': dprep.TypeConverter(dprep.FieldType.DECIMAL)})
dflow_left = dflow_left.filter(expression=~dflow_left['HOURLYDRYBULBTEMPF'].is_error())
dflow_left = dflow_left.summarize(group_by_columns=['date_timerange'],summary_columns=[dprep.SummaryColumnsValue('HOURLYDRYBULBTEMPF', dprep.api.engineapi.typedefinitions.SummaryFunction.MEAN, 'HOURLYDRYBULBTEMPF_Mean')] )
# cache the result so the steps above are not executed every time we pull on the data
import os
from pathlib import Path
cache_dir = str(Path(os.getcwd(), 'dataflow-cache'))
dflow_left.cache(directory_path=cache_dir)
dflow_left.head(5)
```
Now let's prepare the data for the right side of the join.
```
# get the second Dataflow and desired key column
dflow_right = dprep.read_csv(path='https://dpreptestfiles.blob.core.windows.net/bike-share/*-hubway-tripdata.csv')
dflow_right = dflow_right.keep_columns(['starttime', 'start station id'])
dflow_right = dflow_right.derive_column_by_example(source_columns='starttime', new_column_name='l_date_timerange',
example_data=[('2015-01-01 00:21:44', 'Jan 1, 2015 | 12AM-2AM')])
dflow_right = dflow_right.drop_columns('starttime')
# cache the results
dflow_right.cache(directory_path=cache_dir)
dflow_right.head(5)
```
There are three ways you can join two Dataflows in Data Prep:
1. Create a `JoinBuilder` object for interactive join configuration.
2. Call ```join()``` on one of the Dataflows and pass in the other along with all other arguments.
3. Call ```Dataflow.join()``` method and pass in two Dataflows along with all other arguments.
We will explore the builder object as it simplifies the determination of correct arguments.
```
# construct a builder for joining dataflow_l with dataflow_r
join_builder = dflow_left.builders.join(right_dataflow=dflow_right, left_column_prefix='l', right_column_prefix='r')
join_builder
```
So far the builder has no properties set except default values.
From here you can set each of the options and preview its effect on the join result or use Data Prep to determine some of them.
Let's start by determining appropriate column prefixes for the left and right sides of the join, along with the lists of columns that would not conflict and therefore don't need to be prefixed.
```
join_builder.detect_column_info()
join_builder
```
You can see that Data Prep has performed a pull on both Dataflows to determine their column names. Since `dataflow_r` already had a column starting with `l_`, a new prefix was generated that does not collide with any column names already present.
Additionally, columns in each Dataflow that won't conflict during the join remain unprefixed.
This approach to column naming is crucial for keeping the join robust to schema changes in the data. Say that at some point in the future the data consumed by the left Dataflow also gains an `l_date_timerange` column.
Configured as above, the join will still run as expected, and the new column will be prefixed with `l2_`, ensuring that if the column `l_date_timerange` is consumed by some other future transformation, it remains unaffected.
Note: `KEY_generated` is appended to both lists and is reserved for Data Prep use in case Autojoin is performed.
### Autojoin
Autojoin is a Data Prep feature that determines suitable join arguments given the data on both sides. In some cases, Autojoin can even derive a key column from a number of available columns in the data.
Here is how you can use Autojoin:
```
# generate join suggestions
join_builder.generate_suggested_join()
# list generated suggestions
join_builder.list_join_suggestions()
```
Now let's select the first suggestion and preview the result of the join.
```
# apply first suggestion
join_builder.apply_suggestion(0)
join_builder.preview(10)
```
Now, get our new joined Dataflow.
```
dflow_autojoined = join_builder.to_dataflow().drop_columns(['l_date_timerange'])
```
### Joining two Dataflows without pulling the data
If you don't want to pull on data and know what join should look like, you can always use the join method on the Dataflow.
```
dflow_joined = dprep.Dataflow.join(left_dataflow=dflow_left,
right_dataflow=dflow_right,
join_key_pairs=[('date_timerange', 'l_date_timerange')],
left_column_prefix='l2_',
right_column_prefix='r_')
dflow_joined.head(5)
dflow_joined = dflow_joined.filter(expression=dflow_joined['r_start station id'] == '67')
df = dflow_joined.to_pandas_dataframe()
df
```
# Credit Risk Resampling Techniques
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
```
# Read the CSV and Perform Basic Data Cleaning
```
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership",
"annual_inc", "verification_status", "issue_d", "loan_status",
"pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d",
"collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Load the data
file_path = Path('../Resources/LoanStats_2019Q1.csv.zip')
df = pd.read_csv(file_path, skiprows=1)[:-2]
df = df.loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
df.reset_index(inplace=True, drop=True)
df.head()
```
# Split the Data into Training and Testing
```
# Create our features
X = # YOUR CODE HERE
# Create our target
y = # YOUR CODE HERE
X.describe()
# Check the balance of our target values
y['loan_status'].value_counts()
# Create X_train, X_test, y_train, y_test
# YOUR CODE HERE
```
## Data Pre-Processing
Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the feature data (`X_train` and `X_test`).
```
# Create the StandardScaler instance
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Fit the Standard Scaler with the training data
# When fitting scaling functions, only train on the training dataset
# YOUR CODE HERE
# Scale the training and testing data
# YOUR CODE HERE
```
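As a minimal sketch of what those scaling placeholders might contain, using toy arrays as stand-ins for `X_train` and `X_test` (the real ones come from the train/test split above):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy stand-ins for X_train / X_test (illustrative only)
X_train = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
X_test = np.array([[2.0, 500.0]])

# Fit the scaler on the training features ONLY, then apply it to both sets
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Each training column now has mean 0 and unit variance
print(X_train_scaled.mean(axis=0))  # ~[0. 0.]
```

Fitting only on the training set keeps information from the test set from leaking into the model.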
# Oversampling
In this section, you will compare two oversampling algorithms to determine which results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
### Naive Random Oversampling
```
# Resample the training data with the RandomOversampler
# YOUR CODE HERE
# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE
# Calculate the balanced accuracy score
# YOUR CODE HERE
# Display the confusion matrix
# YOUR CODE HERE
# Print the imbalanced classification report
# YOUR CODE HERE
```
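As a hedged sketch of the mechanics behind those placeholders: naive random oversampling simply duplicates minority-class rows at random until the classes balance. In plain NumPy (toy data standing in for `X_train`/`y_train`; imblearn's `RandomOverSampler` does this for you):

```python
import numpy as np
from collections import Counter

rng = np.random.RandomState(1)  # random state of 1, per the note above

# Toy imbalanced data: 10 low_risk rows, 2 high_risk rows (illustrative)
X = np.arange(24, dtype=float).reshape(12, 2)
y = np.array(['low_risk'] * 10 + ['high_risk'] * 2)

# Sample each class WITH replacement until every class has as many
# rows as the largest class
classes, counts = np.unique(y, return_counts=True)
n_max = counts.max()
picked = np.concatenate([
    rng.choice(np.flatnonzero(y == cls), size=n_max, replace=True)
    for cls in classes
])
X_resampled, y_resampled = X[picked], y[picked]

print(Counter(y_resampled))  # both classes now have 10 rows
```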
### SMOTE Oversampling
```
# Resample the training data with SMOTE
# YOUR CODE HERE
# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE
# Calculate the balanced accuracy score
# YOUR CODE HERE
# Display the confusion matrix
# YOUR CODE HERE
# Print the imbalanced classification report
# YOUR CODE HERE
```
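For intuition, SMOTE differs from naive duplication by interpolating new synthetic minority points between existing ones. A rough sketch of that idea in plain NumPy (the `smote_like` helper is hypothetical and simplified; imblearn's real SMOTE picks the second point among k-nearest neighbors):

```python
import numpy as np

rng = np.random.RandomState(1)

# Toy minority-class feature matrix (illustrative)
X_min = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0]])

def smote_like(X, n_new, rng):
    """Generate n_new synthetic points on segments between random pairs
    of existing minority samples (a simplification of SMOTE)."""
    i = rng.randint(len(X), size=n_new)
    j = rng.randint(len(X), size=n_new)
    gap = rng.rand(n_new, 1)          # position along each segment
    return X[i] + gap * (X[j] - X[i])

X_synth = smote_like(X_min, n_new=5, rng=rng)
print(X_synth.shape)  # (5, 2)
```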
# Undersampling
In this section, you will test an undersampling algorithm to determine whether it outperforms the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
```
# Resample the data using the ClusterCentroids resampler
# YOUR CODE HERE
# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE
# Calculate the balanced accuracy score
# YOUR CODE HERE
# Display the confusion matrix
# YOUR CODE HERE
# Print the imbalanced classification report
# YOUR CODE HERE
```
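For intuition, the Cluster Centroids strategy condenses the majority class into the centroids of a k-means clustering, with k equal to the minority-class count. A rough sketch of that idea on toy data (not imblearn's exact implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(1)

# Toy data: 20 majority points, 4 minority points (illustrative)
X_maj = rng.normal(0, 1, size=(20, 2))
X_min = rng.normal(3, 1, size=(4, 2))

# Replace the majority class with k-means centroids, k = minority count
km = KMeans(n_clusters=len(X_min), n_init=10, random_state=1).fit(X_maj)
X_maj_reduced = km.cluster_centers_

X_resampled = np.vstack([X_maj_reduced, X_min])
y_resampled = np.array([0] * len(X_maj_reduced) + [1] * len(X_min))
print(np.bincount(y_resampled))  # [4 4] -- classes are now balanced
```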
# Combination (Over and Under) Sampling
In this section, you will test a combined over- and under-sampling algorithm to determine whether it outperforms the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
```
# Resample the training data with SMOTEENN
# YOUR CODE HERE
# Train the Logistic Regression model using the resampled data
# YOUR CODE HERE
# Calculate the balanced accuracy score
# YOUR CODE HERE
# Display the confusion matrix
# YOUR CODE HERE
# Print the imbalanced classification report
# YOUR CODE HERE
```
```
import numpy as np
import random
from tqdm import *
import os
import sklearn.preprocessing
from utils import *
from graph_utils import *
from rank_metrics import *
import time
params = get_cmdline_params()
model_name = "STHgraph_{}_{}_step{}".format(params.walk_type, params.modelinfo, params.walk_steps)
##################################################################################################
nameManager = createGraphNameManager(params.dataset)
data = Load_Graph_Dataset(nameManager.bow_fn)
print('num train:{}'.format(data.n_trains))
print('num test:{}'.format(data.n_tests))
print('num vocabs:{}'.format(data.n_feas))
print('num labels:{}'.format(data.n_tags))
##################################################################################################
train_graph = GraphData(nameManager.train_graph)
test_graph = GraphData(nameManager.test_graph)
#################################################################################################
from scipy.sparse.linalg import eigsh
from scipy.sparse import coo_matrix
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC
class STH:
def __init__(self, num_bits):
super(STH, self).__init__()
self.num_bits = num_bits
self.clfs = [LinearSVC() for n in range(num_bits)]
def create_weight_matrix(self, train_mat, num_train, graph):
columns = []
rows = []
weights = []
for node_id in range(num_train):
col = graph.graph[node_id]
#col = DFS_walk(graph, node_id, 20)
#col = second_order_neighbor_walk(graph, node_id)
#print(node_id)
if len(col) <= 0:
col = [node_id]
#assert(len(col) > 0)
row = [node_id] * len(col)
w = cosine_similarity(train_mat[node_id], train_mat[col])
#w = [[0.9] * len(col)]
columns += col
rows += row
weights += list(w[0])
W = coo_matrix((weights, (rows, columns)), shape=(num_train, num_train))
return W
def fit_transform(self, train_mat, num_train, graph):
W = self.create_weight_matrix(train_mat, num_train, graph)
D = np.asarray(W.sum(axis=1)).squeeze() + 0.0001 # add a small damping value for numerical stability
D = scipy.sparse.diags(D)
L = D - W
L = scipy.sparse.csc_matrix(L)
D = scipy.sparse.csc_matrix(D)
num_attempts = 0
max_attempts = 3
success = False
while not success:
E, Y = eigsh(L, k=self.num_bits+1, M=D, which='SM')
success = np.all(np.isreal(Y))
if not success:
print("Warning: Some eigenvalues are not real values. Retry to solve Eigen-decomposition.")
num_attempts += 1
if num_attempts > max_attempts:
assert(np.all(np.isreal(Y))) # if this fails, re-run fit again
assert(False) # Check your data
Y = np.real(Y)
Y = Y[:, 1:]
medHash = MedianHashing()
cbTrain = medHash.fit_transform(Y)
for b in range(0, cbTrain.shape[1]):
self.clfs[b].fit(train_mat, cbTrain[:, b])
return cbTrain
def transform(self, test_mat, num_test):
cbTest = np.zeros((num_test, self.num_bits), dtype=np.int64)
for b in range(0, self.num_bits):
cbTest[:,b] = self.clfs[b].predict(test_mat)
return cbTest
os.environ["CUDA_VISIBLE_DEVICES"]=params.gpu_num
sth_model = STH(params.nbits)
cbTrain = sth_model.fit_transform(data.train, data.n_trains, train_graph)
cbTest = sth_model.transform(data.test, data.n_tests)
gnd_train = data.gnd_train.toarray()
gnd_test = data.gnd_test.toarray()
eval_results = DotMap()
top_k_indices = retrieveTopKDoc(cbTrain, cbTest, batchSize=params.test_batch_size, TopK=100)
relevances = countNumRelevantDoc(gnd_train, gnd_test, top_k_indices)
relevances = relevances.cpu().numpy()
eval_results.ndcg_at_5 = np.mean([ndcg_at_k(r, 5) for r in relevances[:, :5]])
eval_results.ndcg_at_10 = np.mean([ndcg_at_k(r, 10) for r in relevances[:, :10]])
eval_results.ndcg_at_20 = np.mean([ndcg_at_k(r, 20) for r in relevances[:, :20]])
eval_results.ndcg_at_50 = np.mean([ndcg_at_k(r, 50) for r in relevances[:, :50]])
eval_results.ndcg_at_100 = np.mean([ndcg_at_k(r, 100) for r in relevances[:, :100]])
relevances = (relevances > 0)
eval_results.prec_at_5 = np.mean(np.sum(relevances[:, :5], axis=1)) / 100
eval_results.prec_at_10 = np.mean(np.sum(relevances[:, :10], axis=1)) / 100
eval_results.prec_at_20 = np.mean(np.sum(relevances[:, :20], axis=1)) / 100
eval_results.prec_at_50 = np.mean(np.sum(relevances[:, :50], axis=1)) / 100
eval_results.prec_at_100 = np.mean(np.sum(relevances[:, :100], axis=1)) / 100
best_results = EvalResult(eval_results)
print('*' * 80)
model_name = "STH_graph"
if params.save:
import scipy.io
data_path = os.path.join(os.environ['HOME'], 'projects/graph_embedding/save_bincode', params.dataset)
save_fn = os.path.join(data_path, '{}.bincode.{}.mat'.format(model_name, params.nbits))
print("save the binary code to {} ...".format(save_fn))
cbTrain = sth_model.fit_transform(data.train, data.n_trains, train_graph)
cbTest = sth_model.transform(data.test, data.n_tests)
scipy.io.savemat(save_fn, mdict={'train': cbTrain, 'test': cbTest})
print('save data to {}'.format(save_fn))
if params.save_results:
fn = "results/{}/results.{}.csv".format(params.dataset, params.nbits)
save_eval_results(fn, model_name, best_results)
print('*' * 80)
print("{}".format(model_name))
metrics = ['prec_at_{}'.format(n) for n in ['5', '10', '20', '50', '100']]
prec_results = ",".join(["{:.3f}".format(best_results.best_scores[metric]) for metric in metrics])
print("prec: {}".format(prec_results))
metrics = ['ndcg_at_{}'.format(n) for n in ['5', '10', '20', '50', '100']]
ndcg_results = ",".join(["{:.3f}".format(best_results.best_scores[metric]) for metric in metrics])
print("ndcg: {}".format(ndcg_results))
```
# <font color='Purple'>Gravitational Wave Generation Array</font>
A phased array of dumbbells can make a detectable signal...
#### To do:
1. Calculate the dumbbell parameters for a given mass and frequency
1. How many dumbbells?
1. Far-field radiation pattern from many radiators.
1. Beamed GW won't be a plane wave. So what?
1. How much energy is lost to keep it spinning?
1. How do we levitate while spinning?
##### Related work on GW radiation
1. https://www.mit.edu/~iancross/8901_2019A/readings/Quadrupole-GWradiation-Ferrari.pdf
1. Wikipedia article on the GW Quadrupole formula (https://en.wikipedia.org/wiki/Quadrupole_formula)
1. MIT 8.901 lecture on GW radiation (http://www.mit.edu/~iancross/8901_2019A/lec005.pdf)
## <font color='Orange'>Imports, settings, and constants</font>
```
import numpy as np
#import matplotlib as mpl
import matplotlib.pyplot as plt
#import multiprocessing as mproc
#import scipy.signal as sig
import scipy.constants as scc
#import scipy.special as scsp
#import sys, time
from scipy.io import loadmat
# http://www.astropy.org/astropy-tutorials/Quantities.html
# http://docs.astropy.org/en/stable/constants/index.html
from astropy import constants as ascon
# Update the matplotlib configuration parameters:
plt.rcParams.update({'text.usetex': False,
'lines.linewidth': 4,
'font.family': 'serif',
'font.serif': 'Georgia',
'font.size': 22,
'xtick.direction': 'in',
'ytick.direction': 'in',
'xtick.labelsize': 'medium',
'ytick.labelsize': 'medium',
'axes.labelsize': 'medium',
'axes.titlesize': 'medium',
'axes.grid.axis': 'both',
'axes.grid.which': 'both',
'axes.grid': True,
'grid.color': 'xkcd:beige',
'grid.alpha': 0.253,
'lines.markersize': 12,
'legend.borderpad': 0.2,
'legend.fancybox': True,
'legend.fontsize': 'small',
'legend.framealpha': 0.8,
'legend.handletextpad': 0.5,
'legend.labelspacing': 0.33,
'legend.loc': 'best',
'figure.figsize': ((12, 8)),
'savefig.dpi': 140,
'savefig.bbox': 'tight',
'pdf.compression': 9})
def setGrid(ax):
ax.grid(which='major', alpha=0.6)
ax.grid(which='major', linestyle='solid', alpha=0.6)
cList = [(0, 0.1, 0.9),
(0.9, 0, 0),
(0, 0.7, 0),
(0, 0.8, 0.8),
(1.0, 0, 0.9),
(0.8, 0.8, 0),
(1, 0.5, 0),
(0.5, 0.5, 0.5),
(0.4, 0, 0.5),
(0, 0, 0),
(0.3, 0, 0),
(0, 0.3, 0)]
G = scc.G # N * m**2 / kg**2; gravitational constant
c = scc.c
```
## Terrestrial Dumbbell (Current Tech)
```
sigma_yield = 9000e6 # Yield strength of annealed silicon [Pa]
m_dumb = 100 # mass of the dumbell end [kg]
L_dumb = 10 # Length of the dumbell [m]
r_dumb = 1 # radius of the dumbell rod [m]
rho_pb = 11.34e3 # density of lead [kg/m^3]
r_ball = ((m_dumb / rho_pb)/(4/3 * np.pi))**(1/3)
f_rot = 1e3 / 2
lamduh = c / f_rot
v_dumb = 2*np.pi*(L_dumb/2) * f_rot
a_dumb = v_dumb**2 / (L_dumb / 2)
F = a_dumb * m_dumb
stress = F / (np.pi * r_dumb**2)
print('Ball radius is ' + '{:0.2f}'.format(r_ball) + ' m')
print(r'Acceleration of ball = ' + '{:0.2g}'.format(a_dumb) + r' m/s^2')
print('Stress = ' + '{:0.2f}'.format(stress/sigma_yield) + 'x Yield Stress')
```
#### Futuristic Dumbbell
```
sigma_yield = 5000e9 # ultimate tensile strength of ??? [Pa]
m_f = 1000 # mass of the dumbell end [kg]
L_f = 3000 # Length of the dumbell [m]
r_f = 40 # radius of the dumbell rod [m]
rho_pb = 11.34e3 # density of lead [kg/m^3]
r_b = ((m_f / rho_pb)/(4/3 * np.pi))**(1/3)  # ball radius from its mass and lead density
f_f = 37e3 / 2
lamduh_f = c / f_f
v_f = 2*np.pi*(L_f/2) * f_f
a_f = v_f**2 / (L_f / 2)
F = a_f * m_f
stress = F / (np.pi * r_f**2)
print('Ball radius = ' + '{:0.2f}'.format(r_b) + ' m')
print('Acceleration of ball = ' + '{:0.2g}'.format(a_f) + ' m/s**2')
print('Stress = ' + '{:0.2f}'.format(stress/sigma_yield) + 'x Yield Stress')
```
## <font color='Navy'>Radiation of a dumbbell</font>
The dumbbell is levitated from its middle point using a magnet, so we can spin it at any frequency without friction.
The quadrupole formula for the strain from this rotating dumbbell is:
$\ddot{I} = \omega^2 \frac{M R^2}{2}$
$\ddot{I} = \frac{1}{2} \sigma_{yield}~A~(L_{dumb} / 2)$
The resulting strain is:
$h = \frac{2 G}{c^4 r} \ddot{I}$
```
def h_of_f(omega_rotor, M_ball, d_earth_alien, L_rotor):
I_ddot = 1/2 * M_ball * (L_rotor/2)**2 * (omega_rotor**2)
h = (2*G)/(c**4 * d_earth_alien) * I_ddot
return h
r = 2 * lamduh # take the distance to be 2 x wavelength
#h_2020 = (2*G)/(c**4 * r) * (1/2 * m_dumb * (L_dumb/2)**2) * (2*np.pi*f_rot)**2
w_rot = 2 * np.pi * f_rot
h_2020 = h_of_f(w_rot, m_dumb, r, L_dumb)
d_ref = c * 3600*24*365 * 1000 # 1000 light years [m]
d = 1 * d_ref
h_2035 = h_of_f(w_rot, m_dumb, d, L_dumb)
print('Strain from a single (2018) dumbell is {h:0.3g} at a distance of {r:0.1f} km'.format(h=h_2020, r=r/1000))
print('Strain from a single (2018) dumbell is {h:0.3g} at a distance of {r:0.1f} kilo lt-years'.format(h=h_2035, r=d/d_ref))
r = 2 * lamduh_f # take the distance to be 2 x wavelength
h_f = (2*G)/(c**4 * r) * (1/2 * m_f * (L_f/2)**2) * (2*np.pi*f_f)**2
h_2345 = h_of_f(2*np.pi*f_f, m_f, d, L_f)
N_rotors = 100e6
print("Strain from a single (alien) dumbell is {h:0.3g} at a distance of {r:0.1f} kilo lt-years".format(h=h_2345, r=d/d_ref))
print("Strain from many many (alien) dumbells is " + '{:0.3g}'.format(N_rotors*h_2345) + ' at ' + str(1) + ' k lt-yr')
```
## <font color='Navy'>Phased Array</font>
Beam pattern for a 2D grid of rotating dumbbells
Treat them like point sources?
Make an array and add up all the spherical waves
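Those notes can be prototyped numerically: treat each rotor as a scalar point source and coherently sum spherical waves $e^{ikr}/r$ from a 2D grid, then compare the on-axis field to an off-axis point. The units below are dimensionless toy values (wavelength = 1), not a real design; at the 1 kHz GW frequency above the wavelength is ~300 km, so a half-wavelength-spaced array would be enormous. The `field_at` helper is an illustrative assumption:

```python
import numpy as np

# Dimensionless toy model: wavelength = 1, half-wavelength element spacing
lam = 1.0
k = 2 * np.pi / lam
spacing = lam / 2
n = 16  # 16 x 16 grid of point sources

# Element positions in the x-y plane, centered on the origin
xs = (np.arange(n) - (n - 1) / 2) * spacing
X, Y = np.meshgrid(xs, xs)

def field_at(x, y, z):
    """Coherent sum of scalar spherical waves e^{ikr}/r over every element."""
    r = np.sqrt((x - X)**2 + (y - Y)**2 + z**2)
    return np.sum(np.exp(1j * k * r) / r)

d = 1000.0  # far-field observation distance (>> aperture^2 / wavelength)
on_axis = abs(field_at(0.0, 0.0, d))
off_axis = abs(field_at(0.3 * d, 0.0, d))
print(on_axis / off_axis)  # coherent gain on axis
```

On axis all 256 elements add in phase, so the field is far stronger than a single element's 1/d; off axis the phases partially cancel.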
# Introduction to Linear Regression
*Adapted from Chapter 3 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)*
||continuous|categorical|
|---|---|---|
|**supervised**|**regression**|classification|
|**unsupervised**|dimension reduction|clustering|
## Motivation
Why are we learning linear regression?
- widely used
- runs fast
- easy to use (not a lot of tuning required)
- highly interpretable
- basis for many other methods
## Libraries
We will be using [Statsmodels](http://statsmodels.sourceforge.net/) for **teaching purposes** since it has some nice characteristics for linear modeling. However, we recommend that you spend most of your energy on [scikit-learn](http://scikit-learn.org/stable/) since it provides significantly more useful functionality for machine learning in general.
```
# imports
import pandas as pd
import matplotlib.pyplot as plt
# this allows plots to appear directly in the notebook
%matplotlib inline
```
## Example: Advertising Data
Let's take a look at some data, ask some questions about that data, and then use linear regression to answer those questions!
```
# read data into a DataFrame
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
data.head()
```
What are the **features**?
- TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
- Radio: advertising dollars spent on Radio
- Newspaper: advertising dollars spent on Newspaper
What is the **response**?
- Sales: sales of a single product in a given market (in thousands of widgets)
```
# print the shape of the DataFrame
data.shape
```
There are 200 **observations**, and thus 200 markets in the dataset.
```
# visualize the relationship between the features and the response using scatterplots
fig, axs = plt.subplots(1, 3, sharey=True)
data.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16, 8))
data.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1])
data.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2])
```
## Questions About the Advertising Data
Let's pretend you work for the company that manufactures and markets this widget. The company might ask you the following: On the basis of this data, how should we spend our advertising money in the future?
This general question might lead you to more specific questions:
1. Is there a relationship between ads and sales?
2. How strong is that relationship?
3. Which ad types contribute to sales?
4. What is the effect of each ad type on sales?
5. Given ad spending in a particular market, can sales be predicted?
We will explore these questions below!
## Simple Linear Regression
Simple linear regression is an approach for predicting a **quantitative response** using a **single feature** (or "predictor" or "input variable"). It takes the following form:
$y = \beta_0 + \beta_1x$
What does each term represent?
- $y$ is the response
- $x$ is the feature
- $\beta_0$ is the intercept
- $\beta_1$ is the coefficient for x
Together, $\beta_0$ and $\beta_1$ are called the **model coefficients**. To create your model, you must "learn" the values of these coefficients. And once we've learned these coefficients, we can use the model to predict Sales!
## Estimating ("Learning") Model Coefficients
Generally speaking, coefficients are estimated using the **least squares criterion**, which means we find the line that (mathematically) minimizes the **sum of squared residuals** (or "sum of squared errors"):
<img src="08_estimating_coefficients.png">
What elements are present in the diagram?
- The black dots are the **observed values** of x and y.
- The blue line is our **least squares line**.
- The red lines are the **residuals**, which are the distances between the observed values and the least squares line.
How do the model coefficients relate to the least squares line?
- $\beta_0$ is the **intercept** (the value of $y$ when $x$=0)
- $\beta_1$ is the **slope** (the change in $y$ divided by change in $x$)
Here is a graphical depiction of those calculations:
<img src="08_slope_intercept.png">
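Those calculations can be written out directly in NumPy (synthetic data, purely to illustrate the formulas):

```python
import numpy as np

# Synthetic (x, y) data for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.9])

# Least squares estimates:
#   beta1 = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
#   beta0 = ybar - beta1 * xbar
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
beta0 = y.mean() - beta1 * x.mean()
print(beta0, beta1)
```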
Let's use **Statsmodels** to estimate the model coefficients for the advertising data:
```
# this is the standard import if you're using "formula notation" (similar to R)
import statsmodels.formula.api as smf
# create a fitted model in one line
lm = smf.ols(formula='Sales ~ TV', data=data).fit()
# print the coefficients
lm.params
```
## Interpreting Model Coefficients
How do we interpret the TV coefficient ($\beta_1$)?
- A "unit" increase in TV ad spending is **associated with** a 0.047537 "unit" increase in Sales.
- Or more clearly: An additional $1,000 spent on TV ads is **associated with** an increase in sales of 47.537 widgets.
Note that if an increase in TV ad spending was associated with a **decrease** in sales, $\beta_1$ would be **negative**.
## Using the Model for Prediction
Let's say that there was a new market where the TV advertising spend was **$50,000**. What would we predict for the Sales in that market?
$$y = \beta_0 + \beta_1x$$
$$y = 7.032594 + 0.047537 \times 50$$
```
# manually calculate the prediction
7.032594 + 0.047537*50
```
Thus, we would predict Sales of **9,409 widgets** in that market.
Of course, we can also use Statsmodels to make the prediction:
```
# you have to create a DataFrame since the Statsmodels formula interface expects it
X_new = pd.DataFrame({'TV': [50]})
X_new.head()
# use the model to make predictions on a new value
lm.predict(X_new)
```
## Plotting the Least Squares Line
Let's make predictions for the **smallest and largest observed values of x**, and then use the predicted values to plot the least squares line:
```
# create a DataFrame with the minimum and maximum values of TV
X_new = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]})
X_new.head()
# make predictions for those x values and store them
preds = lm.predict(X_new)
preds
# first, plot the observed data
data.plot(kind='scatter', x='TV', y='Sales')
# then, plot the least squares line
plt.plot(X_new, preds, c='red', linewidth=2)
```
## Confidence in our Model
**Question:** Is linear regression a high bias/low variance model, or a low bias/high variance model?
**Answer:** High bias/low variance. Under repeated sampling, the line will stay roughly in the same place (low variance), but the average of those models won't do a great job capturing the true relationship (high bias). Note that low variance is a useful characteristic when you don't have a lot of training data!
A closely related concept is **confidence intervals**. Statsmodels calculates 95% confidence intervals for our model coefficients, which are interpreted as follows: If the population from which this sample was drawn was **sampled 100 times**, approximately **95 of those confidence intervals** would contain the "true" coefficient.
```
# print the confidence intervals for the model coefficients
lm.conf_int()
```
Keep in mind that we only have a **single sample of data**, and not the **entire population of data**. The "true" coefficient is either within this interval or it isn't, but there's no way to actually know. We estimate the coefficient with the data we do have, and we show uncertainty about that estimate by giving a range that the coefficient is **probably** within.
Note that using 95% confidence intervals is just a convention. You can create 90% confidence intervals (which will be more narrow), 99% confidence intervals (which will be wider), or whatever intervals you like.
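The repeated-sampling interpretation can be sketched by simulation: draw many samples from a model with a known coefficient and count how often the 95% interval covers it. Plain NumPy, with all numbers chosen for illustration only:

```python
import numpy as np

rng = np.random.RandomState(0)
true_beta1 = 0.05
x = rng.uniform(0, 300, size=200)

covered = 0
n_sims = 200
for _ in range(n_sims):
    # Simulate a fresh sample from the "true" model
    y = 7.0 + true_beta1 * x + rng.normal(0, 1, size=len(x))
    # OLS slope and its standard error
    xc = x - x.mean()
    beta1 = np.sum(xc * (y - y.mean())) / np.sum(xc**2)
    resid = y - (y.mean() + beta1 * xc)
    se = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum(xc**2))
    lo, hi = beta1 - 1.96 * se, beta1 + 1.96 * se
    covered += (lo <= true_beta1 <= hi)

print(covered / n_sims)  # close to 0.95
```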
## Hypothesis Testing and p-values
Closely related to confidence intervals is **hypothesis testing**. Generally speaking, you start with a **null hypothesis** and an **alternative hypothesis** (that is opposite the null). Then, you check whether the data supports **rejecting the null hypothesis** or **failing to reject the null hypothesis**.
(Note that "failing to reject" the null is not the same as "accepting" the null hypothesis. The alternative hypothesis may indeed be true, except that you just don't have enough data to show that.)
As it relates to model coefficients, here is the conventional hypothesis test:
- **null hypothesis:** There is no relationship between TV ads and Sales (and thus $\beta_1$ equals zero)
- **alternative hypothesis:** There is a relationship between TV ads and Sales (and thus $\beta_1$ is not equal to zero)
How do we test this hypothesis? Intuitively, we reject the null (and thus believe the alternative) if the 95% confidence interval **does not include zero**. Conversely, the **p-value** is the probability of observing data at least this extreme, assuming the coefficient really were zero:
```
# print the p-values for the model coefficients
lm.pvalues
```
If the 95% confidence interval **includes zero**, the p-value for that coefficient will be **greater than 0.05**. If the 95% confidence interval **does not include zero**, the p-value will be **less than 0.05**. Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.)
In this case, the p-value for TV is far less than 0.05, and so we **believe** that there is a relationship between TV ads and Sales.
Note that we generally ignore the p-value for the intercept.
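We can also check the meaning of the 0.05 cutoff by simulation. In the sketch below (**synthetic data**, plain NumPy, normal approximation instead of the exact t distribution), the null hypothesis is true by construction, and roughly 5% of samples still produce a p-value below 0.05:

```python
import math
import numpy as np

rng = np.random.RandomState(1)
n, n_trials = 50, 500
false_positives = 0
for _ in range(n_trials):
    x = rng.rand(n)
    y = rng.randn(n)  # the null is true: y has no relationship with x
    # fit simple OLS by hand
    xm = x - x.mean()
    slope = (xm @ (y - y.mean())) / (xm @ xm)
    resid = y - (y.mean() + slope * xm)
    se = math.sqrt((resid @ resid) / (n - 2) / (xm @ xm))
    # two-sided p-value, normal approximation
    p = math.erfc(abs(slope / se) / math.sqrt(2))
    false_positives += (p < 0.05)
print(false_positives / n_trials)  # close to 0.05
```

This is exactly the "100 pure-noise predictors, 5 significant on average" phenomenon discussed under feature selection below.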
## How Well Does the Model Fit the Data?
The most common way to evaluate the overall fit of a linear model is by the **R-squared** value. R-squared is the **proportion of variance explained**, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the **null model**. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.)
R-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model. Here's an example of what R-squared "looks like":
<img src="08_r_squared.png">
You can see that the **blue line** explains some of the variance in the data (R-squared=0.54), the **green line** explains more of the variance (R-squared=0.64), and the **red line** fits the training data even more closely (R-squared=0.66). (Does the red line look like it's overfitting?)
Let's calculate the R-squared value for our simple linear model:
```
# print the R-squared value for the model
lm.rsquared
```
Is that a "good" R-squared value? It's hard to say. What counts as a good R-squared value varies widely across domains. Therefore, it's most useful as a tool for **comparing different models**.
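To make the "reduction in error over the null model" definition concrete, here is a sketch on **synthetic data** that computes R-squared by hand, and verifies the textbook identity that for simple regression it equals the squared correlation between x and y:

```python
import numpy as np

rng = np.random.RandomState(2)
x = rng.rand(80)
y = 1 + 4 * x + rng.randn(80) * 0.5

# fit simple OLS by hand
xm = x - x.mean()
slope = (xm @ (y - y.mean())) / (xm @ xm)
fitted = y.mean() + slope * xm

ss_res = ((y - fitted) ** 2).sum()    # model's squared error
ss_tot = ((y - y.mean()) ** 2).sum()  # null model's squared error
r_squared = 1 - ss_res / ss_tot

# for simple regression, R^2 equals the squared correlation of x and y
print(np.isclose(r_squared, np.corrcoef(x, y)[0, 1] ** 2))  # True
```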
## Multiple Linear Regression
Simple linear regression can easily be extended to include multiple features. This is called **multiple linear regression**:
$y = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$
Each $x$ represents a different feature, and each feature has its own coefficient. In this case:
$y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$
Let's use Statsmodels to estimate these coefficients:
```
# create a fitted model with all three features
lm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()
# print the coefficients
lm.params
```
How do we interpret these coefficients? For a given amount of Radio and Newspaper ad spending, an **increase of $1000 in TV ad spending** is associated with an **increase in Sales of 45.765 widgets**.
A lot of the information we have been reviewing piece-by-piece is available in the model summary output:
```
# print a summary of the fitted model
lm.summary()
```
What are a few key things we learn from this output?
- TV and Radio have significant **p-values**, whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio (that there is no association between those features and Sales), and fail to reject the null hypothesis for Newspaper.
- TV and Radio ad spending are both **positively associated** with Sales, whereas Newspaper ad spending is **slightly negatively associated** with Sales. (However, this is irrelevant since we have failed to reject the null hypothesis for Newspaper.)
- This model has a higher **R-squared** (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV.
## Feature Selection
How do I decide **which features to include** in a linear model? Here's one idea:
- Try different models, and only keep predictors in the model if they have small p-values.
- Check whether the R-squared value goes up when you add new predictors.
What are the **drawbacks** to this approach?
- Linear models rely upon a lot of **assumptions** (such as the features being independent), and if those assumptions are violated (which they usually are), R-squared and p-values are less reliable.
- Using a p-value cutoff of 0.05 means that if you add 100 predictors to a model that are **pure noise**, 5 of them (on average) will still be counted as significant.
- R-squared is susceptible to **overfitting**, and thus there is no guarantee that a model with a high R-squared value will generalize. Below is an example:
```
# only include TV and Radio in the model
lm = smf.ols(formula='Sales ~ TV + Radio', data=data).fit()
lm.rsquared
# add Newspaper to the model (which we believe has no association with Sales)
lm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()
lm.rsquared
```
**R-squared will always increase as you add more features to the model**, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model.
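This monotone behavior is easy to reproduce on **synthetic data**: in the sketch below, in-sample R-squared never decreases as pure-noise columns are appended to the feature matrix one at a time:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(3)
X = rng.rand(100, 1)
y = 5 * X[:, 0] + rng.randn(100)

scores = []
for extra in range(4):  # append 0, 1, 2, 3 pure-noise columns
    X_aug = np.hstack([X, rng.rand(100, extra)])
    scores.append(LinearRegression().fit(X_aug, y).score(X_aug, y))

# in-sample R^2 is non-decreasing as features are added
print(all(b >= a - 1e-12 for a, b in zip(scores, scores[1:])))  # True
```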
There is an alternative to R-squared called **adjusted R-squared** that penalizes model complexity (to control for overfitting), but it generally [under-penalizes complexity](http://scott.fortmann-roe.com/docs/MeasuringError.html).
So is there a better approach to feature selection? **Cross-validation.** It provides a more reliable estimate of out-of-sample error, and thus is a better way to choose which of your models will best **generalize** to out-of-sample data. There is extensive functionality for cross-validation in scikit-learn, including automated methods for searching different sets of parameters and different models. Importantly, cross-validation can be applied to any model, whereas the methods described above only apply to linear models.
## Linear Regression in scikit-learn
Let's redo some of the Statsmodels code above in scikit-learn:
```
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper']
X = data[feature_cols]
y = data.Sales
# follow the usual sklearn pattern: import, instantiate, fit
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X, y)
# print intercept and coefficients
print(lm.intercept_)
print(lm.coef_)
# pair the feature names with the coefficients
list(zip(feature_cols, lm.coef_))
# predict for a new observation (note the nested list: a single row of features)
lm.predict([[100, 25, 25]])
# calculate the R-squared
lm.score(X, y)
```
Note that **p-values** and **confidence intervals** are not (easily) accessible through scikit-learn.
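If you need them anyway, standard errors (and thus t-statistics) can be reconstructed by hand from the classical OLS formula $Var(\hat{\beta}) = \sigma^2 (X^TX)^{-1}$. The sketch below uses **synthetic data** and assumes the usual linear regression assumptions hold:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(4)
X = rng.rand(100, 2)
y = 1 + 2 * X[:, 0] + rng.randn(100) * 0.5  # second feature is pure noise

lm = LinearRegression().fit(X, y)
X1 = np.column_stack([np.ones(len(X)), X])         # design matrix with intercept
resid = y - lm.predict(X)
sigma2 = (resid @ resid) / (len(X) - X1.shape[1])  # unbiased error variance
cov = sigma2 * np.linalg.inv(X1.T @ X1)
se = np.sqrt(np.diag(cov))                         # SEs for intercept, x0, x1
t_stats = np.append(lm.intercept_, lm.coef_) / se
print(t_stats)
```

For anything beyond a quick check, Statsmodels computes these (plus p-values and confidence intervals) for you.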
## Handling Categorical Predictors with Two Categories
Up to now, all of our predictors have been numeric. What if one of our predictors was categorical?
Let's create a new feature called **Size**, and randomly assign observations to be **small or large**:
```
import numpy as np
# set a seed for reproducibility
np.random.seed(12345)
# create a Series of booleans in which roughly half are True
nums = np.random.rand(len(data))
mask_large = nums > 0.5
# initially set Size to small, then change roughly half to be large
data['Size'] = 'small'
data.loc[mask_large, 'Size'] = 'large'
data.head()
```
For scikit-learn, we need to represent all data **numerically**. If the feature only has two categories, we can simply create a **dummy variable** that represents the categories as a binary value:
```
# create a new Series called IsLarge
data['IsLarge'] = data.Size.map({'small':0, 'large':1})
data.head()
```
Let's redo the multiple linear regression and include the **IsLarge** predictor:
```
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge']
X = data[feature_cols]
y = data.Sales
# instantiate, fit
lm = LinearRegression()
lm.fit(X, y)
# print coefficients
list(zip(feature_cols, lm.coef_))
```
How do we interpret the **IsLarge coefficient**? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average **increase** in Sales of 57.42 widgets (as compared to a Small market, which is called the **baseline level**).
What if we had reversed the 0/1 coding and created the feature 'IsSmall' instead? The coefficient would be the same, except it would be **negative instead of positive**. As such, your choice of category for the baseline does not matter, all that changes is your **interpretation** of the coefficient.
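This sign-flip claim is easy to verify on **synthetic data**: the sketch below fits the same model twice, once with a 0/1 dummy and once with its reversed coding, and compares the two coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(5)
x_num = rng.rand(100)
is_large = (rng.rand(100) > 0.5).astype(int)
y = 2 + 3 * x_num + 0.5 * is_large + rng.randn(100) * 0.1

X_large = np.column_stack([x_num, is_large])
X_small = np.column_stack([x_num, 1 - is_large])  # reversed 0/1 coding

coef_large = LinearRegression().fit(X_large, y).coef_[1]
coef_small = LinearRegression().fit(X_small, y).coef_[1]
print(np.isclose(coef_large, -coef_small))  # True
```

The intercept absorbs the recoding, so only the dummy's sign changes.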
## Handling Categorical Predictors with More than Two Categories
Let's create a new feature called **Area**, and randomly assign observations to be **rural, suburban, or urban**:
```
# set a seed for reproducibility
np.random.seed(123456)
# assign roughly one third of observations to each group
nums = np.random.rand(len(data))
mask_suburban = (nums > 0.33) & (nums < 0.66)
mask_urban = nums > 0.66
data['Area'] = 'rural'
data.loc[mask_suburban, 'Area'] = 'suburban'
data.loc[mask_urban, 'Area'] = 'urban'
data.head()
```
We have to represent Area numerically, but we can't simply code it as 0=rural, 1=suburban, 2=urban, because that would imply an **ordered relationship** between the categories (as if urban were somehow "twice" suburban).
Instead, we create **two more dummy variables**:
```
# create three dummy variables using get_dummies, then exclude the first dummy column
area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:]
# concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns)
data = pd.concat([data, area_dummies], axis=1)
data.head()
```
Here is how we interpret the coding:
- **rural** is coded as Area_suburban=0 and Area_urban=0
- **suburban** is coded as Area_suburban=1 and Area_urban=0
- **urban** is coded as Area_suburban=0 and Area_urban=1
Why do we only need **two dummy variables, not three?** Because two dummies capture all of the information about the Area feature, and implicitly define rural as the baseline level. (In general, if you have a categorical feature with k levels, you create k-1 dummy variables.)
If this is confusing, think about why we only needed one dummy variable for Size (IsLarge), not two dummy variables (IsSmall and IsLarge).
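As an aside, pandas can apply this k-1 rule for you: `get_dummies` accepts a `drop_first=True` parameter that drops the first (baseline) level, equivalent to the `.iloc[:, 1:]` slicing used earlier:

```python
import pandas as pd

area = pd.Series(['rural', 'suburban', 'urban', 'rural'])
# drop_first=True drops the first level (rural), which becomes the baseline
dummies = pd.get_dummies(area, prefix='Area', drop_first=True)
print(list(dummies.columns))  # ['Area_suburban', 'Area_urban']
```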
Let's include the two new dummy variables in the model:
```
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge', 'Area_suburban', 'Area_urban']
X = data[feature_cols]
y = data.Sales
# instantiate, fit
lm = LinearRegression()
lm.fit(X, y)
# print coefficients
list(zip(feature_cols, lm.coef_))
```
How do we interpret the coefficients?
- Holding all other variables fixed, being a **suburban** area is associated with an average **decrease** in Sales of 106.56 widgets (as compared to the baseline level, which is rural).
- Being an **urban** area is associated with an average **increase** in Sales of 268.13 widgets (as compared to rural).
**A final note about dummy encoding:** If you have categories that can be ranked (i.e., strongly disagree, disagree, neutral, agree, strongly agree), you can potentially represent them as a single numeric feature (such as 1, 2, 3, 4, 5) instead of creating dummy variables.
## What Didn't We Cover?
- Detecting collinearity
- Diagnosing model fit
- Transforming predictors to fit non-linear relationships
- Interaction terms
- Assumptions of linear regression
- And so much more!
You could certainly go very deep into linear regression, and learn how to apply it really, really well. It's an excellent way to **start your modeling process** when working on a regression problem. However, it is limited by the fact that it can only make good predictions if there is a **linear relationship** between the features and the response, which is why more complex methods (with higher variance and lower bias) will often outperform linear regression.
Therefore, we want you to understand linear regression conceptually, understand its strengths and weaknesses, be familiar with the terminology, and know how to apply it. However, we also want to spend time on many other machine learning models, which is why we aren't going deeper here.
## Resources
- To go much more in-depth on linear regression, read Chapter 3 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), from which this lesson was adapted. Alternatively, watch the [related videos](http://www.dataschool.io/15-hours-of-expert-machine-learning-videos/) or read my [quick reference guide](http://www.dataschool.io/applying-and-interpreting-linear-regression/) to the key points in that chapter.
- To learn more about Statsmodels and how to interpret the output, DataRobot has some decent posts on [simple linear regression](http://www.datarobot.com/blog/ordinary-least-squares-in-python/) and [multiple linear regression](http://www.datarobot.com/blog/multiple-regression-using-statsmodels/).
- This [introduction to linear regression](http://people.duke.edu/~rnau/regintro.htm) is much more detailed and mathematically thorough, and includes lots of good advice.
- This is a relatively quick post on the [assumptions of linear regression](http://pareonline.net/getvn.asp?n=2&v=8).
# Testing Configurations
The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.
**Prerequisites**
* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).
## Configuration Options
When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation.
One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.
As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
```
!grep --help
```
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.
## Options in Python
Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`).
By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integers` arguments added earlier. Special actions (`actions`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied to these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
```
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
```
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
```
process_numbers(["--min", "100", "200", "300"])
```
Or compute the sum of three numbers:
```
process_numbers(["--sum", "1", "2", "3"])
```
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
```
import fuzzingbook_utils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
```
## A Grammar for Configurations
How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
```
from Grammars import crange, srange, convert_ebnf_grammar, is_valid_grammar, START_SYMBOL, new_symbol
PROCESS_NUMBERS_EBNF_GRAMMAR = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
```
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
```
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
```
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
```
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
```
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_
## Mining Configuration Options
In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.
Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.
### Tracking Arguments
Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
```
import sys
import string
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
```
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
```
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
```
From the `args` argument, we can access the individual options and arguments to be defined:
```
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
```
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.
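Before building the tracing machinery, here is a quick sanity check of what `argparse` records: the parser keeps each registered argument in its `_actions` list. This is an **internal, undocumented attribute** – a fragile shortcut, not a substitute for the tracing approach – but it shows that the names, `nargs`, and `type` we are after are all available:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('integers', type=int, nargs='+')
parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum)

# _actions is internal argparse state; fine for a quick look only
for action in parser._actions:
    print(action.option_strings or [action.dest], action.nargs, action.type)
```

Note that the automatically added `-h`/`--help` action shows up here as well, just as it did in the trace output above.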
### A Grammar Miner for Options and Arguments
Let us now build a class that gathers all this information to create a grammar.
We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
```
class ParseInterrupt(Exception):
pass
```
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
```
class OptionGrammarMiner(object):
def __init__(self, function, log=False):
self.function = function
self.log = log
```
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form
```
<start> ::= <option>* <arguments>
<option> ::= <empty>
<arguments> ::= <empty>
```
in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
```
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
```
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate.
Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
```
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
```
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:
- If the argument starts with `-`, it gets added as an optional element to the `<option>` list
- Otherwise, it gets added to the `<arguments>` list.
The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.
Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
```
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        self.grammar[target].append(arg)
```
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
```
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
```
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
```
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
```
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in
```
<start> ::= <group><option>* <arguments>
<group> ::= <empty>
```
and filled with the next calls to `add_argument()` within the group.
```
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
```
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
```
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
```
Here is the extracted grammar:
```
process_numbers_grammar
```
The grammar properly identifies the group found:
```
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
```
It also identifies a `--help` option provided not by us, but by the `argparse` module:
```
process_numbers_grammar["<option>"]
```
The grammar also correctly identifies the types of the arguments:
```
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
```
The rules for `int` are set as defined by `add_int_rule()`
```
process_numbers_grammar["<int>"]
```
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
```
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
```
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar.
## Testing Autopep8
Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
```
!autopep8 --help
```
### Autopep8 Setup
We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
```
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
```
Next, we build a function that reads the contents of the file and executes it.
```
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
```
### Mining an Autopep8 Grammar
We can use the `autopep8()` function in our grammar miner:
```
autopep8_miner = OptionGrammarMiner(autopep8)
```
and extract a grammar for it:
```
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
```
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in.
The grammar options mined reflect precisely the options seen when providing `--help`:
```
print(autopep8_ebnf_grammar["<option>"])
```
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
```
autopep8_ebnf_grammar["<line>"]
```
The grammar miner has inferred that the argument to `autopep8` is a list of files:
```
autopep8_ebnf_grammar["<arguments>"]
```
which in turn all are strings:
```
autopep8_ebnf_grammar["<files>"]
```
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
```
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
```
### Creating Autopep8 Options
Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
```
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
```
And we can use the grammar for fuzzing all options:
```
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
    print(f.fuzz())
```
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input:
```
def create_foo_py():
    open("foo.py", "w").write("""
def twice(x = 2):
    return x + x
""")

create_foo_py()
print(open("foo.py").read(), end="")
```
We see how `autopep8` fixes the spacing:
```
!autopep8 foo.py
```
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
```
from Fuzzer import ProgramRunner
```
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
```
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
    invocation = "autopep8" + f.fuzz()
    print("$ " + invocation)
    args = invocation.split()
    autopep8 = ProgramRunner(args)
    result, outcome = autopep8.run()
    if result.stderr != "":
        print(result.stderr, end="")
```
Our `foo.py` file now has been formatted in place a number of times:
```
print(open("foo.py").read(), end="")
```
We don't need it anymore, so we clean up things:
```
import os
os.remove("foo.py")
```
## Classes for Fuzzing Configuration Options
Let us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.")
The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
```
class OptionRunner(ProgramRunner):
    def __init__(self, program, arguments=None):
        if isinstance(program, str):
            self.base_executable = program
        else:
            self.base_executable = program[0]
        self.find_contents()
        self.find_grammar()
        if arguments is not None:
            self.set_arguments(arguments)
        super().__init__(program)
```
First, we find the contents of the Python executable:
```
class OptionRunner(OptionRunner):
    def find_contents(self):
        self._executable = find_executable(self.base_executable)
        first_line = open(self._executable).readline()
        assert first_line.find("python") >= 0
        self.contents = open(self._executable).read()

    def invoker(self):
        exec(self.contents)

    def executable(self):
        return self._executable
```
Next, we determine the grammar using the `OptionGrammarMiner` class:
```
class OptionRunner(OptionRunner):
    def find_grammar(self):
        miner = OptionGrammarMiner(self.invoker)
        self._ebnf_grammar = miner.mine_ebnf_grammar()

    def ebnf_grammar(self):
        return self._ebnf_grammar

    def grammar(self):
        return convert_ebnf_grammar(self._ebnf_grammar)
```
The two service methods `set_arguments()` and `set_invocation()` help us change the arguments and the program, respectively.
```
class OptionRunner(OptionRunner):
    def set_arguments(self, args):
        self._ebnf_grammar["<arguments>"] = [" " + args]

    def set_invocation(self, program):
        self.program = program
```
We can instantiate the class on `autopep8` and immediately get the grammar:
```
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
```
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
```
class OptionFuzzer(GrammarCoverageFuzzer):
    def __init__(self, runner, *args, **kwargs):
        assert issubclass(type(runner), OptionRunner)
        self.runner = runner
        grammar = runner.grammar()
        super().__init__(grammar, *args, **kwargs)
```
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the given (or previously set) runner with it. Note that the runner specified in `run()` can differ from the one set during initialization; this allows mining options from one program and applying them in another context.
```
class OptionFuzzer(OptionFuzzer):
    def run(self, runner=None, inp=""):
        if runner is None:
            runner = self.runner
        assert issubclass(type(runner), OptionRunner)
        invocation = runner.executable() + " " + self.fuzz()
        runner.set_invocation(invocation.split())
        return runner.run(inp)
```
### Example: Autopep8
Let us apply this on the `autopep8` runner:
```
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
    print(autopep8_fuzzer.fuzz())
```
We can now systematically test `autopep8` with these classes:
```
autopep8_fuzzer.run(autopep8_runner)
```
### Example: MyPy
We can extract options for the `mypy` static type checker for Python:
```
assert find_executable("mypy") is not None

mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])

mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
    print(mypy_fuzzer.fuzz())
```
### Example: Notedown
Here's the configuration options for the `notedown` Notebook to Markdown converter:
```
assert find_executable("notedown") is not None

notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])

notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
    print(notedown_fuzzer.fuzz())
```
## Combinatorial Testing
Our `GrammarCoverageFuzzer` does a good job of covering each and every option at least once, which is great for systematic testing. However, as we can also see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is to cover not only every option individually, but also _combinations_ of options.
The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
```
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
```
There's quite a number of pairs:
```
len(pairs)
print(pairs[:20])
```
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs.
We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
```
def pairwise(option_list):
    return [option_1 + option_2
            for (option_1, option_2) in combinations(option_list, 2)]
```
Here's the first 20 pairs:
```
print(pairwise(option_list)[:20])
```
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
```
from copy import deepcopy
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = deepcopy(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
```
Using the "pairwise" grammar to fuzz now covers one pair after another:
```
notedown_fuzzer = GrammarCoverageFuzzer(
    pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
    print(notedown_fuzzer.fuzz())
```
Can we actually test all combinations of options? Not in practice, as the number of combinations grows quickly with the combination length. It decreases again as the combination length approaches the total number of options (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
```
for combination_length in range(1, 20):
    tuples = list(combinations(option_list, combination_length))
    print(combination_length, len(tuples))
```
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient
$$
{n \choose k} = \frac{n!}{k!(n - k)!}
$$
which for $k = 2$ (all pairs) gives us
$$
{n \choose 2} = \frac{n!}{2!(n - 2)!} = \frac{n \times (n - 1)}{2}
$$
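As a sanity check, the closed form matches Python's built-in binomial coefficient for a range of values of $n$:

```python
from math import comb, factorial

for n in range(2, 30):
    # n choose 2, computed three equivalent ways
    assert comb(n, 2) == factorial(n) // (factorial(2) * factorial(n - 2))
    assert comb(n, 2) == n * (n - 1) // 2
```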
For `autopep8` with its 29 options...
```
len(autopep8_runner.ebnf_grammar()["<option>"])
```
... we thus need $29 \times 28 / 2 = 406$ tests to cover all pairs:
```
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
    (len(autopep8_runner.ebnf_grammar()["<option>"]) - 1) // 2
```
For `mypy` with its 110 options, though, we already end up with 5,995 tests to be conducted:
```
len(mypy_runner.ebnf_grammar()["<option>"])

len(mypy_runner.ebnf_grammar()["<option>"]) * \
    (len(mypy_runner.ebnf_grammar()["<option>"]) - 1) // 2
```
Even if each pair takes only a second to run, the whole suite would still finish within a couple of hours of testing.
If your program has more options whose combinations you want covered, it is advisable to limit the number of configurations further – for instance, by restricting combinatorial testing to those combinations that can possibly interact with each other, and covering all other (presumably orthogonal) options individually.
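If pairs are not enough (or too many), `pairwise()` generalizes naturally to arbitrary combination lengths. The `k_wise()` helper below is a hypothetical extension, not part of the chapter's code; for larger $k$, covering arrays would be a more economical choice than enumerating all $k$-subsets:

```python
from itertools import combinations

def k_wise(option_list, k=2):
    """Concatenate every k-subset of options; k_wise(l, 2) == pairwise(l)."""
    return ["".join(opts) for opts in combinations(option_list, k)]

options = [" -a", " -b", " -c", " -d"]
assert k_wise(options, 2) == [" -a -b", " -a -c", " -a -d",
                              " -b -c", " -b -d", " -c -d"]
assert len(k_wise(options, 3)) == 4  # C(4, 3) triples
```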
This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you.
## Lessons Learned
* Besides regular input data, program _configurations_ make an important testing target.
* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.
* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.
## Next Steps
If you liked the idea of mining a grammar from a program, do not miss:
* [how to mine grammars for input data](GrammarMiner.ipynb)
Our next steps in the book focus on:
* [how to parse and recombine inputs](Parser.ipynb)
* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)
* [how to simplify inputs that cause a failure](Reducer.ipynb)
## Background
Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.
More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files.
## Exercises
### Exercise 1: #ifdef Configuration Fuzzing
In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code
```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```
the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`).
Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:
```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
```
A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments.
Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.
#### Part 1: Extract Preprocessor Variables
Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that
```python
cpp_identifiers(open("xmlparse.c").readlines())
```
returns the set
```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```
**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
```
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
```
To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.
```
import re

re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")

def cpp_identifiers(lines):
    identifiers = set()
    for line in lines:
        if re_cpp_if_directive.match(line):
            identifiers |= set(re_cpp_identifier.findall(line))
    # These are preprocessor keywords
    identifiers -= {"if", "ifdef", "ifndef", "defined"}
    return identifiers

cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
```
#### Part 2: Derive an Option Grammar
With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<variable>` for a preprocessor variable `<variable>`. Using this grammar `cpp_grammar`, a fuzzer
```python
g = GrammarCoverageFuzzer(cpp_grammar)
```
would create C compiler invocations such as
```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
'cc -DXML_POOR_ENTROPY xmlparse.c',
'cc -DRANDOM xmlparse.c',
'cc -D_WIN xmlparse.c',
'cc -DHAVE_ARC xmlparse.c', ...]
```
**Solution.** This is not very difficult:
```
from Grammars import new_symbol
cpp_grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
    s = new_symbol(cpp_grammar, "<" + id + ">")
    cpp_grammar["<option>"].append(s)
    cpp_grammar[s] = [" -D" + id]
cpp_grammar
assert is_valid_grammar(cpp_grammar)
```
#### Part 3: C Preprocessor Configuration Fuzzing
Using the grammar just produced, use a `GrammarCoverageFuzzer` to
1. Test each processor variable individually
2. Test each pair of processor variables, using `pairwise()`.
What happens if you actually run the invocations?
**Solution.** We can simply run the coverage fuzzer, as described above.
```
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()

from Fuzzer import ProgramRunner

for i in range(10):
    invocation = g.fuzz()
    print("$", invocation)
    # subprocess.call(invocation, shell=True)
    cc_runner = ProgramRunner(invocation.split(' '))
    (result, outcome) = cc_runner.run()
    print(result.stderr, end="")
```
To test all pairs, we can use `pairwise()`:
```
pairwise_cpp_grammar = deepcopy(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]

# Note: we need a fresh fuzzer for the pairwise grammar
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)
for i in range(10):
    invocation = g.fuzz()
    print("$", invocation)
    # subprocess.call(invocation, shell=True)
    cc_runner = ProgramRunner(invocation.split(' '))
    (result, outcome) = cc_runner.run()
    print(result.stderr, end="")
```
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when actually, the type is not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above.
At the end, don't forget to clean up:
```
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
    os.remove("xmlparse.o")
```
### Exercise 2: .ini Configuration Fuzzing
Besides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files.
The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):
```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes
[bitbucket.org]
User = hg
[topsecret.server.com]
Port = 50022
ForwardX11 = no
```
The above `ConfigParser` file can be created programmatically:
```
import configparser

config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
                     'Compression': 'yes',
                     'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022'     # mutates the parser
topsecret['ForwardX11'] = 'no'  # same here
config['DEFAULT']['ForwardX11'] = 'yes'

with open('example.ini', 'w') as configfile:
    config.write(configfile)

with open('example.ini') as configfile:
    print(configfile.read(), end="")
```
and be read in again:
```
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
```
#### Part 1: Read Configuration
Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.
#### Part 2: Create a Configuration Grammar
Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it.
#### Part 3: Mine a Configuration Grammar
By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
```
class TrackingConfigParser(configparser.ConfigParser):
    def __getitem__(self, key):
        print("Accessing", repr(key))
        return super().__getitem__(key)
```
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
```
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
```
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing.
At the end, don't forget to clean up:
```
import os
os.remove("example.ini")
```
**Solution.** Left to the reader. Enjoy!
### Exercise 3: Extracting and Fuzzing C Command-Line Options
In C programs, the `getopt()` function is frequently used to process configuration options. A call
```
getopt(argc, argv, "bf:")
```
indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).
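To make the relationship between format string and options concrete, here is a minimal sketch of parsing such a string (a hypothetical `getopt_options()` helper; `getopt()` extensions such as a leading `:` or GNU-specific characters are not handled):

```python
def getopt_options(spec):
    """Parse a getopt() format string (e.g. "bf:") into a list of
    (option, takes_argument) pairs."""
    options = []
    i = 0
    while i < len(spec):
        # a trailing ':' marks an option that takes an argument
        takes_arg = i + 1 < len(spec) and spec[i + 1] == ":"
        options.append(("-" + spec[i], takes_arg))
        i += 2 if takes_arg else 1
    return options

assert getopt_options("bf:") == [("-b", False), ("-f", True)]
```

From such pairs, a fuzzing grammar can be derived just as for `argparse` options above.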
#### Part 1: Getopt Fuzzing
Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:
1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.
Apply this on `grep` and `ls`; report the resulting grammars and results.
**Solution.** Left to the reader. Enjoy hacking!
#### Part 2: Fuzzing Long Options in C
Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separate structure.
**Solution.** Left to the reader. Enjoy hacking!
### Exercise 4: Expansions in Context
In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:
```
<option> ::= ... | --line-range <line> <line> | ...
<line> ::= <int>
<int> ::= (-)?<digit>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
```
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
```
Once the `GrammarCoverageFuzzer` has covered all variations of `<int>` (especially by covering all digits) for _one_ option, though, it will no longer strive to achieve such coverage for the next option. Yet, it could be desirable to achieve such coverage for each option separately.
One way to achieve this with our existing `GrammarCoverageFuzzer` is again to change the grammar accordingly. The idea is to _duplicate_ expansions – that is, to replace an expansion of a symbol $s$ with a new symbol $s'$ whose definition is duplicated from $s$. This way, $s'$ and $s$ are separate symbols from a coverage point of view and would be independently covered.
As an example, consider again the above `--line-range` option. If we want our tests to independently cover all elements of the two `<line>` parameters, we can duplicate the second `<line>` expansion into a new symbol `<line'>` with subsequent duplicated expansions:
```
<option> ::= ... | --line-range <line> <line'> | ...
<line> ::= <int>
<line'> ::= <int'>
<int> ::= (-)?<digit>+
<int'> ::= (-)?<digit'>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<digit'> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
Design a function `inline(grammar, symbol)` that returns a duplicate of `grammar` in which every occurrence of `<symbol>` and its expansions become separate copies. The above grammar could be a result of `inline(autopep8_runner.ebnf_grammar(), "<line>")`.
When copying, expansions in the copy should also refer to symbols in the copy. Hence, when expanding `<int>` in
```<int> ::= <int><digit>```
make that
```<int> ::= <int><digit>
<int'> ::= <int'><digit'>
```
(and not `<int'> ::= <int><digit'>` or `<int'> ::= <int><digit>`).
Be sure to add precisely one new set of symbols for each occurrence in the original grammar, and not to expand further in the presence of recursion.
**Solution.** Again, left to the reader. Enjoy!
# The Discrete Fourier Transform
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Fast Convolution
The linear convolution of signals is a basic building block in many practical applications. The straightforward convolution of two finite-length signals $x[k]$ and $h[k]$ has considerable numerical complexity. This has led to the development of various algorithms that realize the convolution with lower complexity. The basic concept of the *fast convolution* is to exploit the [convolution theorem](theorems.ipynb#Convolution-Theorem) of the discrete Fourier transform (DFT). This theorem states that the periodic convolution of two signals is equal to a scalar multiplication of their spectra. The scalar multiplication requires considerably fewer numerical operations than the convolution. The transformation of the signals can be performed efficiently by the [fast Fourier transform](fast_fourier_transform.ipynb) (FFT).
Since the scalar multiplication of the spectra realizes a periodic convolution, special care has to be taken to realize a linear convolution in the spectral domain. The equivalence between linear and periodic convolution is discussed in the following.
### Equivalence of Linear and Periodic Convolution
The [linear convolution](../discrete_systems/linear_convolution.ipynb#Finite-Length-Signals) of a causal signal $x_L[k]$ of length $L$ with a causal signal $h_N[k]$ of length $N$ reads
\begin{equation}
y[k] = x_L[k] * h_N[k] = \sum_{\kappa = 0}^{L-1} x_L[\kappa] \; h_N[k - \kappa] = \sum_{\kappa = 0}^{N-1} h_N[\kappa] \; x_L[k - \kappa]
\end{equation}
The resulting signal $y[k]$ is of finite length $M = N+L-1$. Without loss of generality it is assumed in the following that $N \leq L$. The computation of $y[k]$ for $k=0,1, \dots, M-1$ requires $M \cdot N$ multiplications and $M \cdot (N-1)$ additions. The computational complexity of the convolution is consequently [on the order of](https://en.wikipedia.org/wiki/Big_O_notation) $\mathcal{O}(M \cdot N)$.
The periodic convolution of the two signals $x_L[k]$ and $h_N[k]$ is defined as
\begin{equation}
x_L[k] \circledast_P h_N[k] = \sum_{\kappa = 0}^{N-1} h_N[\kappa] \cdot \tilde{x}[k-\kappa]
\end{equation}
where $\tilde{x}[k]$ denotes the periodic summation of $x_L[k]$ with period $P$
\begin{equation}
\tilde{x}[k] = \sum_{\nu = -\infty}^{\infty} x_L[k - \nu P]
\end{equation}
The result of the circular convolution is periodic with period $P$. To compute the linear convolution by a periodic convolution, one has to take care that the result of the linear convolution fits into one period of the periodic convolution. Hence, the periodicity has to be chosen as $P \geq M$ where $M = N+L-1$. This can be achieved by zero-padding of $x_L[k]$ to the total length $M$ resulting in the signal $x_M[k]$ of length $M$ which is defined as
\begin{equation}
x_M[k] = \begin{cases}
x_L[k] & \text{for } 0 \leq k < L \\
0 & \text{for } L \leq k < M
\end{cases}
\end{equation}
and similar for $h_N[k]$ resulting in the zero-padded signal $h_M[k]$ which is defined as
\begin{equation}
h_M[k] = \begin{cases}
h_N[k] & \text{for } 0 \leq k < N \\
0 & \text{for } N \leq k < M
\end{cases}
\end{equation}
Using these signals, the linear and periodic convolution are equivalent for the first $M$ samples $k = 0,1,\dots, M-1$
\begin{equation}
x_L[k] * h_N[k] = x_M[k] \circledast_M h_M[k]
\end{equation}
#### Example
The following example computes the linear, periodic and linear by periodic convolution of two signals $x[k] = \text{rect}_L[k]$ and $h[k] = \text{rect}_N[k]$.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from tools import cconv
L = 8 # length of signal x[k]
N = 10 # length of signal h[k]
P = 14 # periodicity of periodic convolution
# generate signals
x = np.ones(L)
h = np.ones(N)
# linear convolution
y1 = np.convolve(x, h, 'full')
# periodic convolution
y2 = cconv(x, h, P)
# linear convolution via periodic convolution
xp = np.append(x, np.zeros(N-1))
hp = np.append(h, np.zeros(L-1))
y3 = cconv(xp, hp, L+N-1)
# plot results
def plot_signal(x):
    plt.stem(x)
    plt.xlabel('$k$')
    plt.ylabel('$y[k]$')
    plt.xlim([0, N+L])
    plt.gca().margins(y=0.1)
plt.figure(figsize = (10, 8))
plt.subplot(3,1,1)
plot_signal(y1)
plt.title('Linear convolution')
plt.subplot(3,1,2)
plot_signal(y2)
plt.title('Periodic convolution with period $P=%d$'%P)
plt.subplot(3,1,3)
plot_signal(y3)
plt.title('Linear convolution as periodic convolution')
plt.tight_layout()
```
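The example above imports `cconv` from a course-local `tools` module. If that module is not at hand, a periodic convolution with period $P$ can be sketched directly via the FFT, using the fact that scalar multiplication of DFT spectra realizes a circular convolution (`cconv_sketch` below is a stand-in, not the course implementation):

```python
import numpy as np

def cconv_sketch(x, h, P):
    """Periodic convolution with period P (a stand-in for tools.cconv)."""
    xp, hp = np.zeros(P), np.zeros(P)
    for k, v in enumerate(x):   # periodic summation of x with period P
        xp[k % P] += v
    for k, v in enumerate(h):   # periodic summation of h with period P
        hp[k % P] += v
    return np.real(np.fft.ifft(np.fft.fft(xp) * np.fft.fft(hp)))

# with P >= L + N - 1, the periodic convolution equals the linear one
y = cconv_sketch(np.ones(3), np.ones(4), 6)
assert np.allclose(y, np.convolve(np.ones(3), np.ones(4), 'full'))
```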
**Exercise**
* Change the lengths `L`, `N` and `P` and check how the results for the different convolutions change.
### The Fast Convolution Algorithm
Using the above derived equality of the linear and periodic convolution one can express the linear convolution $y[k] = x_L[k] * h_N[k]$ by the DFT as
$$ y[k] = \text{IDFT}_M \{ \; \text{DFT}_M\{ x_M[k] \} \cdot \text{DFT}_M\{ h_M[k] \} \; \} $$
The resulting algorithm is composed of the following steps
1. Zero-padding of the two input signals $x_L[k]$ and $h_N[k]$ to at least a total length of $M \geq N+L-1$
2. Computation of the DFTs $X[\mu]$ and $H[\mu]$ using a FFT of length $M$
3. Multiplication of the spectra $Y[\mu] = X[\mu] \cdot H[\mu]$
4. Inverse DFT of $Y[\mu]$ using an inverse FFT of length $M$
The algorithm requires two DFTs of length $M$, $M$ complex multiplications and one IDFT of length $M$. At first sight this does not seem to be an improvement, since one DFT/IDFT requires $M^2$ complex multiplications and $M \cdot (M-1)$ complex additions. The overall numerical complexity is hence on the order of $\mathcal{O}(M^2)$. The DFT can be realized efficiently by the [fast Fourier transform](fast_fourier_transform.ipynb) (FFT), which lowers the number of numerical operations for each DFT/IDFT significantly. The actual gain depends on the particular implementation of the FFT. Many FFTs are most efficient for lengths which are a power of two. It therefore can make sense, in terms of the number of numerical operations, to choose $M$ as a power of two instead of the shortest possible length $N+L-1$. In this case, the numerical complexity of the radix-2 algorithm is on the order of $\mathcal{O}(M \log_2 M)$.
The introduced algorithm is known as *fast convolution* due to its computational efficiency when realized by the FFT. For real valued signals $x[k] \in \mathbb{R}$ and $h[k] \in \mathbb{R}$ the number of numerical operations can be reduced further by using a real valued FFT.
#### Example
The implementation of the fast convolution algorithm is straightforward. In the following example the fast convolution of two real-valued signals $x[k] = \text{rect}_L[k]$ and $h[k] = \text{rect}_N[k]$ is shown. Since both signals are real valued, the real-valued FFT/IFFT is used. Most implementations of the FFT include the zero-padding to a given length $M$, e.g. in `numpy` by `numpy.fft.rfft(x, M)`.
```
L = 8 # length of signal x[k]
N = 10 # length of signal h[k]
# generate signals
x = np.ones(L)
h = np.ones(N)
# fast convolution
M = N+L-1
y = np.fft.irfft(np.fft.rfft(x, M)*np.fft.rfft(h, M))
# show result
plt.figure(figsize=(10, 3))
plt.stem(y)
plt.xlabel('k')
plt.ylabel('y[k]');
```
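The example above uses the shortest valid length $M = N+L-1$; as noted earlier, padding further to a power of two can speed up the FFTs. A small sketch of how one might pick such a length (the `next_pow2` helper and the signal lengths are illustrative assumptions, not part of the notebook):

```python
import numpy as np

def next_pow2(n):
    """Smallest power of two greater than or equal to n."""
    return 1 << (int(n) - 1).bit_length()

L, N = 300, 500
M_min = L + N - 1         # shortest length avoiding circular aliasing
M_fft = next_pow2(M_min)  # power-of-two length, often faster in practice
print(M_min, M_fft)       # 799 1024

# the padded result contains the linear convolution in its first M_min samples
x, h = np.ones(L), np.ones(N)
y = np.fft.irfft(np.fft.rfft(x, M_fft) * np.fft.rfft(h, M_fft), M_fft)[:M_min]
```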
### Benchmark
It was already argued that the numerical complexity of the fast convolution is considerably lower due to the use of the FFT. As a measure, the gain in execution time with respect to the direct linear convolution is evaluated in the following. Both algorithms are executed for the convolution of two real-valued signals $x_L[k]$ and $h_N[k]$ of length $L=N=2^n$ for $n \in \mathbb{N}$. The length of the FFTs/IFFT was chosen as $M=2^{n+1}$. The results depend heavily on the implementation of the FFT and the hardware used. Note that the execution of the following script may take some time.
```
import timeit
n = np.arange(17) # lengths = 2**n to evaluate
reps = 20 # number of repetitions for timeit
gain = np.zeros(len(n))
for N in n:
    length = 2**N
    # setup environment for timeit
    tsetup = 'import numpy as np; from numpy.fft import rfft, irfft; \
x=np.random.randn(%d); h=np.random.randn(%d)' % (length, length)
    # direct convolution
    tc = timeit.timeit('np.convolve(x, h, "full")', setup=tsetup, number=reps)
    # fast convolution
    tf = timeit.timeit('irfft(rfft(x, %d) * rfft(h, %d))' % (2*length, 2*length), setup=tsetup, number=reps)
    # speedup by using the fast convolution
    gain[N] = tc/tf
# show the results
plt.figure(figsize = (15, 10))
plt.barh(n-.5, gain, log=True)
plt.plot([1, 1], [-1, n[-1]+1], 'r-')
plt.yticks(n, 2**n)
plt.xlabel('Gain of fast convolution')
plt.ylabel('Length of signals')
plt.title('Comparison of execution times between direct and fast convolution')
plt.grid()
```
**Exercise**
* For which lengths is the fast convolution faster than the linear convolution?
* Why is it slower below a given signal length?
* Is the trend of the gain as expected from above considerations?
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
---
# Concise Implementation of Linear Regression
:label:`sec_linear_concise`
Broad and intense interest in deep learning for the past several years
has inspired companies, academics, and hobbyists
to develop a variety of mature open source frameworks
for automating the repetitive work of implementing
gradient-based learning algorithms.
In :numref:`sec_linear_scratch`, we relied only on
(i) tensors for data storage and linear algebra;
and (ii) automatic differentiation for calculating gradients.
In practice, because data iterators, loss functions, optimizers,
and neural network layers
are so common, modern libraries implement these components for us as well.
In this section, (**we will show you how to implement
the linear regression model**) from :numref:`sec_linear_scratch`
(**concisely by using high-level APIs**) of deep learning frameworks.
## Generating the Dataset
To start, we will generate the same dataset as in :numref:`sec_linear_scratch`.
```
import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l
true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)
```
## Reading the Dataset
Rather than rolling our own iterator,
we can [**call upon the existing API in a framework to read data.**]
We pass in `features` and `labels` as arguments and specify `batch_size`
when instantiating a data iterator object.
In addition, the boolean value `is_train`
indicates whether
we want the data iterator object to shuffle the data
on each epoch (pass through the dataset).
```
def load_array(data_arrays, batch_size, is_train=True):  #@save
    """Construct a PyTorch data iterator."""
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)
batch_size = 10
data_iter = load_array((features, labels), batch_size)
```
Now we can use `data_iter` in much the same way as we called
the `data_iter` function in :numref:`sec_linear_scratch`.
To verify that it is working, we can read and print
the first minibatch of examples.
Comparing with :numref:`sec_linear_scratch`,
here we use `iter` to construct a Python iterator and use `next` to obtain the first item from the iterator.
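The same `iter`/`next` mechanics can be seen with a plain Python list (a toy stand-in used only for illustration, not the actual data iterator):

```python
# iter() turns an iterable into an iterator; next() pulls one item at a time
batches = [("X0", "y0"), ("X1", "y1")]
it = iter(batches)
first = next(it)
print(first)  # ('X0', 'y0')
```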
```
next(iter(data_iter))
```
## Defining the Model
When we implemented linear regression from scratch
in :numref:`sec_linear_scratch`,
we defined our model parameters explicitly
and coded up the calculations to produce output
using basic linear algebra operations.
You *should* know how to do this.
But once your models get more complex,
and once you have to do this nearly every day,
you will be glad for the assistance.
The situation is similar to coding up your own blog from scratch.
Doing it once or twice is rewarding and instructive,
but you would be a lousy web developer
if every time you needed a blog you spent a month
reinventing the wheel.
For standard operations, we can [**use a framework's predefined layers,**]
which allow us to focus especially
on the layers used to construct the model
rather than having to focus on the implementation.
We will first define a model variable `net`,
which will refer to an instance of the `Sequential` class.
The `Sequential` class defines a container
for several layers that will be chained together.
Given input data, a `Sequential` instance passes it through
the first layer, in turn passing the output
as the second layer's input and so forth.
In the following example, our model consists of only one layer,
so we do not really need `Sequential`.
But since nearly all of our future models
will involve multiple layers,
we will use it anyway just to familiarize you
with the most standard workflow.
Recall the architecture of a single-layer network as shown in :numref:`fig_single_neuron`.
The layer is said to be *fully-connected*
because each of its inputs is connected to each of its outputs
by means of a matrix-vector multiplication.
In PyTorch, the fully-connected layer is defined in the `Linear` class. Note that we passed two arguments into `nn.Linear`. The first one specifies the input feature dimension, which is 2, and the second one is the output feature dimension, which is a single scalar and therefore 1.
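To make the matrix-vector view concrete, here is a NumPy sketch of what a fully-connected layer with 2 inputs and 1 output computes (the weight and bias values are made up for illustration, not learned parameters):

```python
import numpy as np

# y = x W^T + b: the computation behind a 2-in, 1-out fully-connected layer
W = np.array([[2.0, -3.4]])  # shape (out_features, in_features)
b = np.array([4.2])          # shape (out_features,)

def linear(x):
    return x @ W.T + b

print(linear(np.array([[1.0, 1.0]])))  # [[2.8]]
```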
```
# `nn` is an abbreviation for neural networks
from torch import nn
net = nn.Sequential(nn.Linear(2, 1))
```
## Initializing Model Parameters
Before using `net`, we need to (**initialize the model parameters,**)
such as the weights and bias in the linear regression model.
Deep learning frameworks often have a predefined way to initialize the parameters.
Here we specify that each weight parameter
should be randomly sampled from a normal distribution
with mean 0 and standard deviation 0.01.
The bias parameter will be initialized to zero.
As we have specified the input and output dimensions when constructing `nn.Linear`,
now we can access the parameters directly to specify their initial values.
We first locate the layer by `net[0]`, which is the first layer in the network,
and then access the parameters via the `weight.data` and `bias.data` attributes.
Next we use the in-place methods `normal_` and `fill_` to overwrite parameter values.
```
net[0].weight.data.normal_(0, 0.01)
net[0].bias.data.fill_(0)
```
## Defining the Loss Function
[**The `MSELoss` class computes the mean squared error (without the $1/2$ factor in :eqref:`eq_mse`).**]
By default it returns the average loss over examples.
```
loss = nn.MSELoss()
```
## Defining the Optimization Algorithm
Minibatch stochastic gradient descent is a standard tool
for optimizing neural networks
and thus PyTorch supports it alongside a number of
variations on this algorithm in the `optim` module.
When we (**instantiate an `SGD` instance,**)
we will specify the parameters to optimize over
(obtainable from our net via `net.parameters()`), with a dictionary of hyperparameters
required by our optimization algorithm.
Minibatch stochastic gradient descent just requires that
we set the value `lr`, which is set to 0.03 here.
```
trainer = torch.optim.SGD(net.parameters(), lr=0.03)
```
## Training
You might have noticed that expressing our model through
high-level APIs of a deep learning framework
requires comparatively few lines of code.
We did not have to individually allocate parameters,
define our loss function, or implement minibatch stochastic gradient descent.
Once we start working with much more complex models,
advantages of high-level APIs will grow considerably.
However, once we have all the basic pieces in place,
[**the training loop itself is strikingly similar
to what we did when implementing everything from scratch.**]
To refresh your memory: for some number of epochs,
we will make a complete pass over the dataset (`train_data`),
iteratively grabbing one minibatch of inputs
and the corresponding ground-truth labels.
For each minibatch, we go through the following ritual:
* Generate predictions by calling `net(X)` and calculate the loss `l` (the forward propagation).
* Calculate gradients by running the backpropagation.
* Update the model parameters by invoking our optimizer.
For good measure, we compute the loss after each epoch and print it to monitor progress.
```
num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()
        l.backward()
        trainer.step()
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l:f}')
```
Below, we [**compare the model parameters learned by training on finite data
and the actual parameters**] that generated our dataset.
To access parameters,
we first access the layer that we need from `net`
and then access that layer's weights and bias.
As in our from-scratch implementation,
note that our estimated parameters are
close to their ground-truth counterparts.
```
w = net[0].weight.data
print('error in estimating w:', true_w - w.reshape(true_w.shape))
b = net[0].bias.data
print('error in estimating b:', true_b - b)
```
## Summary
* Using PyTorch's high-level APIs, we can implement models much more concisely.
* In PyTorch, the `data` module provides tools for data processing, the `nn` module defines a large number of neural network layers and common loss functions.
* We can initialize the parameters by replacing their values with methods ending with `_`.
## Exercises
1. If we replace `nn.MSELoss(reduction='sum')` with `nn.MSELoss()`, how can we change the learning rate for the code to behave identically? Why?
1. Review the PyTorch documentation to see what loss functions and initialization methods are provided. Replace the loss by Huber's loss.
1. How do you access the gradient of `net[0].weight`?
[Discussions](https://discuss.d2l.ai/t/45)
---
---

---
# Control flow structures
They are used to change the order in which the steps of an algorithm/program are executed.
<img src="img/conditionals/control-flow.png" width="600px"/>
## Types of control structures
- Conditional structures
- Loops
- Functions
## Conditional structures
### The `if` conditional structure
The `if` structure is used whenever we need to change the course of the program according to one or more **conditions**.
<img src="img/conditionals/if-statement.png" width="250px"/>
#### IF
```py
if <expr>:
    <statement>
```
In the snippet above:
- `<expr>` represents a Boolean expression
- `<statement>` represents any block of Python code
  - This block must be valid; and
  - It must be indented
#### Indentation
Indentation is the whitespace offset given to one or more lines of code
#### Example
*If it rains today, I am going to wash my car!*
#### Example
*If the weather is nice, I will go to the park to relax and take the dog for a walk*
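A minimal `if` sketch of the first example above (the `raining` flag and the messages are assumptions for illustration):

```python
raining = True  # hypothetical condition
message = "No rain today."
if raining:
    message = "I am going to wash my car!"
print(message)  # I am going to wash my car!
```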
### The `else` conditional structure
This structure is used together with `if` and specifies an alternative for when the `if` condition is **not satisfied**.
> `else` can also be used with other control flow structures
<img src="img/conditionals/if-else-statement.png" width="290px"/>
#### IF-ELSE
```py
if <expr>:
    <statement>
else:
    <statement>
```
#### Example
*If the student's grade is greater than or equal to the passing average, the system prints "approved"; otherwise, the system prints "failed"*
### The `elif` conditional structure
This structure is used together with `if` and tests a new condition for when the `if` condition is **not satisfied**.
<img src="img/conditionals/if-elif-statement.png" width="290px"/>
#### IF-ELIF
```py
if <expr>:
    <statement>
elif <expr>:
    <statement>
```
#### IF-ELIF-ELSE
```py
if <expr>:
    <statement>
elif <expr>:
    <statement>
else:
    <statement>
```
#### IF-ELIF-ELIF-ELIF-...-ELIF-ELSE
```py
if <expr>:
    <statement>
elif <expr>:
    <statement>
elif <expr>:
    <statement>
.
.
.
else:
    <statement>
```
#### Example
*If the student's grade is greater than or equal to the passing average, the system prints "approved"; otherwise, if the grade is greater than or equal to the minimum recovery grade, the system prints "recovery"*
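A possible `if`/`elif` sketch of this example (the thresholds are hypothetical):

```python
grade = 5.0
passing_average = 6.0   # hypothetical thresholds
recovery_minimum = 4.0
if grade >= passing_average:
    result = "approved"
elif grade >= recovery_minimum:
    result = "recovery"
else:
    result = "failed"
print(result)  # recovery
```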
#### Example
Create a program that asks for a person's year of birth and reports their age group based on the table below:
| Group | Age |
| ----- | --- |
| Young | up to 14 years |
| PIA* | up to 64 years |
| Elderly | 65 years and over |
> *PIA: Working-Age Population (População em Idade Ativa)
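One possible sketch for this exercise (the cutoff ages follow the table; a fixed reference year is assumed instead of asking for user input):

```python
def age_group(birth_year, current_year=2022):
    # Cutoffs taken from the table: up to 14 -> Young, up to 64 -> PIA
    age = current_year - birth_year
    if age <= 14:
        return "Young"
    elif age <= 64:
        return "PIA"
    else:
        return "Elderly"

print(age_group(2015))  # Young
print(age_group(1990))  # PIA
print(age_group(1950))  # Elderly
```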
#### Challenge
Create a program that asks up to 5 questions and determines which of the five kingdoms a given living being belongs to (Monera, Protista, Fungi, Animalia, or Plantae).
---
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.float_format', lambda x: '%.4f' % x)
import seaborn as sns
sns.set_context("paper", font_scale=1.3)
sns.set_style('white')
import warnings
warnings.filterwarnings('ignore')
from time import time
import matplotlib.ticker as tkr
from scipy import stats
from statsmodels.tsa.stattools import adfuller
from sklearn import preprocessing
from statsmodels.tsa.stattools import pacf
%matplotlib inline
import math
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import *
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from keras.callbacks import EarlyStopping
df=pd.read_csv('3contrat.csv')
print('Number of rows and columns:', df.shape)
df.head(5)
df.dtypes
df['Time'] = pd.to_datetime(df['Time'])
df['year'] = df['Time'].apply(lambda x: x.year)
df['quarter'] = df['Time'].apply(lambda x: x.quarter)
df['month'] = df['Time'].apply(lambda x: x.month)
df['day'] = df['Time'].apply(lambda x: x.day)
df=df.loc[:,['Time','Close', 'year','quarter','month','day']]
df.sort_values('Time', inplace=True, ascending=True)
df = df.reset_index(drop=True)
df["weekday"]=df.apply(lambda row: row["Time"].weekday(),axis=1)
df["weekday"] = (df["weekday"] < 5).astype(int)
print('The time series starts from: ', df.Time.min())
print('The time series ends on: ', df.Time.max())
stat, p = stats.normaltest(df.Close)
print('Statistics=%.3f, p=%.3f' % (stat, p))
alpha = 0.05
if p > alpha:
    print('Data looks Gaussian (fail to reject H0)')
else:
    print('Data does not look Gaussian (reject H0)')
sns.distplot(df.Close);
print( 'Kurtosis of normal distribution: {}'.format(stats.kurtosis(df.Close)))
print( 'Skewness of normal distribution: {}'.format(stats.skew(df.Close)))
```
Kurtosis describes the heaviness of the tails of a distribution.
If the (excess) kurtosis is less than zero, the distribution has light tails.
Skewness measures the asymmetry of the distribution.
If the skewness is between -0.5 and 0.5, the data are fairly symmetrical.
If the skewness is between -1 and -0.5 or between 0.5 and 1, the data are moderately skewed.
If the skewness is less than -1 or greater than 1, the data are highly skewed.
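These two statistics can be reproduced from first principles as standardized moments (a sketch using a synthetic normal sample; `scipy.stats.skew` and `scipy.stats.kurtosis` compute the same quantities by default):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(size=100_000)

def skewness(x):
    z = (x - x.mean()) / x.std()
    return (z**3).mean()          # third standardized moment

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0    # Fisher convention: normal -> 0

# Both are close to 0 for a normal sample: no heavy tails, fairly symmetrical
print(skewness(sample), excess_kurtosis(sample))
```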
```
df1=df.loc[:,['Time','Close']]
df1.set_index('Time',inplace=True)
df1.plot(figsize=(12,5))
plt.ylabel('Close')
plt.legend().set_visible(False)
plt.tight_layout()
plt.title('Close Price Time Series')
sns.despine(top=True)
plt.show();
plt.figure(figsize=(14,5))
plt.subplot(1,2,1)
plt.subplots_adjust(wspace=0.2)
sns.boxplot(x="year", y="Close", data=df)
plt.xlabel('year')
plt.title('Box plot of Yearly Close Price')
sns.despine(left=True)
plt.tight_layout()
plt.subplot(1,2,2)
sns.boxplot(x="quarter", y="Close", data=df)
plt.xlabel('quarter')
plt.title('Box plot of Quarterly Close Price')
sns.despine(left=True)
plt.tight_layout();
plt.figure(figsize=(14,6))
plt.subplot(1,2,1)
df['Close'].hist(bins=50)
plt.title('Close Price Distribution')
plt.subplot(1,2,2)
stats.probplot(df['Close'], plot=plt);
df1.describe().T
df.index = df.Time
fig = plt.figure(figsize=(18,16))
fig.subplots_adjust(hspace=.4)
ax1 = fig.add_subplot(5,1,1)
ax1.plot(df['Close'].resample('D').mean(),linewidth=1)
ax1.set_title('Mean Close Price resampled over day')
ax1.tick_params(axis='both', which='major')
ax2 = fig.add_subplot(5,1,2, sharex=ax1)
ax2.plot(df['Close'].resample('W').mean(),linewidth=1)
ax2.set_title('Mean Close Price resampled over week')
ax2.tick_params(axis='both', which='major')
ax3 = fig.add_subplot(5,1,3, sharex=ax1)
ax3.plot(df['Close'].resample('M').mean(),linewidth=1)
ax3.set_title('Mean Close Price resampled over month')
ax3.tick_params(axis='both', which='major')
ax4 = fig.add_subplot(5,1,4, sharex=ax1)
ax4.plot(df['Close'].resample('Q').mean(),linewidth=1)
ax4.set_title('Mean Close Price resampled over quarter')
ax4.tick_params(axis='both', which='major')
ax5 = fig.add_subplot(5,1,5, sharex=ax1)
ax5.plot(df['Close'].resample('A').mean(),linewidth=1)
ax5.set_title('Mean Close Price resampled over year')
ax5.tick_params(axis='both', which='major');
plt.figure(figsize=(14,8))
plt.subplot(2,2,1)
df.groupby('year').Close.agg('mean').plot()
plt.xlabel('')
plt.title('Mean Close Price by Year')
plt.subplot(2,2,2)
df.groupby('quarter').Close.agg('mean').plot()
plt.xlabel('')
plt.title('Mean Close Price by Quarter')
plt.subplot(2,2,3)
df.groupby('month').Close.agg('mean').plot()
plt.xlabel('')
plt.title('Mean Close Price by Month')
plt.subplot(2,2,4)
df.groupby('day').Close.agg('mean').plot()
plt.xlabel('')
plt.title('Mean Close Price by Day');
pd.pivot_table(df.loc[df['year'] != 2017], values = "Close",
columns = "year", index = "month").plot(subplots = True, figsize=(12, 12), layout=(3, 5), sharey=True);
dic={0:'Weekend',1:'Weekday'}
df['Day'] = df.weekday.map(dic)
a=plt.figure(figsize=(9,4))
plt1=sns.boxplot('year','Close',hue='Day',width=0.6,fliersize=3,
data=df)
a.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=2)
sns.despine(left=True, bottom=True)
plt.xlabel('')
plt.tight_layout()
plt.legend().set_visible(False);
plt1=sns.factorplot('year','Close',hue='Day',
data=df, size=4, aspect=1.5, legend=False)
plt.title('Factor Plot of Close Price by Weekday')
plt.tight_layout()
sns.despine(left=True, bottom=True)
plt.legend(loc='upper right');
df2 = df1.resample('D').mean()  # the 'how' argument to resample is deprecated
def test_stationarity(timeseries):
    rolmean = timeseries.rolling(window=30).mean()
    rolstd = timeseries.rolling(window=30).std()
    plt.figure(figsize=(14, 5))
    sns.despine(left=True)
    orig = plt.plot(timeseries, color='blue', label='Original')
    mean = plt.plot(rolmean, color='red', label='Rolling Mean')
    std = plt.plot(rolstd, color='black', label='Rolling Std')
    plt.legend(loc='best'); plt.title('Rolling Mean & Standard Deviation')
    plt.show()
    print('<Results of Dickey-Fuller Test>')
    dftest = adfuller(timeseries, autolag='AIC')
    dfoutput = pd.Series(dftest[0:4],
                         index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
    for key, value in dftest[4].items():
        dfoutput['Critical Value (%s)' % key] = value
    print(dfoutput)
test_stationarity(df2.Close.dropna())
```
### Dickey-Fuller test
Null Hypothesis (H0): It suggests the time series has a unit root, meaning it is non-stationary. It has some time dependent structure.
Alternate Hypothesis (H1): It suggests the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure.
p-value > 0.05: Fail to reject the null hypothesis (H0); the data has a unit root and is non-stationary.
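The decision rule can be expressed as a one-line helper (a sketch; the conventional significance level of 0.05 is assumed here):

```python
def interpret_adf(p_value, alpha=0.05):
    # Fail to reject H0 (unit root) when p > alpha -> treat as non-stationary
    return "non-stationary" if p_value > alpha else "stationary"

print(interpret_adf(0.42))  # non-stationary
print(interpret_adf(0.01))  # stationary
```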
```
dataset = df.Close.values #numpy.ndarray
dataset = dataset.astype('float32') #arrary of close price
dataset = np.reshape(dataset, (-1, 1)) #make each close price a list [839,],[900,]
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# 80% 20% split test set and training set
train_size = int(len(dataset) * 0.80) # 396
test_size = len(dataset) - train_size # 99
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
def create_dataset(dataset, look_back=1):
    X, Y = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        X.append(a)
        Y.append(dataset[i + look_back, 0])
    return np.array(X), np.array(Y)
look_back = 7
X_train, Y_train = create_dataset(train, look_back) # training
X_test, Y_test = create_dataset(test, look_back) # testing
create_dataset(train, look_back)
# reshape input to be [samples, time steps, features]
X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))
X_train.shape
X_train
model = Sequential()
model.add(LSTM(100, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_absolute_error', optimizer='adam')
history = model.fit(X_train, Y_train, epochs=120, batch_size=15, validation_data=(X_test, Y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=10)], verbose=1, shuffle=False)
model.summary()
# train data
#make prediction
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)
# invert predictions
train_predict = scaler.inverse_transform(train_predict)
Y_train = scaler.inverse_transform([Y_train])
test_predict = scaler.inverse_transform(test_predict)
Y_test = scaler.inverse_transform([Y_test])
print('Train Mean Absolute Error:', mean_absolute_error(Y_train[0], train_predict[:,0]))
print('Train Root Mean Squared Error:',np.sqrt(mean_squared_error(Y_train[0], train_predict[:,0])))
print('Test Mean Absolute Error:', mean_absolute_error(Y_test[0], test_predict[:,0]))
print('Test Root Mean Squared Error:',np.sqrt(mean_squared_error(Y_test[0], test_predict[:,0])))
Y_test
plt.figure(figsize=(8,4))
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Test Loss')
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.legend(loc='upper right')
plt.show();
aa=[x for x in range(48)]
plt.figure(figsize=(8,4))
plt.plot(aa, Y_test[0][:48], marker='.', label="actual")
plt.plot(aa, test_predict[:,0][:48], 'r', label="prediction")
# plt.tick_params(left=False, labelleft=True) #remove ticks
plt.tight_layout()
sns.despine(top=True)
plt.subplots_adjust(left=0.07)
plt.ylabel('Close', size=15)
plt.xlabel('Time step', size=15)
plt.legend(fontsize=15)
plt.show();
Y_test
```
---
# libCEED for Python examples
This is a tutorial to illustrate the main features of the Python interface for [libCEED](https://github.com/CEED/libCEED/), the low-level API library for efficient high-order discretization methods developed by the co-design [Center for Efficient Exascale Discretizations](https://ceed.exascaleproject.org/) (CEED) of the [Exascale Computing Project](https://www.exascaleproject.org/) (ECP).
While libCEED's focus is on high-order finite/spectral element method implementations, the approach is mostly algebraic and thus applicable to other discretizations in factored form, as explained in the [user manual](https://libceed.readthedocs.io/).
## Setting up libCEED for Python
Install libCEED for Python by running
```
! python -m pip install libceed
```
## CeedBasis
Here we show some basic examples to illustrate the `libceed.Basis` class. In libCEED, a `libceed.Basis` defines the finite element basis and associated quadrature rule (see [the API documentation](https://libceed.readthedocs.io/en/latest/libCEEDapi.html#finite-element-operator-decomposition)).
First we declare some auxiliary functions needed in the following examples
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def eval(dim, x):
    result, center = 1, 0.1
    for d in range(dim):
        result *= np.tanh(x[d] - center)
        center += 0.1
    return result
def feval(x1, x2):
    return x1*x1 + x2*x2 + x1*x2 + 1
def dfeval(x1, x2):
    return 2*x1 + x2
```
## $H^1$ Lagrange bases in 1D
The Lagrange interpolation nodes are at the Gauss-Lobatto points, so interpolation to Gauss-Lobatto quadrature points is the identity.
```
import libceed
ceed = libceed.Ceed()
b = ceed.BasisTensorH1Lagrange(
    dim=1,    # topological dimension
    ncomp=1,  # number of components
    P=4,      # number of basis functions (nodes) per dimension
    Q=4,      # number of quadrature points per dimension
    qmode=libceed.GAUSS_LOBATTO)
print(b)
```
Although a `libceed.Basis` is fully discrete, we can use the Lagrange construction to extend the basis to continuous functions by applying `EVAL_INTERP` to the identity. This is the Vandermonde matrix of the continuous basis.
```
P = b.get_num_nodes()
nviz = 50
bviz = ceed.BasisTensorH1Lagrange(1, 1, P, nviz, libceed.GAUSS_LOBATTO)
# Construct P "elements" with one node activated
I = ceed.Vector(P * P)
with I.array(P, P) as x:
    x[...] = np.eye(P)
Bvander = ceed.Vector(P * nviz)
bviz.apply(4, libceed.EVAL_INTERP, I, Bvander)
qviz, _weight = b.lobatto_quadrature(nviz)
with Bvander.array_read(nviz, P) as B:
    plt.plot(qviz, B)
# Mark the Lobatto nodes
qb, _weight = b.lobatto_quadrature(P)
plt.plot(qb, 0*qb, 'ok');
```
In contrast, the Gauss quadrature points are not collocated, and thus all basis functions are generally nonzero at every quadrature point.
```
b = ceed.BasisTensorH1Lagrange(1, 1, 4, 4, libceed.GAUSS)
print(b)
with Bvander.array_read(nviz, P) as B:
    plt.plot(qviz, B)
# Mark the Gauss quadrature points
qb, _weight = b.gauss_quadrature(P)
plt.plot(qb, 0*qb, 'ok');
```
Although the underlying functions are not an intrinsic property of a `libceed.Basis` in libCEED, the sizes are.
Here, we create a 3D tensor product element with more quadrature points than Lagrange interpolation nodes.
```
b = ceed.BasisTensorH1Lagrange(3, 1, 4, 5, libceed.GAUSS_LOBATTO)
p = libceed.Basis.get_num_nodes(b)
print('p =', p)
q = libceed.Basis.get_num_quadrature_points(b)
print('q =', q)
```
* In the following example, we demonstrate the application of an interpolatory basis in multiple dimensions
```
for dim in range(1, 4):
    Q = 4
    Qdim = Q**dim
    Xdim = 2**dim
    x = np.empty(Xdim*dim, dtype="float64")
    uq = np.empty(Qdim, dtype="float64")
    for d in range(dim):
        for i in range(Xdim):
            x[d*Xdim + i] = 1 if (i % (2**(dim-d))) // (2**(dim-d-1)) else -1
    X = ceed.Vector(Xdim*dim)
    X.set_array(x, cmode=libceed.USE_POINTER)
    Xq = ceed.Vector(Qdim*dim)
    Xq.set_value(0)
    U = ceed.Vector(Qdim)
    U.set_value(0)
    Uq = ceed.Vector(Qdim)
    bxl = ceed.BasisTensorH1Lagrange(dim, dim, 2, Q, libceed.GAUSS_LOBATTO)
    bul = ceed.BasisTensorH1Lagrange(dim, 1, Q, Q, libceed.GAUSS_LOBATTO)
    bxl.apply(1, libceed.EVAL_INTERP, X, Xq)
    with Xq.array_read() as xq:
        for i in range(Qdim):
            xx = np.empty(dim, dtype="float64")
            for d in range(dim):
                xx[d] = xq[d*Qdim + i]
            uq[i] = eval(dim, xx)
    Uq.set_array(uq, cmode=libceed.USE_POINTER)
    # This operation is the identity because the quadrature is collocated
    bul.T.apply(1, libceed.EVAL_INTERP, Uq, U)
    bxg = ceed.BasisTensorH1Lagrange(dim, dim, 2, Q, libceed.GAUSS)
    bug = ceed.BasisTensorH1Lagrange(dim, 1, Q, Q, libceed.GAUSS)
    bxg.apply(1, libceed.EVAL_INTERP, X, Xq)
    bug.apply(1, libceed.EVAL_INTERP, U, Uq)
    with Xq.array_read() as xq, Uq.array_read() as u:
        #print('xq =', xq)
        #print('u =', u)
        if dim == 2:
            # Default ordering is contiguous in x direction, but
            # pyplot expects meshgrid convention, which is transposed.
            x, y = xq.reshape(2, Q, Q).transpose(0, 2, 1)
            plt.scatter(x, y, c=np.array(u).reshape(Q, Q))
            plt.xlim(-1, 1)
            plt.ylim(-1, 1)
            plt.colorbar(label='u')
```
* In the following example, we demonstrate the application of the gradient of the shape functions in multiple dimensions
```
for dim in range(1, 4):
    P, Q = 8, 10
    Pdim = P**dim
    Qdim = Q**dim
    Xdim = 2**dim
    sum1 = sum2 = 0
    x = np.empty(Xdim*dim, dtype="float64")
    u = np.empty(Pdim, dtype="float64")
    for d in range(dim):
        for i in range(Xdim):
            x[d*Xdim + i] = 1 if (i % (2**(dim-d))) // (2**(dim-d-1)) else -1
    X = ceed.Vector(Xdim*dim)
    X.set_array(x, cmode=libceed.USE_POINTER)
    Xq = ceed.Vector(Pdim*dim)
    Xq.set_value(0)
    U = ceed.Vector(Pdim)
    Uq = ceed.Vector(Qdim*dim)
    Uq.set_value(0)
    Ones = ceed.Vector(Qdim*dim)
    Ones.set_value(1)
    Gtposeones = ceed.Vector(Pdim)
    Gtposeones.set_value(0)
    # Get function values at quadrature points
    bxl = ceed.BasisTensorH1Lagrange(dim, dim, 2, P, libceed.GAUSS_LOBATTO)
    bxl.apply(1, libceed.EVAL_INTERP, X, Xq)
    with Xq.array_read() as xq:
        for i in range(Pdim):
            xx = np.empty(dim, dtype="float64")
            for d in range(dim):
                xx[d] = xq[d*Pdim + i]
            u[i] = eval(dim, xx)
    U.set_array(u, cmode=libceed.USE_POINTER)
    # Calculate G u at quadrature points, G' * 1 at dofs
    bug = ceed.BasisTensorH1Lagrange(dim, 1, P, Q, libceed.GAUSS)
    bug.apply(1, libceed.EVAL_GRAD, U, Uq)
    bug.T.apply(1, libceed.EVAL_GRAD, Ones, Gtposeones)
    # Check if 1' * G * u = u' * (G' * 1)
    with Gtposeones.array_read() as gtposeones, Uq.array_read() as uq:
        for i in range(Pdim):
            sum1 += gtposeones[i]*u[i]
        for i in range(dim*Qdim):
            sum2 += uq[i]
    # Check that (1' * G * u - u' * (G' * 1)) is numerically zero
    print('1T * G * u - uT * (GT * 1) =', np.abs(sum1 - sum2))
```
### Advanced topics
* In the following example, we demonstrate the QR factorization of a basis matrix.
The representation is similar to LAPACK's [`dgeqrf`](https://www.netlib.org/lapack/explore-html/dd/d9a/group__double_g_ecomputational_ga3766ea903391b5cf9008132f7440ec7b.html#ga3766ea903391b5cf9008132f7440ec7b), with elementary reflectors in the lower triangular block, scaled by `tau`.
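For comparison, the same 4×3 matrix can be factored with NumPy's QR (a cross-check sketch; NumPy returns the explicit `Q` and `R` factors rather than the packed reflector/`tau` representation used below):

```python
import numpy as np

A = np.array([[1., -1.,  4.],
              [1.,  4., -2.],
              [1.,  4.,  2.],
              [1., -1.,  0.]])
Q, R = np.linalg.qr(A)  # reduced QR: Q is 4x3 with orthonormal columns
print(np.allclose(Q @ R, A))  # True
```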
```
qr = np.array([1, -1, 4, 1, 4, -2, 1, 4, 2, 1, -1, 0], dtype="float64")
tau = np.empty(3, dtype="float64")
qr, tau = libceed.Basis.qr_factorization(ceed, qr, tau, 4, 3)
print('qr =')
print(qr.reshape(4, 3))
print('tau =')
print(tau)
```
* In the following example, we demonstrate the symmetric Schur decomposition of a basis matrix
```
A = np.array([0.19996678, 0.0745459, -0.07448852, 0.0332866,
0.0745459, 1., 0.16666509, -0.07448852,
-0.07448852, 0.16666509, 1., 0.0745459,
0.0332866, -0.07448852, 0.0745459, 0.19996678], dtype="float64")
lam = libceed.Basis.symmetric_schur_decomposition(ceed, A, 4)
print("Q =")
for i in range(4):
    for j in range(4):
        if A[j+4*i] <= 1E-14 and A[j+4*i] >= -1E-14:
            A[j+4*i] = 0
        print("%12.8f" % A[j+4*i])
print("lambda =")
for i in range(4):
    if lam[i] <= 1E-14 and lam[i] >= -1E-14:
        lam[i] = 0
    print("%12.8f" % lam[i])
```
* In the following example, we demonstrate the simultaneous diagonalization of a basis matrix
```
M = np.array([0.19996678, 0.0745459, -0.07448852, 0.0332866,
0.0745459, 1., 0.16666509, -0.07448852,
-0.07448852, 0.16666509, 1., 0.0745459,
0.0332866, -0.07448852, 0.0745459, 0.19996678], dtype="float64")
K = np.array([3.03344425, -3.41501767, 0.49824435, -0.11667092,
-3.41501767, 5.83354662, -2.9167733, 0.49824435,
0.49824435, -2.9167733, 5.83354662, -3.41501767,
-0.11667092, 0.49824435, -3.41501767, 3.03344425], dtype="float64")
x, lam = libceed.Basis.simultaneous_diagonalization(ceed, K, M, 4)
print("x =")
for i in range(4):
for j in range(4):
        if abs(x[j+4*i]) <= 1E-14:
x[j+4*i] = 0
print("%12.8f"%x[j+4*i])
print("lambda =")
for i in range(4):
    if abs(lam[i]) <= 1E-14:
lam[i] = 0
print("%12.8f"%lam[i])
```
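The generalized eigenproblem K x = λ M x behind the simultaneous diagonalization can be cross-checked in plain NumPy by reducing it to a standard symmetric problem through a Cholesky factorization of `M` (a verification sketch, not the libCEED implementation):

```python
import numpy as np

# Same mass (M) and stiffness (K) matrices as above, reshaped to 4x4
M = np.array([0.19996678, 0.0745459, -0.07448852, 0.0332866,
              0.0745459, 1., 0.16666509, -0.07448852,
              -0.07448852, 0.16666509, 1., 0.0745459,
              0.0332866, -0.07448852, 0.0745459, 0.19996678]).reshape(4, 4)
K = np.array([3.03344425, -3.41501767, 0.49824435, -0.11667092,
              -3.41501767, 5.83354662, -2.9167733, 0.49824435,
              0.49824435, -2.9167733, 5.83354662, -3.41501767,
              -0.11667092, 0.49824435, -3.41501767, 3.03344425]).reshape(4, 4)

# Reduce K x = lambda M x to a standard symmetric problem via Cholesky M = L L^T
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
lam, Y = np.linalg.eigh(Linv @ K @ Linv.T)
X = Linv.T @ Y  # columns of X diagonalize both: X^T M X = I, X^T K X = diag(lam)

assert np.allclose(X.T @ M @ X, np.eye(4))
assert np.allclose(K @ X, M @ X @ np.diag(lam))
```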
```
from pyalink.alink import *
useLocalEnv(1)
from utils import *
import os
import pandas as pd
pd.set_option('display.max_colwidth', 5000)
pd.set_option('display.html.use_mathjax', False)
DATA_DIR = ROOT_DIR + "mnist" + os.sep
DENSE_TRAIN_FILE = "dense_train.ak";
DENSE_TEST_FILE = "dense_test.ak";
SPARSE_TRAIN_FILE = "sparse_train.ak";
SPARSE_TEST_FILE = "sparse_test.ak";
TABLE_TRAIN_FILE = "table_train.ak";
TABLE_TEST_FILE = "table_test.ak";
VECTOR_COL_NAME = "vec";
LABEL_COL_NAME = "label";
PREDICTION_COL_NAME = "id_cluster";
#c_1
import numpy as np
import gzip, struct
def get_df(image_path, label_path):
with gzip.open(label_path) as flbl:
magic, num = struct.unpack(">II", flbl.read(8))
label = np.frombuffer(flbl.read(), dtype=np.int8)
label = label.reshape(len(label), 1)
with gzip.open(image_path, 'rb') as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
image = np.frombuffer(fimg.read(), dtype=np.uint8).reshape(len(label), rows * cols)
return pd.DataFrame(np.hstack((label, image)))
schema_str = "label int"
for i in range(0, 784):
schema_str = schema_str + ", c_" + str(i) + " double"
if not(os.path.exists(DATA_DIR + TABLE_TRAIN_FILE)) :
BatchOperator\
.fromDataframe(
get_df(DATA_DIR + 'train-images-idx3-ubyte.gz',
DATA_DIR + 'train-labels-idx1-ubyte.gz'),
schema_str
)\
.link(
AkSinkBatchOp().setFilePath(DATA_DIR + TABLE_TRAIN_FILE)
)
BatchOperator.execute()
if not(os.path.exists(DATA_DIR + TABLE_TEST_FILE)) :
BatchOperator\
.fromDataframe(
get_df(DATA_DIR + 't10k-images-idx3-ubyte.gz',
DATA_DIR + 't10k-labels-idx1-ubyte.gz'),
schema_str
)\
.link(
AkSinkBatchOp().setFilePath(DATA_DIR + TABLE_TEST_FILE)
)
BatchOperator.execute()
feature_cols = []
for i in range(0, 784) :
feature_cols.append("c_" + str(i))
if not(os.path.exists(DATA_DIR + DENSE_TRAIN_FILE)) :
AkSourceBatchOp()\
.setFilePath(DATA_DIR + TABLE_TRAIN_FILE)\
.lazyPrint(3)\
.link(
ColumnsToVectorBatchOp()\
.setSelectedCols(feature_cols)\
.setVectorCol(VECTOR_COL_NAME)\
.setReservedCols([LABEL_COL_NAME])
)\
.lazyPrint(3)\
.link(
AkSinkBatchOp().setFilePath(DATA_DIR + DENSE_TRAIN_FILE)
);
BatchOperator.execute();
if not(os.path.exists(DATA_DIR + DENSE_TEST_FILE)) :
AkSourceBatchOp()\
.setFilePath(DATA_DIR + TABLE_TEST_FILE)\
.lazyPrint(3)\
.link(
ColumnsToVectorBatchOp()\
.setSelectedCols(feature_cols)\
.setVectorCol(VECTOR_COL_NAME)\
.setReservedCols([LABEL_COL_NAME])
)\
.lazyPrint(3)\
.link(
AkSinkBatchOp().setFilePath(DATA_DIR + DENSE_TEST_FILE)
);
BatchOperator.execute();
if not(os.path.exists(DATA_DIR + SPARSE_TEST_FILE)) :
source = AkSourceBatchOp()\
.setFilePath(DATA_DIR + TABLE_TEST_FILE)\
.link(
AppendIdBatchOp().setIdCol("row_id")
);
row_id_label = source\
.select("row_id AS id, " + LABEL_COL_NAME)\
.lazyPrint(3, "row_id_label");
row_id_vec = source\
.lazyPrint(3)\
.link(
ColumnsToTripleBatchOp()\
.setSelectedCols(feature_cols)\
.setTripleColumnValueSchemaStr("col string, val double")\
.setReservedCols(["row_id"])
)\
.filter("val<>0")\
.lazyPrint(3)\
.select("row_id, val, CAST(SUBSTRING(col FROM 3) AS INT) AS col")\
.lazyPrint(3)\
.link(
TripleToVectorBatchOp()\
.setTripleRowCol("row_id")\
.setTripleColumnCol("col")\
.setTripleValueCol("val")\
.setVectorCol(VECTOR_COL_NAME)\
.setVectorSize(784)
)\
.lazyPrint(3);
JoinBatchOp()\
.setJoinPredicate("row_id = id")\
.setSelectClause(LABEL_COL_NAME + ", " + VECTOR_COL_NAME)\
.linkFrom(row_id_vec, row_id_label)\
.lazyPrint(3)\
.link(
AkSinkBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE)
);
BatchOperator.execute();
if not(os.path.exists(DATA_DIR + SPARSE_TRAIN_FILE)) :
source = AkSourceBatchOp()\
.setFilePath(DATA_DIR + TABLE_TRAIN_FILE)\
.link(
AppendIdBatchOp().setIdCol("row_id")
);
row_id_label = source\
.select("row_id AS id, " + LABEL_COL_NAME)\
.lazyPrint(3, "row_id_label");
row_id_vec = source\
.lazyPrint(3)\
.link(
ColumnsToTripleBatchOp()\
.setSelectedCols(feature_cols)\
.setTripleColumnValueSchemaStr("col string, val double")\
.setReservedCols(["row_id"])
)\
.filter("val<>0")\
.lazyPrint(3)\
.select("row_id, val, CAST(SUBSTRING(col FROM 3) AS INT) AS col")\
.lazyPrint(3)\
.link(
TripleToVectorBatchOp()\
.setTripleRowCol("row_id")\
.setTripleColumnCol("col")\
.setTripleValueCol("val")\
.setVectorCol(VECTOR_COL_NAME)\
.setVectorSize(784)
)\
.lazyPrint(3);
JoinBatchOp()\
.setJoinPredicate("row_id = id")\
.setSelectClause(LABEL_COL_NAME + ", " + VECTOR_COL_NAME)\
.linkFrom(row_id_vec, row_id_label)\
.lazyPrint(3)\
.link(
AkSinkBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE)
);
BatchOperator.execute();
AkSourceBatchOp()\
.setFilePath(DATA_DIR + DENSE_TRAIN_FILE)\
.lazyPrint(1, "MNIST data")\
.link(
VectorSummarizerBatchOp()\
.setSelectedCol(VECTOR_COL_NAME)\
.lazyPrintVectorSummary()
);
AkSourceBatchOp()\
.setFilePath(DATA_DIR + SPARSE_TRAIN_FILE)\
.lazyPrint(1, "MNIST data")\
.link(
VectorSummarizerBatchOp()\
.setSelectedCol(VECTOR_COL_NAME)\
.lazyPrintVectorSummary()
);
AkSourceBatchOp()\
.setFilePath(DATA_DIR + SPARSE_TRAIN_FILE)\
.lazyPrintStatistics()\
.groupBy(LABEL_COL_NAME, LABEL_COL_NAME + ", COUNT(*) AS cnt")\
.orderBy("cnt", 100)\
.lazyPrint(-1);
BatchOperator.execute()
#c_2
train_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
test_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);
Softmax()\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.enableLazyPrintTrainInfo()\
.enableLazyPrintModelInfo()\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("Softmax")
);
BatchOperator.execute()
#c_3
train_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
test_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);
OneVsRest()\
.setClassifier(
LogisticRegression()\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)
)\
.setNumClass(10)\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("OneVsRest - LogisticRegression")
);
OneVsRest()\
.setClassifier(
LinearSvm()\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)
)\
.setNumClass(10)\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("OneVsRest - LinearSvm")
);
BatchOperator.execute();
#c_4
useLocalEnv(4)
train_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
test_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);
MultilayerPerceptronClassifier()\
.setLayers([784, 10])\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("MultilayerPerceptronClassifier {784, 10}")
);
BatchOperator.execute();
MultilayerPerceptronClassifier()\
.setLayers([784, 256, 128, 10])\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("MultilayerPerceptronClassifier {784, 256, 128, 10}")
);
BatchOperator.execute();
#c_5
useLocalEnv(4)
train_data = AkSourceBatchOp().setFilePath(DATA_DIR + TABLE_TRAIN_FILE)
test_data = AkSourceBatchOp().setFilePath(DATA_DIR + TABLE_TEST_FILE)
featureColNames = train_data.getColNames()
featureColNames.remove(LABEL_COL_NAME)
train_data.lazyPrint(5)
BatchOperator.execute()
sw = Stopwatch()
for treeType in ['GINI', 'INFOGAIN', 'INFOGAINRATIO'] :
sw.reset()
sw.start()
DecisionTreeClassifier()\
.setTreeType(treeType)\
.setFeatureCols(featureColNames)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.enableLazyPrintModelInfo()\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("DecisionTreeClassifier " + treeType)
);
BatchOperator.execute()
sw.stop()
print(sw.getElapsedTimeSpan())
for numTrees in [2, 4, 8, 16, 32, 64, 128] :
sw.reset();
sw.start();
RandomForestClassifier()\
.setSubsamplingRatio(0.6)\
.setNumTreesOfInfoGain(numTrees)\
.setFeatureCols(featureColNames)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.enableLazyPrintModelInfo()\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("RandomForestClassifier : " + str(numTrees))
);
BatchOperator.execute();
sw.stop();
print(sw.getElapsedTimeSpan());
#c_6
useLocalEnv(4)
train_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
test_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);
KnnClassifier()\
.setK(3)\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("KnnClassifier - 3 - EUCLIDEAN")
);
BatchOperator.execute();
KnnClassifier()\
.setDistanceType('COSINE')\
.setK(3)\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("KnnClassifier - 3 - COSINE")
);
BatchOperator.execute();
KnnClassifier()\
.setK(7)\
.setVectorCol(VECTOR_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.fit(train_data)\
.transform(test_data)\
.link(
EvalMultiClassBatchOp()\
.setLabelCol(LABEL_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.lazyPrintMetrics("KnnClassifier - 7 - EUCLIDEAN")
);
BatchOperator.execute();
```
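The sparse-file construction above funnels dense columns through a `(row, col, val)` triple representation (`ColumnsToTripleBatchOp` → filter `val<>0` → `TripleToVectorBatchOp`). The same idea in plain NumPy/Python, as a toy sketch rather than the Alink API:

```python
import numpy as np

# Toy stand-in for two dense image rows: most pixel values are zero
X = np.array([[0., 3., 0.],
              [1., 0., 2.]])

# Step 1: flatten to (row, col, val) triples, keeping only val != 0
rows, cols = np.nonzero(X)
triples = [(int(r), int(c), float(X[r, c])) for r, c in zip(rows, cols)]

# Step 2: rebuild each row as a sparse {col: val} vector
sparse = {}
for r, c, v in triples:
    sparse.setdefault(r, {})[c] = v

print(triples)  # [(0, 1, 3.0), (1, 0, 1.0), (1, 2, 2.0)]
```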
```
import spark
%reload_ext spark
%%ignite
# Happy path:
# fill_style accepts strings, rgb and rgba inputs
# fill_style caps out-of-bound numbers to respective ranges
def setup():
size(200, 200)
print("fill_style renders fill color ")
def draw():
global color
clear()
with_string()
with_rgb_in_bounds()
with_rgb_out_of_bounds()
with_rgba_in_bounds()
with_rgba_out_of_bounds()
def expect_fill_style(expected):
global canvas
if canvas.fill_style != expected:
print("FAIL:\n\tExpected canvas.fill_style to be:\n\t\t" + expected)
print("\tbut received:\n\t\t" + str(canvas.fill_style))
def with_string():
print("with string input")
fill_style('green') # Expected colour: green
fill_rect(0, 0, 30, 30)
expect_fill_style('green')
def with_rgb_in_bounds():
print("with rgb in bounds")
fill_style(0, 0, 255) # Expected colour: blue
fill_rect(0, 40, 30, 30)
expect_fill_style('rgb(0, 0, 255)')
def with_rgb_out_of_bounds():
print("with rgb out of bounds")
fill_style(-100, -200, 500) # Expected colour: blue
fill_rect(40, 40, 30, 30)
expect_fill_style('rgb(0, 0, 255)')
def with_rgba_in_bounds():
print("with rgba in bounds")
fill_style(255, 0, 0, 0.3) # Expected colour: translucent red
fill_rect(0, 80, 30, 30)
expect_fill_style('rgba(255, 0, 0, 0.3)')
def with_rgba_out_of_bounds():
print("with rgba out of bounds")
fill_style(500, -1, -1000, 2) # Expected colour: solid red. Note sending 2 instead of 2.0
fill_rect(40, 80, 30, 30)
expect_fill_style('rgba(255, 0, 0, 1.0)')
%%ignite
# Unhappy path
# Incorrect number of args is rejected
# Non-ints are rejected for RGB
# None as arg is rejected
def setup():
print("fill_style throws exceptions")
size(100, 100)
expect_type_error(with_missing_args, "fill_style expected 1, 3 or 4 arguments, got 0")
expect_type_error(with_none_in_rgba, "fill_style expected None to be an int")
expect_type_error(with_string_in_rgb, "fill_style expected 'x' to be an int")
expect_type_error(with_float_in_rgb, "fill_style expected 128.0 to be an int")
# TODO: This test expects a different error type
# expect_type_error(with_none_in_string, "The 'fill_style' trait of a Canvas instance expected a valid HTML color, not the NoneType None")
def expect_type_error(func, expected_error):
try:
func()
except TypeError as e:
if str(e) != expected_error:
print("FAIL:\n\tExpected " + str(func.__name__) + " to raise error:\n\t\t" + expected_error)
print("\tbut received:\n\t\t" + str(e))
def with_missing_args():
print("with missing args")
fill_style()
def with_none_in_string():
print("with None in string")
fill_style(None)
def with_none_in_rgba():
print("with None-types in rgba")
fill_style(None, None, None, None)
def with_string_in_rgb():
print("with string in rgb")
fill_style('x', 'y', 'z')
def with_float_in_rgb():
print("with float in rgb")
fill_style(128.0, 128, 128)
```
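The out-of-bounds expectations above — RGB channels capped to `[0, 255]` and alpha to `[0.0, 1.0]` — can be sketched with a small helper. This is a hypothetical illustration of the clamping rule the tests assert, not spark's actual implementation:

```python
def clamp_fill_style(r, g, b, a=1.0):
    # Cap each RGB channel to [0, 255] and alpha to [0.0, 1.0]
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    r, g, b = (clamp(int(v), 0, 255) for v in (r, g, b))
    return 'rgba(%d, %d, %d, %.1f)' % (r, g, b, clamp(float(a), 0.0, 1.0))

print(clamp_fill_style(500, -1, -1000, 2))  # rgba(255, 0, 0, 1.0)
```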
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/husein/t5/prepare/mesolitica-tpu.json'
import malaya_speech.train.model.conformer as conformer
import malaya_speech.train.model.transducer as transducer
import malaya_speech
import tensorflow as tf
import numpy as np
import json
from glob import glob
import pandas as pd
subwords = malaya_speech.subword.load('transducer-singlish.subword')
featurizer = malaya_speech.tf_featurization.STTFeaturizer(
normalize_per_feature = True
)
n_mels = 80
sr = 16000
maxlen = 18
minlen_text = 1
from pydub import AudioSegment  # needed for AudioSegment below
def mp3_to_wav(file, sr = sr):
audio = AudioSegment.from_file(file)
audio = audio.set_frame_rate(sr).set_channels(1)
sample = np.array(audio.get_array_of_samples())
return malaya_speech.astype.int_to_float(sample), sr
def generate(file):
print(file)
with open(file) as fopen:
audios = json.load(fopen)
for i in range(len(audios)):
try:
audio = audios[i][0]
wav_data, _ = malaya_speech.load(audio, sr = sr)
if (len(wav_data) / sr) > maxlen:
# print(f'skipped audio too long {audios[i]}')
continue
if len(audios[i][1]) < minlen_text:
# print(f'skipped text too short {audios[i]}')
continue
t = malaya_speech.subword.encode(
subwords, audios[i][1], add_blank = False
)
back = np.zeros(shape=(2000,))
front = np.zeros(shape=(200,))
wav_data = np.concatenate([front, wav_data, back], axis=-1)
yield {
'waveforms': wav_data,
'targets': t,
'targets_length': [len(t)],
}
except Exception as e:
print(e)
def preprocess_inputs(example):
s = featurizer.vectorize(example['waveforms'])
mel_fbanks = tf.reshape(s, (-1, n_mels))
length = tf.cast(tf.shape(mel_fbanks)[0], tf.int32)
length = tf.expand_dims(length, 0)
example['inputs'] = mel_fbanks
example['inputs_length'] = length
example.pop('waveforms', None)
example['targets'] = tf.cast(example['targets'], tf.int32)
example['targets_length'] = tf.cast(example['targets_length'], tf.int32)
return example
def get_dataset(
file,
batch_size = 3,
shuffle_size = 20,
thread_count = 24,
maxlen_feature = 1800,
):
def get():
dataset = tf.data.Dataset.from_generator(
generate,
{
'waveforms': tf.float32,
'targets': tf.int32,
'targets_length': tf.int32,
},
output_shapes = {
'waveforms': tf.TensorShape([None]),
'targets': tf.TensorShape([None]),
'targets_length': tf.TensorShape([None]),
},
args = (file,),
)
dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE)
dataset = dataset.map(
preprocess_inputs, num_parallel_calls = thread_count
)
dataset = dataset.padded_batch(
batch_size,
padded_shapes = {
'inputs': tf.TensorShape([None, n_mels]),
'inputs_length': tf.TensorShape([None]),
'targets': tf.TensorShape([None]),
'targets_length': tf.TensorShape([None]),
},
padding_values = {
'inputs': tf.constant(0, dtype = tf.float32),
'inputs_length': tf.constant(0, dtype = tf.int32),
'targets': tf.constant(0, dtype = tf.int32),
'targets_length': tf.constant(0, dtype = tf.int32),
},
)
return dataset
return get
dev_dataset = get_dataset('test-set-imda.json', batch_size = 3)()
features = dev_dataset.make_one_shot_iterator().get_next()
features
training = True
config = malaya_speech.config.conformer_base_encoder_config
config['dropout'] = 0.0
conformer_model = conformer.Model(
kernel_regularizer = None, bias_regularizer = None, **config
)
decoder_config = malaya_speech.config.conformer_base_decoder_config
decoder_config['embed_dropout'] = 0.0
transducer_model = transducer.rnn.Model(
conformer_model, vocabulary_size = subwords.vocab_size, **decoder_config
)
targets_length = features['targets_length'][:, 0]
v = tf.expand_dims(features['inputs'], -1)
z = tf.zeros((tf.shape(features['targets'])[0], 1), dtype = tf.int32)
c = tf.concat([z, features['targets']], axis = 1)
logits = transducer_model([v, c, targets_length + 1], training = training)
decoded = transducer_model.greedy_decoder(v, features['inputs_length'][:, 0], training = training)
decoded
sess = tf.Session()
sess.run(tf.global_variables_initializer())
var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
saver = tf.train.Saver(var_list = var_list)
saver.restore(sess, 'asr-base-conformer-transducer-singlish/model.ckpt-800000')
wer, cer = [], []
index = 0
while True:
try:
r = sess.run([decoded, features['targets']])
for no, row in enumerate(r[0]):
d = malaya_speech.subword.decode(subwords, row[row > 0])
t = malaya_speech.subword.decode(subwords, r[1][no])
wer.append(malaya_speech.metrics.calculate_wer(t, d))
cer.append(malaya_speech.metrics.calculate_cer(t, d))
index += 1
except Exception as e:
break
np.mean(wer), np.mean(cer)
for no, row in enumerate(r[0]):
d = malaya_speech.subword.decode(subwords, row[row > 0])
t = malaya_speech.subword.decode(subwords, r[1][no])
print(no, d)
print(t)
print()
```
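`malaya_speech.metrics.calculate_wer` above scores each decoded transcript against its reference; word error rate is simply word-level edit distance divided by reference length, which can be sketched as:

```python
def wer(ref, hyp):
    # Word error rate: Levenshtein distance over word sequences / reference length
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

print(wer('how are you', 'how is you'))  # one substitution out of three words
```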
```
import datafaucet as dfc
# start the engine
project = dfc.project.load()
spark = dfc.context()
df = spark.range(100)
df.data.grid()
(df
.cols.get('name').obscure(alias='enc')
.cols.get('enc').unravel(alias='dec')
).data.grid()
df.data.grid().groupby(['id', 'name'])\
.agg({'fight':[max, 'min'], 'trade': 'count'}).stack(0)
from pyspark.sql import functions as F
df.cols.groupby('id', 'name')\
    .agg({'fight':[F.max, 'min'], 'trade': 'min'}).data.grid()
df.cols.groupby('id', 'name')\
.agg({'fight':[F.max, 'min'], 'trade': 'count'}, stack=True).data.grid()
from pyspark.sql import functions as F
df.groupby('id', 'name').agg(
F.lit('fight').alias('colname'),
F.min('fight').alias('min'),
F.max('fight').alias('max'),
F.lit(None).alias('count')).union(
df.groupby('id', 'name').agg(
F.lit('trade').alias('colname'),
F.lit(None).alias('min'),
F.lit(None).alias('max'),
F.count('trade').alias('count'))
).data.grid()
def string2func(func):
if isinstance(func, str):
        f = A.all.get(func)  # A: assumed registry mapping aggregation names to functions
if f:
return (func,f)
else:
raise ValueError(f'function {func} not found')
elif isinstance(func, (type(lambda x: x), type(max))):
return (func.__name__, func)
else:
raise ValueError('Invalid aggregation function')
def parse_single_func(func):
if isinstance(func, (str, type(lambda x: x), type(max))):
return string2func(func)
elif isinstance(func, (tuple)):
if len(func)==2:
return (func[0], string2func(func[1])[1])
else:
raise ValueError('Invalid list/tuple')
else:
raise ValueError(f'Invalid aggregation item {func}')
def parse_list_func(func):
func = [func] if type(func)!=list else func
return [parse_single_func(x) for x in func]
def parse_dict_func(func):
func = {0: func} if not isinstance(func, dict) else func
return {x[0]:parse_list_func(x[1]) for x in func.items()}
lst = [
F.max,
'max',
('maxx', F.max),
('maxx', 'max'),
['max', F.max, ('maxx', F.max)],
{'a': F.max},
{'a': 'max'},
{'a': ('maxx', F.max)},
{'a': ('maxx', 'max')},
{'a': ['max', F.max, ('maxx', F.max)]},
{'a': F.max, 'b': F.max},
{'a': 'max', 'b': 'max'},
{'a': ('maxx', F.max), 'b': ('maxx', F.max)},
{'a': ('maxx', 'max'), 'b': ('maxx', 'max')},
{'a': ['max', F.max, ('maxx', F.max)], 'b': ['min', F.min, ('minn', F.min)]}
]
for i in lst:
print('=====')
print(i)
funcs = parse_dict_func(i)
all_cols = set()
for k, v in funcs.items():
all_cols = all_cols.union(( x[0] for x in v ))
print('all_cols:', all_cols)
for c in ['a', 'b']:
print('-----', c, '-----')
agg_funcs = funcs.get(0, funcs.get(c))
if agg_funcs is None:
continue
agg_cols = set([x[0] for x in agg_funcs])
null_cols = all_cols - agg_cols
print('column',c)
print('all ',all_cols)
print('agg ',agg_cols)
print('null ', null_cols)
for n,f in agg_funcs:
print(c, n,f)
df.cols.groupby('id', 'name').agg({
'fight':['sum', 'min', 'max'],
'trade':['max', 'count']}).data.grid()
pdf = df.data.grid()
help(pdf.agg)
# hash / rand columns which you wish to protect during ingest
df = (df
.cols.find('greedy').rand()
.cols.get('name').hashstr(salt='foobar')
.rows.sample(3)
)
df.data.grid()
from pyspark.sql import functions as F
df.cols.agg({'type':'type', 'sample':'first'}).data.grid()
df.save('races', 'minio')
dfc.list('minio', 'races').data.grid()
```
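The `agg`-with-a-dict-of-functions-then-`stack(0)` pattern explored above pivots per-column aggregates into a long table; plain pandas behaves the same way (a sketch assuming only pandas is available):

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2],
                   'fight': [3, 5, 2],
                   'trade': [1, 1, 4]})

# A dict of per-column aggregate lists yields MultiIndex columns (column, func);
# stacking level 0 turns each source column into its own row
out = df.groupby('id').agg({'fight': ['min', 'max'], 'trade': ['count']}).stack(0)
print(out)
```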
# Sonar - Decentralized Model Training Simulation (local)
DISCLAIMER: This is a proof-of-concept implementation. It does not represent a remotely product ready implementation or follow proper conventions for security, convenience, or scalability. It is part of a broader proof-of-concept demonstrating the vision of the OpenMined project, its major moving parts, and how they might work together.
# Getting Started: Installation
##### Step 1: install IPFS
- https://ipfs.io/docs/install/
##### Step 2: Turn on IPFS Daemon
Execute on command line:
> ipfs daemon
##### Step 3: Install Ethereum testrpc
- https://github.com/ethereumjs/testrpc
##### Step 4: Turn on testrpc with 1000 initialized accounts (each with some money)
Execute on command line:
> testrpc -a 1000
##### Step 5: install openmined/sonar and all dependencies (truffle)
##### Step 6: Locally Deploy Smart Contracts in openmined/sonar
From the OpenMined/Sonar repository root run
> truffle compile
> truffle migrate
you should see something like this when you run migrate:
```
Using network 'development'.
Running migration: 1_initial_migration.js
Deploying Migrations...
Migrations: 0xf06039885460a42dcc8db5b285bb925c55fbaeae
Saving successful migration to network...
Saving artifacts...
Running migration: 2_deploy_contracts.js
Deploying ConvertLib...
ConvertLib: 0x6cc86f0a80180a491f66687243376fde45459436
Deploying ModelRepository...
ModelRepository: 0xe26d32efe1c573c9f81d68aa823dcf5ff3356946
Linking ConvertLib to MetaCoin
Deploying MetaCoin...
MetaCoin: 0x6d3692bb28afa0eb37d364c4a5278807801a95c5
```
The address after 'ModelRepository' is something you'll need to copy paste into the code
below when you initialize the "ModelRepository" object. In this case the address to be
copy pasted is `0xe26d32efe1c573c9f81d68aa823dcf5ff3356946`.
##### Step 7: execute the following code
# The Simulation: Diabetes Prediction
In this example, a diabetes research center (Cure Diabetes Inc) wants to train a model to try to predict the progression of diabetes based on several indicators. They have collected a small sample (42 patients) of data but it's not enough to train a model. So, they intend to offer up a bounty of $5,000 to the OpenMined community to train a high quality model.
As it turns out, there are 400 diabetics in the network who are candidates for the model (i.e., they are collecting the relevant fields). In this simulation, we're going to facilitate the training of Cure Diabetes Inc's model by incentivizing these 400 anonymous contributors to train it using the Ethereum blockchain.
Note, in this simulation we're only going to use the sonar and syft packages (and everything is going to be deployed locally on a test blockchain). Future simulations will incorporate mine and capsule for greater anonymity and automation.
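The incentive flow — post a bounty, accept gradients, pay contributors in proportion to measured error improvement — can be illustrated abstractly. The function below is a hypothetical payout rule for intuition only; the real logic lives in the ModelRepository smart contract deployed earlier, not in this sketch:

```python
def payout(bounty, initial_error, target_error, old_error, new_error):
    # Hypothetical rule: pay a share of the bounty proportional to this
    # gradient's error reduction, relative to the total improvement requested
    total_needed = initial_error - target_error
    improvement = max(0, old_error - new_error)
    return bounty * improvement / total_needed

# A gradient that closes 10% of the requested error gap earns 10% of the bounty
print(payout(bounty=5000, initial_error=20000, target_error=10000,
             old_error=20000, new_error=19000))  # 500.0
```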
### Imports and Convenience Functions
```
import warnings
import numpy as np
import phe as paillier
from sonar.contracts import ModelRepository,Model
from syft.he.paillier.keys import KeyPair
from syft.nn.linear import LinearClassifier
from sklearn.datasets import load_diabetes
def get_balance(account):
return repo.web3.fromWei(repo.web3.eth.getBalance(account),'ether')
warnings.filterwarnings('ignore')
```
### Setting up the Experiment
```
# for the purpose of the simulation, we're going to split our dataset up amongst
# the relevant simulated users
diabetes = load_diabetes()
y = diabetes.target
X = diabetes.data
validation = (X[0:5],y[0:5])
anonymous_diabetes_users = (X[6:],y[6:])
# we're also going to initialize the model trainer smart contract, which in the
# real world would already be on the blockchain (managing other contracts) before
# the simulation begins
# ATTENTION: copy paste the correct address (NOT THE DEFAULT SEEN HERE) from truffle migrate output.
repo = ModelRepository('0x6c7a23081b37e64adc5500c12ee851894d9fd500', ipfs_host='localhost', web3_host='localhost') # blockchain hosted model repository
# we're going to set aside 10 accounts for our 42 patients
# Let's go ahead and pair each data point with each patient's
# address so that we know we don't get them confused
patient_addresses = repo.web3.eth.accounts[1:10]
anonymous_diabetics = list(zip(patient_addresses,
anonymous_diabetes_users[0],
anonymous_diabetes_users[1]))
# we're going to set aside 1 account for Cure Diabetes Inc
cure_diabetes_inc = repo.web3.eth.accounts[1]
```
## Step 1: Cure Diabetes Inc Initializes a Model and Provides a Bounty
```
pubkey,prikey = KeyPair().generate(n_length=1024)
diabetes_classifier = LinearClassifier(desc="DiabetesClassifier",n_inputs=10,n_labels=1)
initial_error = diabetes_classifier.evaluate(validation[0],validation[1])
diabetes_classifier.encrypt(pubkey)
diabetes_model = Model(owner=cure_diabetes_inc,
syft_obj = diabetes_classifier,
bounty = 1,
initial_error = initial_error,
target_error = 10000
)
model_id = repo.submit_model(diabetes_model)
```
## Step 2: An Anonymous Patient Downloads the Model and Improves It
```
model_id
model = repo[model_id]
diabetic_address,input_data,target_data = anonymous_diabetics[0]
repo[model_id].submit_gradient(diabetic_address,input_data,target_data)
```
## Step 3: Cure Diabetes Inc. Evaluates the Gradient
```
repo[model_id]
old_balance = get_balance(diabetic_address)
print(old_balance)
new_error = repo[model_id].evaluate_gradient(cure_diabetes_inc,repo[model_id][0],prikey,pubkey,validation[0],validation[1])
new_error
new_balance = get_balance(diabetic_address)
incentive = new_balance - old_balance
print(incentive)
```
## Step 4: Rinse and Repeat
```
model
for i,(addr, input, target) in enumerate(anonymous_diabetics):
try:
model = repo[model_id]
# patient is doing this
model.submit_gradient(addr,input,target)
# Cure Diabetes Inc does this
old_balance = get_balance(addr)
new_error = model.evaluate_gradient(cure_diabetes_inc,model[i+1],prikey,pubkey,validation[0],validation[1],alpha=2)
print("new error = "+str(new_error))
incentive = round(get_balance(addr) - old_balance,5)
print("incentive = "+str(incentive))
    except:
        print("Connection Reset")
```
```
from urllib2 import Request, urlopen
from urlparse import urlparse, urlunparse
import requests, requests_cache
import pandas as pd
import json
import os
import numpy as np
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
# In terminal: conda install requests
# In terminal: pip install requests_cache
pwd
os.chdir('/Users/kaijin/Downloads')
KAWEAH=pd.read_csv('San_Joaquin_Valley.csv')
KAWEAH['count'] = pd.Series(1, index =KAWEAH.index )
f = {'Acres':['sum'], 'WaterUsage':['mean'], 'UsageTotal':['sum'], 'count':['sum']}
KAWEAH.groupby(['Subbasin_N', 'County_N', 'Year', 'CropName']).agg(f).head()
county_name=np.unique(KAWEAH["County_N"])
```
Let's extract the zip code for each county name.
```
for name in county_name:
    print name
zipcode=[93210,93263,93202,93638,93620,95641,95242,95326,93201]
ZipcodeList=[{ "County_N":county_name[i], "zipcode":zipcode[i] } for i in range(len(zipcode))]
COUNTYZIP=pd.DataFrame(ZipcodeList, columns=["County_N", "zipcode"])
COUNTYZIP
```
Let's extract the zip codes and precipitation data from the California Department of Water Resources:
http://et.water.ca.gov/Rest/Index
```
start="2010-01-01"
def ndb_search(term,start,end,verbose = False):
"""
    This takes all of the necessary parameters to form a query.
    Input: term (zipcode, int or string), start/end (date strings, 'YYYY-MM-DD')
Output: JSON object
"""
url = "http://et.water.ca.gov/api/data"
response = requests.get(url, params = {
"targets": term,
"appKey":"90e36c84-3f23-48a3-becd-1865076a04fd",
"startDate":start,
"EndDate":end,
"dataItems": "day-precip"
})
response.raise_for_status() # check for errors
if verbose:
print response.url
return response.json() # parse JSON
# example query; 93201 is taken from the zip code list above (assumed Tulare County)
Tulare2010 = ndb_search(93201, "2010-01-01", "2010-12-31")
Tulare2010_Recode=Tulare2010["Data"]["Providers"][0]['Records']
len(Tulare2010_Recode)
#note inside a county there may be multilple station that recode the data
# we take the mean then times 365 to get one year rain
# note the value is inches
precip=[ Tulare2010_Recode[i]['DayPrecip']['Value'] for i in range(len(Tulare2010_Recode))]
precip2=np.array(precip).astype(np.float)
#precip2
#WRITE INTO FUNCTIONS
def precip_cal(term,year,verbose = False):
    """
    This takes a zipcode and a year and gives the total precipitation for that year
    Input: term (zipcode, int), year (year, int)
    Output: precipitation (inches) over the year for a certain county
    """
    start="{}-01-01".format(year)
    end="{}-12-31".format(year)
Tulare2010=ndb_search(term,start,end,verbose = False)
Tulare2010_Recode=Tulare2010["Data"]["Providers"][0]['Records']
precip=[ Tulare2010_Recode[i]['DayPrecip']['Value'] for i in range(len(Tulare2010_Recode))]
precip2=np.array(precip).astype(np.float)
    return np.nanmean(precip2)*365 # mean daily precipitation (inches) scaled to a full year
year=[2010,2011,2012,2013,2014,2015]
ZipcodeList=[{ "County_N":county_name[i], "zipcode":zipcode[i],"year":year[j]} for i in range(len(zipcode)) for j in range(6) ]
ZipcodeList
COUNTYYear=pd.DataFrame(ZipcodeList, columns=["County_N", "zipcode","year"])
x=[precip_cal(COUNTYYear["zipcode"][i],COUNTYYear["year"][i]) for i in xrange(54) ]
COUNTYYear["Precip"]=x
COUNTYYear
# unit for precip is inch
newtable=pd.merge(KAWEAH, COUNTYYear,how="right")
f = {'Acres':['sum'], 'WaterUsage':['mean'], 'UsageTotal':['sum'], 'count':['sum'],"Precip":['mean']}
grouped_data=newtable.groupby(['Subbasin_N', 'County_N', 'Year', 'CropName']).agg(f)
```
Crop values are extracted from
https://www.nass.usda.gov/Statistics_by_State/California/Publications/California_Ag_Statistics/CAFieldCrops.pdf
(cwt is a hundredweight, i.e. 100 pounds). More reports are available at
https://www.nass.usda.gov/Statistics_by_State/California/Publications/
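Each entry in the `Econ_dict` below multiplies a per-acre yield by a unit price to get dollars per acre, and mixed categories average several crops (mirroring the notebook's `avg` helper). For example, alfalfa at 7.0 tons/acre and $206/ton:

```python
def avg(l):
    return sum(l, 0.0) / len(l)

def econ_value(yield_per_acre, price_per_unit):
    # dollars per acre = yield (units/acre) * price ($/unit)
    return yield_per_acre * price_per_unit

# Alfalfa: 7.0 tons/acre * $206/ton
print(econ_value(7.0, 206.00))  # 1442.0
# Mixed categories average per-crop values, e.g. onions and garlic for 'On Gar'
print(avg([400 * 13.20, 165 * 60.30]))
```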
```
cropname=np.unique(KAWEAH["CropName"])
cropname
for i in range(len(cropname)):
    print cropname[i]
len(cropname)
def avg(l):
return sum(l, 0.0) / len(l)
avg([1*3,2*5])
1628*140.00
avg([ 8.88*466 ,5.73*682 ,2.48*3390 ,19.00*391,8.33*780,14.10*429 ,5.30*664 , 1.76 *3710,1750*2.06 ])
# data from price value in 2013
# econ value is dollar per acers
Econ_dict = { "Al Pist":2360*3.21,
"Alfalfa":7.0*206.00,
"Corn": 26.50*48.23,
"Cotton":1628*140.00,
"Cucurb":avg([260*20.20, 180*35.40, 200*25.90,580*13.00,300*16.00,330*15.60]),
#Honeydew Melons 260 2,730,000 20.20 Cwt. Cwt. $/Cwt.
#"Squash" 180 1,224,000 35.40 Cwt. Cwt. $/Cwt.
#"Cucumbers" 200 760,000 25.90 Cwt. Cwt. $/Cwt.
#"Watermelons" 580 5,800,000 13.00 Cwt. Cwt. $/Cwt.
#"Cantaloupes" 300 12,750,000 16.00 Cwt. Cwt. $/Cwt.
#"Pumpkins 330 1,947,000 15.60 Cwt. Cwt. $/Cwt.
"DryBean": 2320*56.80,
"Grain":5.35*190.36,
"On Gar":avg([ 400*13.20,165*60.30 ]),
#"Onions" spring 400 2,720,000 13.20 summer 490 3,822,000 6.40 Onions, Summer Storage 399 11,700,000 9.11
# "Garlic" 165 3,795,000 60.30
"Oth Dec":avg([ 8.88*466 ,5.73*682 ,2.48*3390 ,19.00*391,8.33*780,14.10*429 ,5.30*664 , 1.76 *3710,1750*2.06 ]),
#"Apples" 8.88 135,000 466 Tons Tons $/Ton
#"Apricots" 5.73 54,400 682 Tons Tons $/Ton
#"Cherries", 2.48 82,000 3,390 Tons Tons $/Ton
#"Pears", 19.00 220,000 391 Tons Tons $/Ton
#"Nectarines" 8.33 150,000 780 Tons Tons $/Ton
#"Peaches", 14.10 648,000 429 Tons Tons $/Ton
#"Plums", 5.30 95,400 664 Tons Tons $/Ton
#"Walnuts" 1.76 492,000 3,710 #tones Tons $/Ton
#"Pecans" 1,750 5,000 2.06 Pounds 1000pounds $/Pound
"Oth Fld":avg([1296.00* 27.1, 17.00*37.56]),
# sunflowers 1,296.00 751,500 27.1 Tons Tons $/Ton
# Sorghum2009 17.00 646,000 37.56 Tons Tons $/Ton
"Oth Trk":avg([320*29.60, 350*24.90, 32*152.00, 180*42.70, 107*248.00,425*41.70,385* 38.70 ,165*42.10,405*21.70 ]),
#"Carrots" 320 20,000,000 29.60 Cwt. Cwt. $/Cwt.
#"Lettuce" 350 33,600,000 24.90 Cwt. Cwt. $/Cwt.
#"Asparagus" 32 368,000 152.00 Cwt. Cwt. $/Cwt.
#"Cauliflower" 180 5,868,000 42.70 Cwt. Cwt. $/Cwt.
# berries 107 514,000 248.00 Cwt. Cwt. $/Cwt.
# "Peppers Bell", 425 8,465,000 41.70 Cwt. Cwt. $/Cwt.
# pepers Chile 385 2,640,000 38.70 Cwt. Cwt. $/Cwt.
# "Broccoli", 165 20,460,000 42.10 8 Cwt. Cwt. $/Cwt.
# "Cabbage", 405 5,670,000 21.70 Cwt. Cwt. $/Cwt.
"Pasture":0,
"Potato":425*17.1, # Cwt. Cwt. $/Cwt.
"Pro Tom":300*36.20, # Cwt. Cwt. $/Cwt
"Rice":84.80*20.9, # Cwt. Cwt. $/Cwt
"Safflwr": 2000.00*26.5, # Pounds Cwt. $/Cwt.
"SgrBeet": 43.40*52.1, # Tons Tons $/Ton
"Subtrop":avg([622*6.52,4.15*813 ]),
# orange 622 109000000 6.52
# Olives 4.15 166000 813 Tons Tons $/Ton
"Vine":900*5.07}# Cartons 3/ Cartons $/Carton
Econ_dict
```
Find the 33rd and 66th percentiles of the water usage.
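A percentile split like this can be sketched with pandas' `quantile` and `cut` (the `water_usage` series below is a made-up stand-in for `newtable["WaterUsage"]`, not real data):

```python
import pandas as pd

# Hypothetical stand-in for newtable["WaterUsage"] from the merged table above
water_usage = pd.Series([1.2, 3.4, 2.2, 5.1, 0.8, 4.4, 2.9, 3.7])

# The 33rd and 66th percentiles split usage into three roughly equal groups
low_cut, high_cut = water_usage.quantile([0.33, 0.66])

# Label each observation with the tercile it falls into
labels = pd.cut(
    water_usage,
    bins=[-float("inf"), low_cut, high_cut, float("inf")],
    labels=["low", "medium", "high"],
)
print(low_cut, high_cut)
print(labels.value_counts())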
# Session 3
---
```
import numpy as np
ar = np.arange(3, 32)
ar
```
np.any
np.all
np.where
```
help(np.where)
ar
np.where(ar < 10, 15, 18)
```
```python
x if condition else y
```
```
np.where(ar < 10, 'Y', 'N')
np.where(ar < 10, 'Y', 18)
np.where(ar < 10, 15)
np.where(ar < 10)
type(np.where(ar < 10))
help(np.any)
np.any([0, 1, 1, 1, 0])
np.any([False, True, False, True, True])
np.any([[True, False], [True, False]])
np.any([[True, True], [False, False]])
np.any([[True, True], [False, False]], axis=1)
np.any([[True, True], [False, False]], axis=0)
np.all([[True, False], [True, False]])
np.all([[True, False], [True, False]], axis=0)
np.all([[True, False], [True, False]], axis=1)
np.all(~np.array([False, False, False]))
not np.all(np.array([False, False, False]))
~np.array([False, False, False])
np.all(np.array([False, True, False]))
not np.all(np.array([False, True, False]))
np.all(~np.array([False, True, False]))
np.all(not np.array([False, True, False]))
~np.all(np.array([False, False, False]))
ar2 = np.arange(0, 12).reshape((3, 4))
ar2
ar2.sum()
ar2.sum(axis=1)
ar2.sum(axis=0)
ar3 = np.arange(0,12).reshape((2,2,3))
ar3
ar3.sum(axis=0)
ar3.sum(axis=1)
ar3.sum(axis=2)
ar4 = np.arange(0, 16).reshape((2,2,2,2))
ar4
ar4.sum(axis=0)
ar4.sum(axis=1)
ar4.sum(axis=2)
ar4
ar4.sum(axis=3)
ar4.sum(axis=3).shape
np.add.reduce(np.array([1, 2, 3]))
np.add(np.array([1, 2, 3]), 2)
help(np.prod)
help(np.log10)
np.info(np.log10)
np.log10([567])
b = np.float64(567)
b.log10
np.log10(1287648359798234792387492384923849243)
np.log10(np.float64(1287648359798234792387492384923849243))
np.int64
np.log10(np.float64(29348792384921384792384921387492834928374928734928734928734928734928734987234987234987239487293487293487293847))
def add_numbers(x):
    return x + x
(lambda x: x + x)(2)
add_two_numbers = lambda x: x + x
add_two_numbers(2)
np.sin(np.array([2, 3]))
add_three = lambda x: x + 3
add_three(np.array([2, 3]))
add_number = lambda x, y: x + y
add_number(np.array([2, 3]), 5)
```
### Notes
NumPy has a __vectorize__ function, similar to Python's built-in __map__ function
```python
import numpy as np
vfunc = np.vectorize(lambda x, y: x % y == 0)
vfunc(np.array([2, 3, 4, 5, 6, 7]), 2)
>>> np.array([True, False, True, False, True, False])
```
In the example above, `vfunc` returns an array; *`vfunc` behaves much like `np.sin`/`np.cos`, etc.*
```python
def sum(a, b):
    return a + b
lambda a, b: a + b
```
**reduce**
```python
np.add.reduce(np.array([1, 2, 3]))
>>> 6
```
### Find prime numbers between 1 and 1000 (use only numpy arrays, slicing, masking, etc.)
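One possible solution sketch (a Sieve of Eratosthenes built only from array slicing and boolean masking):

```python
import numpy as np

# Sieve of Eratosthenes using only array slicing and masking
n = 1000
is_prime = np.ones(n + 1, dtype=bool)
is_prime[:2] = False                 # 0 and 1 are not prime
for p in range(2, int(n ** 0.5) + 1):
    if is_prime[p]:
        is_prime[p * p::p] = False   # cross out multiples of p starting at p^2
primes = np.nonzero(is_prime)[0]

print(primes[:10])   # the first ten primes
print(len(primes))   # → 168
```

There are 168 primes below 1000; only indices up to $\sqrt{n}$ need to be sieved, since any composite below $n$ has a factor no larger than that.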
# NSCI 801 - Quantitative Neuroscience
## Reproducibility, reliability, validity
Gunnar Blohm
### Outline
* statistical considerations
* multiple comparisons
* exploratory analyses vs hypothesis testing
* Open Science
* general steps toward transparency
* pre-registration / registered report
* Open science vs. patents
### Multiple comparisons
In [2009, Bennett et al.](https://teenspecies.github.io/pdfs/NeuralCorrelates.pdf) studied the brain of a salmon using fMRI and found significant activation despite the salmon being dead... (Ig Nobel Prize 2012)
Why did they find this?
They imaged 140 volumes (samples) of the brain and ran a standard preprocessing pipeline, including spatial realignment, co-registration of functional and anatomical volumes, and 8 mm full-width at half maximum (FWHM) Gaussian smoothing.
They computed voxel-wise statistics.
<img style="float: center; width:750px;" src="stuff/salmon.png">
This is a prime example of what's known as the **multiple comparison problem**!
“the problem that occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values” (Wikipedia)
* problem that arises when implementing a large number of statistical tests in the same experiment
* the more tests we do, the higher the probability of obtaining at least one test with statistical significance
### Probability(false positive) = f(number comparisons)
If you repeat a statistical test over and over again, the false positive ($FP$) rate ($P$) evolves as follows:
$$P(FP)=1-(1-\alpha)^N$$
* $\alpha$ is the confidence level for each individual test (e.g. 0.05)
* $N$ is the number of comparisons
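Plugging a few values of $N$ into this formula shows how quickly the false-positive probability grows:

```python
import numpy as np

alpha = 0.05  # per-test confidence level
Ns = np.array([1, 10, 50, 100])

# Closed-form probability of at least one false positive among N independent tests
p_fp = 1 - (1 - alpha) ** Ns
for n, p in zip(Ns, p_fp):
    print(f"N = {int(n):3d}: P(at least one FP) = {p:.3f}")
```

With 100 tests at $\alpha = 0.05$, the probability of at least one false positive already exceeds 99%.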
Let's see how this works...
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
plt.style.use('dark_background')
```
Let's create some random data...
```
rvs = stats.norm.rvs(loc=0, scale=10, size=1000)
sns.displot(rvs)
```
Now let's run a t-test to see if it's different from 0
```
statistic, pvalue = stats.ttest_1samp(rvs, 0)
print(pvalue)
```
Now let's do this many times for different samples, e.g. different voxels of our salmon...
```
def t_test_function(alp, N):
    """Computes t-test statistics on N random samples and returns the number of significant tests"""
    counter = 0
    for i in range(N):
        rvs = stats.norm.rvs(loc=0, scale=10, size=1000)
        statistic, pvalue = stats.ttest_1samp(rvs, 0)
        if pvalue <= alp:
            counter = counter + 1
    print(counter)
    return counter
N = 100
counter = t_test_function(0.05, N)
print("The false positive rate was", counter/N*100, "%")
```
Well, we wanted an $\alpha=0.05$, so what's the problem?
The problem is that we have hugely increased the likelihood of finding something significant by chance! (**p-hacking**)
Take the above example:
* running 100 independent tests with $\alpha=0.05$ resulted in a few positives
* well, that's good, right? Now we can see if there is a story here we can publish...
* dead salmon!
* remember, our data was just noise!!! There was NO signal!
This is why we have corrections for multiple comparisons that adjust the p-value so that the **overall chance** to find a false positive stays at $\alpha$!
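The simplest such correction is the Bonferroni correction, which tests each comparison at $\alpha/N$; a quick sketch repeating the noise simulation above with the corrected threshold (the seed is arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 100
alpha = 0.05
alpha_bonferroni = alpha / N  # Bonferroni: each individual test uses a stricter threshold

false_positives = 0
for _ in range(N):
    sample = rng.normal(loc=0, scale=10, size=1000)  # pure noise, as above
    _, pvalue = stats.ttest_1samp(sample, 0)
    if pvalue <= alpha_bonferroni:
        false_positives += 1

print(false_positives)  # almost always 0: the family-wise error rate is held near alpha
```

Bonferroni is conservative; less strict alternatives (Holm, Benjamini-Hochberg) exist, but the idea is the same: keep the **overall** false-positive chance at $\alpha$.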
Why does this matter?
### Exploratory analyses vs hypothesis testing
Why do we distinguish between them?
<img style="float: center; width:750px;" src="stuff/ExploreConfirm1.png">
But in science, confirmatory analyses that are hypothesis-driven are often much more valued.
There is a temptation to frame *exploratory* analyses as *confirmatory*...
**This leads to disaster!!!**
* science is not solid
* replication crisis (psychology, social science, medicine, marketing, economics, sports science, etc, etc...)
* shaken trust in science
<img style="float: center; width:750px;" src="stuff/crisis.jpeg">
([Baker 2016](https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970))
### Quick excursion: survivorship bias
"Survivorship bias or survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility." (Wikipedia)
<img style="float: center; width:750px;" src="stuff/SurvivorshipBias.png">
**How does survivorship bias affect neuroscience?**
Think about it...
E.g.
* people select neurons to analyze
* profs say it's absolutely achievable to become a prof
Just keep it in mind...
### Open science - transparency
Open science can hugely help increase transparency in many different ways, so that findings and data can be evaluated for what they are:
* publish data acquisition protocol and code: increases data reproducibility & credibility
* publish data: data get second, third, etc... lives
* publish data processing / analyses: increases reproducibility of results
* publish figures code and stats: increases reproducibility and credibility of conclusions
* pre-register hypotheses and analyses: ensures *confirmatory* analyses are not *exploratory* (HARKing)
For more info, see NSCI800 lectures about Open Science: [OS1](http://www.compneurosci.com/NSCI800/OpenScienceI.pdf), [OS2](http://www.compneurosci.com/NSCI800/OpenScienceII.pdf)
### Pre-registration / registered reports
<img style="float:right; width:500px;" src="stuff/RR.png">
* IPA guarantees publication
* If original methods are followed
* Main conclusions need to come from originally proposed analyses
* Does not prevent exploratory analyses
* Need to be labeled as such
[https://Cos.io/rr](https://Cos.io/rr)
Please follow the **Stage 1** instructions of [the registered report instructions from eNeuro](https://www.eneuro.org/sites/default/files/additional_assets/pdf/eNeuro%20Registered%20Reports%20Author%20Guidelines.pdf) for the course evaluation...
Questions???
### Open science vs. patents
The goal of Open Science is to share all aspects of research with the public!
* because knowledge should be freely available
* because the public paid for the science to happen in the first place
However, this prevents the patenting of scientific results!
* this is good for science, because patents obstruct research
* prevents full privatization of research: research driven by companies is biased by private interests
Turns out open science is good for business!
* more people contribute
* wider adoption
* e.g. Github = Microsoft, Android = Google, etc
* better for society
* e.g. nonprofit pharma
**Why are patents still a thing?**
Well, some people think it's an outdated and morally corrupt concept.
* goal: maximum profit
* enabler: capitalism
* victims: general public
Think about it and decide for yourself what to do with your research!!!
### THANK YOU!!!
<img style="float:center; width:750px;" src="stuff/empower.jpg">
<a href="https://colab.research.google.com/github/krmiddlebrook/intro_to_deep_learning/blob/master/machine_learning/mini_lessons/image_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Processing Image Data
Computer vision is a field of machine learning that trains computers to interpret and understand the visual world. It is one of the most popular fields in deep learning (neural networks). In computer vision, it is common to use digital images from cameras and videos to train models to accurately identify and classify objects.
Before we can solve computer vision tasks, it is important to understand how to handle image data. To this end, we will demonstrate how to process (prepare) image data for machine learning models.
We will use the MNIST digits dataset, which is provided by Keras Datasets--a collection of ready-to-use datasets for machine learning. All datasets are available through the `tf.keras.datasets` API endpoint.
Here is the lesson roadmap:
- Load the dataset
- Visualize the data
- Transform the data
- Normalize the data
```
# TensorFlow and tf.keras and TensorFlow datasets
import tensorflow as tf
from tensorflow import keras
# Commonly used modules
import numpy as np
# Images, plots, display, and visualization
import matplotlib.pyplot as plt
```
# Load the dataset
When we want to solve a problem with machine learning methods, the first step is almost always to find a good dataset. As we mentioned above, we will retrieve the MNIST dataset using the `tf.keras.datasets` module.
The MNIST dataset contains 70k grayscale images of handwritten digits (i.e., numbers between 0 and 9). Let's load the dataset into our notebook.
```
# the data, split between train and test sets
(train_features, train_labels), (test_features, test_labels) = keras.datasets.mnist.load_data()
print(f"training set shape: {train_features.shape}")
print(f"test set shape: {test_features.shape}")
print(f'dtypes of training and test set tensors: {train_features.dtype}, {test_features.dtype}')
```
We see that Keras Datasets takes care of most of the processing we need to do. The `train_features` object tells us that there are 60k training images, and `test_features` indicates there are 10k test images, so 70k total. We also see that the images are tensors of shape ($28 \times 28$) with integers of type uint8.
## Visualize the dataset
Now that we have the dataset, let's visualize some samples.
We will use the matplotlib plotting framework to display the images. Here are the first 5 images in the training dataset.
```
plt.figure(figsize=(10, 10))
for i in range(5):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(train_features[i], cmap=plt.cm.binary)
    plt.title(int(train_labels[i]))
    plt.axis("off")
```
The above images give us a sense of the data, including samples belonging to different classes.
# Transforming the data
Before we start transforming data, let's discuss *tensors*--a key part of the machine learning (ML) process, particularly for deep learning methods.
As we learned in previous lessons, data, whether it be categorical or numerical in nature, is converted to a numerical representation. This process makes the data useful for machine learning models. In deep learning (neural networks), the numerical data is often stored in objects called *tensors*. A tensor is a container that can house data in $N$ dimensions. ML researchers sometimes use the term "tensor" and "matrix" interchangeably because a matrix is a 2-dimensional tensor. But, tensors are generalizations of matrices to $N$-dimensional space.
<figure>
<img src='https://www.kdnuggets.com/wp-content/uploads/scalar-vector-matrix-tensor.jpg' width='75%'>
<figcaption>A scalar, vector ($2 \times 1$), matrix ($2 \times 2$), and tensor ($2 \times 2 \times 2$).</figcaption>
</figure>
```
# a (2 x 2 x 2) tensor
my_tensor = np.array([
[[1, 2], [3, 2]],
[[1, 7],[5, 4]]
])
print('my_tensor shape:', my_tensor.shape)
```
Now let's discuss how images are stored in tensors. Computer screens are composed of pixels. Each pixel generates three colors of light (red, green, and blue) and the different colors we see are due to different combinations and intensities of these three primary colors.
<figure>
<img src='https://www.chem.purdue.edu/gchelp/cchem/RGBColors/BlackWhiteGray.gif' width='75%'>
<figcaption>The colors black, white, and gray with a sketch of a pixel from each.</figcaption>
</figure>
We use tensors to store the pixel intensities for a given image. Colorized pictures have 3 different *channels*. Each channel contains a matrix that represents the intensity values of the pixels for a particular color (red, green, and blue; RGB for short). For instance, consider a small colorized $28 \times 28$ pixel image of a dog. Because the dog image is colorized, it has 3 channels, so its tensor shape is ($28 \times 28 \times 3$).
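A minimal sketch of such a tensor (the all-red "image" below is a made-up placeholder, not real data):

```python
import numpy as np

# A hypothetical 28 x 28 RGB image: one 28 x 28 intensity matrix per color channel
dog_image = np.zeros((28, 28, 3), dtype=np.uint8)
dog_image[:, :, 0] = 255  # fill the red channel with full intensity

print(dog_image.shape)  # → (28, 28, 3)
print(dog_image[0, 0])  # one pixel's [R, G, B] intensities
```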
Let's have a look at the shape of the images in the MNIST dataset.
```
train_features[0, :, :].shape
```
Using the `train_features.shape` attribute, we can extract the image shape and see that the images are tensors of shape $28 \times 28$. The returned shape has no 3rd dimension, which indicates that we are working with grayscale images. By grayscale, we mean the pixels don't have intensities for red, green, and blue channels but rather for one grayscale channel, which describes an image using combinations of various shades of gray. Pixel intensities range between $0$ and $255$; in our case, they correspond to black ($0$) through white ($255$).
Now let's reshape the images into $784 \times 1$ dimensional tensors. We call converting an image into an $n \times 1$ tensor "flattening" the tensor.
```
# get a subset of 5 images from the dataset
original_shape = train_features.shape
# Flatten the images.
input_shape = (-1, 28*28)
train_features = train_features.reshape(input_shape)
test_features = test_features.reshape(input_shape)
print(f'original shape: {original_shape}, flattened shape: {train_features.shape}')
```
We flattened all the images using the NumPy `reshape` method. Since one shape dimension can be -1, and we may not always know the number of samples in the dataset, we used $(-1, 784)$ as the parameters to `reshape`. In our example, this means that each $28 \times 28$ image gets flattened into a $28 \cdot 28 = 784$ feature array. The images are then stacked (because of the -1) to produce a final large tensor with shape $(\text{num samples}, 784)$.
# Normalize the data
Another important transformation technique is *normalization*. We normalize data before training the model with it to encourage the model to learn generalizable features, which should lead to better results on unseen data.
At a high level, normalization makes the data more, well...normal. There are various ways to normalize data. Perhaps the most common normalization approach for image data is to subtract the mean pixel value and divide by the standard deviation (this method is applied to every pixel).
Before we can do any normalization, we have to cast the "uint8" tensors to the "float32" numeric type.
```
# convert to float32 type
train_features = train_features.astype('float32')
test_features = test_features.astype('float32')
```
Now we can normalize the data. We should mention that you always use the training set data to calculate normalization statistics like the mean, standard deviation, etc. Consequently, the test set is always normalized with the training set statistics.
```
# normalize the reshaped images
mean = train_features.mean()
std = train_features.std()
train_features -= mean
train_features /= std
test_features -= mean
test_features /= std
print(f'pre-normalization mean and std: {round(mean, 4)}, {round(std, 4)}')
print(f'normalized images mean and std: {round(train_features.mean(), 4)}, {round(train_features.std(), 4)}')
```
As the output above indicates, the normalized pixel values are now centered around 0 (i.e., mean = 0) and have a standard deviation of 1.
# Summary
In this lesson we learned:
- Keras offers ready-to-use datasets.
- Images are represented by *tensors*
- Tensors can be transformed (reshaped) and normalized easily using NumPy (or any other frameworks that enable tensor operations).
```
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Evaluation
Evaluation with offline metrics is pivotal to assess the quality of a recommender before it goes into production. Usually, evaluation metrics are carefully chosen based on the actual application scenario of a recommendation system. It is hence important for data scientists and AI developers who build recommendation systems to understand how each evaluation metric is calculated and what it is for.
This notebook deep dives into several commonly used evaluation metrics, and illustrates how these metrics are used in practice. The metrics covered in this notebook are merely for off-line evaluations.
## 0 Global settings
Most of the functions used in the notebook can be found in the `reco_utils` directory.
```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import pandas as pd
import pyspark
from sklearn.preprocessing import minmax_scale
from reco_utils.common.spark_utils import start_or_get_spark
from reco_utils.evaluation.spark_evaluation import SparkRankingEvaluation, SparkRatingEvaluation
from reco_utils.evaluation.python_evaluation import auc, logloss
from reco_utils.recommender.sar.sar_singlenode import SARSingleNode
from reco_utils.dataset.download_utils import maybe_download
from reco_utils.dataset.python_splitters import python_random_split
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("PySpark version: {}".format(pyspark.__version__))
```
Note that to successfully run Spark code with the Jupyter kernel, one needs to correctly set the environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` so that they point to Python executables with the desired version. Detailed information can be found in the setup instruction document [SETUP.md](../../SETUP.md).
```
COL_USER = "UserId"
COL_ITEM = "MovieId"
COL_RATING = "Rating"
COL_PREDICTION = "Rating"
HEADER = {
"col_user": COL_USER,
"col_item": COL_ITEM,
"col_rating": COL_RATING,
"col_prediction": COL_PREDICTION,
}
```
## 1 Prepare data
### 1.1 Prepare dummy data
For illustration purposes, a dummy dataset is created to demonstrate how different evaluation metrics work.
The data has a schema frequently found in recommendation problems: each row in the dataset is a (user, item, rating) tuple, where "rating" can be an ordinal rating score (e.g., discrete integers of 1, 2, 3, etc.) or a numerical float that quantitatively indicates the preference of the user towards that item.
For simplicity, the rating column in the dummy dataset used in this example represents ordinal ratings.
```
df_true = pd.DataFrame(
{
COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
COL_ITEM: [1, 2, 3, 1, 4, 5, 6, 7, 2, 5, 6, 8, 9, 10, 11, 12, 13, 14],
COL_RATING: [5, 4, 3, 5, 5, 3, 3, 1, 5, 5, 5, 4, 4, 3, 3, 3, 2, 1],
}
)
df_pred = pd.DataFrame(
{
COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
COL_ITEM: [3, 10, 12, 10, 3, 5, 11, 13, 4, 10, 7, 13, 1, 3, 5, 2, 11, 14],
COL_PREDICTION: [14, 13, 12, 14, 13, 12, 11, 10, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5]
}
)
```
Take a look at ratings of the user with ID "1" in the dummy dataset.
```
df_true[df_true[COL_USER] == 1]
df_pred[df_pred[COL_USER] == 1]
```
### 1.2 Prepare Spark data
The Spark framework is sometimes used to evaluate metrics on datasets that are too large to fit into memory. In our example, Spark DataFrames can be created from the Python dummy dataset.
```
spark = start_or_get_spark("EvaluationTesting", "local")
dfs_true = spark.createDataFrame(df_true)
dfs_pred = spark.createDataFrame(df_pred)
dfs_true.filter(dfs_true[COL_USER] == 1).show()
dfs_pred.filter(dfs_pred[COL_USER] == 1).show()
```
## 2 Evaluation metrics
### 2.1 Rating metrics
Rating metrics are similar to regression metrics used for evaluating a regression model that predicts numerical values given input observations. In the context of recommendation system, rating metrics are to evaluate how accurate a recommender is to predict ratings that users may give to items. Therefore, the metrics are **calculated exactly on the same group of (user, item) pairs that exist in both ground-truth dataset and prediction dataset** and **averaged by the total number of users**.
#### 2.1.1 Use cases
Rating metrics are effective in measuring the model accuracy. However, in some cases, the rating metrics are limited if
* **the recommender is to predict ranking instead of explicit rating**. For example, if the consumer of the recommender cares about the ranked recommended items, rating metrics do not apply directly. Usually a relevancy function such as top-k will be applied to generate the ranked list from predicted ratings in order to evaluate the recommender with other metrics.
* **the recommender is to generate recommendation scores that have different scales with the original ratings (e.g., the SAR algorithm)**. In this case, the difference between the generated scores and the original scores (or, ratings) is not valid for measuring accuracy of the model.
#### 2.1.2 How-to with the evaluation utilities
A few notes about the interface of the Rating evaluator class:
1. The columns of user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame).
2. There should be no duplicates of (user, item) pairs in the ground-truth and the prediction DataFrames, otherwise there may be unexpected behavior in calculating certain metrics.
3. Default column names for user, item, rating, and prediction are "UserId", "ItemId", "Rating", and "Prediction", respectively.
In our examples below, to calculate rating metrics for input DataFrames in Spark, a `SparkRatingEvaluation` object is initialized. The input data schemas for the ground-truth dataset and the prediction dataset are
* Ground-truth dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Rating or numerical value of user preference.|
* Prediction dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Predicted rating or numerical value of user preference.|
```
spark_rate_eval = SparkRatingEvaluation(dfs_true, dfs_pred, **HEADER)
```
#### 2.1.3 Root Mean Square Error (RMSE)
RMSE is for evaluating the accuracy of prediction on ratings. RMSE is the most widely used metric to evaluate a recommendation algorithm that predicts missing ratings. The benefit is that RMSE is easy to explain and calculate.
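For intuition, RMSE can also be computed directly with NumPy (a toy sketch on made-up ratings, independent of the `SparkRatingEvaluation` utility used below):

```python
import numpy as np

# Toy ground-truth ratings and predictions for five (user, item) pairs
y_true = np.array([5.0, 4.0, 3.0, 5.0, 1.0])
y_pred = np.array([4.5, 4.0, 2.0, 5.0, 2.0])

# Root of the mean squared difference between predictions and ground truth
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(round(rmse, 4))  # → 0.6708
```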
```
print("The RMSE is {}".format(spark_rate_eval.rmse()))
```
#### 2.1.4 R Squared (R2)
R2 is also called the "coefficient of determination" in some contexts. It is a metric that evaluates how well a regression model performs, based on the proportion of total variation of the observed results that is explained by the model.
```
print("The R2 is {}".format(spark_rate_eval.rsquared()))
```
#### 2.1.5 Mean Absolute Error (MAE)
MAE evaluates accuracy of prediction. It computes the metric value from ground truths and prediction in the same scale. Compared to RMSE, MAE is more explainable.
```
print("The MAE is {}".format(spark_rate_eval.mae()))
```
#### 2.1.6 Explained Variance
Explained variance is usually used to measure how well a model performs with regard to the impact from the variation of the dataset.
```
print("The explained variance is {}".format(spark_rate_eval.exp_var()))
```
#### 2.1.7 Summary
|Metric|Range|Selection criteria|Limitation|Reference|
|------|-------------------------------|---------|----------|---------|
|RMSE|$> 0$|The smaller the better.|May be biased, and less explainable than MAE.|[link](https://en.wikipedia.org/wiki/Root-mean-square_deviation)|
|R2|$\leq 1$|The closer to $1$ the better.|Depend on variable distributions.|[link](https://en.wikipedia.org/wiki/Coefficient_of_determination)|
|MAE|$\geq 0$|The smaller the better.|Dependent on variable scale.|[link](https://en.wikipedia.org/wiki/Mean_absolute_error)|
|Explained variance|$\leq 1$|The closer to $1$ the better.|Depend on variable distributions.|[link](https://en.wikipedia.org/wiki/Explained_variation)|
### 2.2 Ranking metrics
"Beyond-accuracy evaluation" was proposed to evaluate how relevant recommendations are for users. In this case, a recommendation system is treated as a ranking system. Given a relevancy definition, the recommendation system outputs a list of recommended items to each user, ordered by relevance. The evaluation takes the ground-truth data (the actual items that users interacted with, e.g., liked, purchased, etc.) and the recommendation data as inputs to calculate ranking evaluation metrics.
#### 2.2.1 Use cases
Ranking metrics are often used when hit and/or ranking of the items are considered:
* **Hit** - defined by relevancy, a hit means that one of the recommended "k" items appears among the "relevant" items for the user. For example, a user may have clicked, viewed, or purchased an item many times, and a hit in the recommended items indicates that the recommender performs well. Metrics like "precision", "recall", etc. measure such hitting accuracy.
* **Ranking** - ranking metrics explain, for the hit items, whether they are ranked in a way that is preferred by the users to whom the items will be recommended. Metrics like "mean average precision", "ndcg", etc., evaluate whether the relevant items are ranked higher than the less-relevant or irrelevant items.
#### 2.2.2 How-to with evaluation utilities
A few notes about the interface of the Ranking evaluator class:
1. The columns of user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame). The timestamp column is optional, but it is required if certain relevancy functions are used. For example, timestamps are used if the most recent items are defined as the relevant ones.
2. There should be no duplicates of (user, item) pairs in the ground-truth and the prediction DataFrames, otherwise there may be unexpected behavior in calculating certain metrics.
3. Default column names for user, item, rating, and prediction are "UserId", "ItemId", "Rating", and "Prediction", respectively.
#### 2.2.1 Relevancy of recommendation
Relevancy of recommendation can be measured in different ways:
* **By ranking** - In this case, relevant items in the recommendations are defined as the top ranked items, i.e., top k items, which are taken from the list of the recommended items that is ordered by the predicted ratings (or other numerical scores that indicate preference of a user to an item).
* **By timestamp** - Relevant items are defined as the most recently viewed k items, which are obtained from the recommended items ranked by timestamps.
* **By rating** - Relevant items are defined as items with ratings (or other numerical scores that indicate preference of a user to an item) that are above a given threshold.
Similarly, a ranking metric object can be initialized as below. The input data schema is
* Ground-truth dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Rating or numerical value of user preference.|
|`COL_TIMESTAMP`|<string\>|Timestamps.|
* Prediction dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Predicted rating or numerical value of user preference.|
|`COL_TIMESTAMP`|<string\>|Timestamps.|
In this case, in addition to the input datasets, there are also other arguments used for calculating the ranking metrics:
|Argument|Data type|Description|
|------------|------------|--------------|
|`k`|<int\>|Number of items recommended to user.|
|`relevancy_method`|<string\>|Method that extracts relevant items from the recommendation list.|
For example, the following code initializes a ranking metric object that calculates the metrics.
```
spark_rank_eval = SparkRankingEvaluation(dfs_true, dfs_pred, k=3, relevancy_method="top_k", **HEADER)
```
A few ranking metrics can then be calculated.
#### 2.2.2 Precision
Precision@k is a metric that evaluates how many items in the recommendation list are relevant (hits) in the ground-truth data. For each user the precision score is normalized by `k`, and then the overall precision scores are averaged over the total number of users.
Note that the value of precision@k depends on the choice of `k`: because each user's score is normalized by `k`, increasing `k` lowers the metric unless additional hits are found further down the list.
```
print("The precision at k is {}".format(spark_rank_eval.precision_at_k()))
```
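For intuition, the per-user computation can be sketched without any framework. This is an illustrative toy example — the helper name `precision_at_k` and the item lists are hypothetical, and the actual `SparkRankingEvaluation` implementation may differ in details:

```python
# Illustrative precision@k for a single user (hypothetical helper, not the Spark API).

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are hits in the ground truth."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

recommended = ["A", "B", "C", "D"]  # ranked by predicted score
relevant = {"A", "C", "E"}          # ground-truth items for this user

print(precision_at_k(recommended, relevant, k=3))  # 2 hits in the top 3 -> 0.666...
```

The overall metric then averages this per-user score over all users.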
#### 2.2.3 Recall
Recall@k is a metric that evaluates how many relevant items in the ground-truth data appear in the recommendation list. For each user the recall score is normalized by the total number of ground-truth items, and then the overall recall scores are averaged over the total number of users.
```
print("The recall at k is {}".format(spark_rank_eval.recall_at_k()))
```
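The per-user recall computation can be sketched the same way (again, the helper name and toy data are hypothetical, not the Spark implementation):

```python
# Illustrative recall@k for a single user (hypothetical helper, not the Spark API).

def recall_at_k(recommended, relevant, k):
    """Fraction of the ground-truth items that appear in the top-k recommendations."""
    top_k = set(recommended[:k])
    hits = len(top_k & set(relevant))
    return hits / len(relevant)

recommended = ["A", "B", "C", "D"]
relevant = {"A", "C", "E"}

print(recall_at_k(recommended, relevant, k=3))  # 2 of the 3 relevant items found -> 0.666...
```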
#### 2.2.4 Normalized Discounted Cumulative Gain (NDCG)
NDCG is a metric that evaluates how well the recommender ranks the recommended items. Therefore, both hitting relevant items and ranking them correctly matter to the NDCG evaluation. The total NDCG score is normalized by the total number of users.
```
print("The ndcg at k is {}".format(spark_rank_eval.ndcg_at_k()))
```
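With binary relevance, NDCG@k can be sketched in a few lines. The helper names and toy data below are hypothetical, and Spark's implementation may use slightly different gain/discount conventions:

```python
import math

# Illustrative NDCG@k with binary relevance (hypothetical helpers).

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: gains are discounted by log2 of the rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(recommended, relevant, k):
    """DCG of the actual ranking, normalized by the ideal (best possible) DCG."""
    gains = [1 if item in relevant else 0 for item in recommended]
    idcg = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / idcg if idcg > 0 else 0.0

print(ndcg_at_k(["A", "B", "C", "D"], {"A", "C"}, k=3))  # relevant item at rank 3 is discounted
print(ndcg_at_k(["A", "C", "B", "D"], {"A", "C"}, k=3))  # ideal ordering -> 1.0
```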
#### 2.2.5 Mean Average Precision (MAP)
MAP is a metric that evaluates the average precision for each user in the dataset; it also penalizes ranking errors in the recommended items. The overall MAP score is normalized by the total number of users.
```
print("The map at k is {}".format(spark_rank_eval.map_at_k()))
```
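The average precision for a single user can be sketched as follows; MAP is then the mean of this quantity across users. The helper name and toy data are hypothetical:

```python
# Illustrative average precision@k for one user (hypothetical helper).

def average_precision_at_k(recommended, relevant, k):
    """Average of precision@i over the ranks i at which a relevant item is hit."""
    hits, score = 0, 0.0
    for i, item in enumerate(recommended[:k]):
        if item in relevant:
            hits += 1
            score += hits / (i + 1)
    return score / min(len(relevant), k) if relevant else 0.0

print(average_precision_at_k(["A", "B", "C", "D"], {"A", "C"}, k=3))  # (1/1 + 2/3) / 2
```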
#### 2.2.6 ROC and AUC
ROC, along with AUC, is a well-known metric for evaluating binary classification problems. It applies similarly to recommendation algorithms with binary ratings, where the "hit" accuracy on relevant items measures the recommender's performance.
To demonstrate the evaluation method, the original testing data is manipulated so that the ratings become binary scores, while the predictions are scaled to the range [0, 1].
```
# Convert the original rating to 0 and 1.
df_true_bin = df_true.copy()
df_true_bin[COL_RATING] = df_true_bin[COL_RATING].apply(lambda x: 1 if x > 3 else 0)
df_true_bin
# Convert the predicted ratings into a [0, 1] scale.
df_pred_bin = df_pred.copy()
df_pred_bin[COL_PREDICTION] = minmax_scale(df_pred_bin[COL_PREDICTION].astype(float))
df_pred_bin
# Calculate the AUC metric
auc_score = auc(
    df_true_bin,
    df_pred_bin,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION,
)
print("The auc score is {}".format(auc_score))
```
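AUC has a useful pairwise-ranking interpretation that can be sketched without any library. The `auc_score` helper and toy data below are hypothetical, and the `auc` utility used above may compute the metric differently, e.g., from the ROC curve:

```python
# Illustrative AUC via its pairwise-ranking interpretation (hypothetical helper).

def auc_score(labels, scores):
    """Probability that a random positive is scored above a random negative;
    tied scores count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.7, 0.4, 0.6, 0.3]
print(auc_score(labels, scores))  # 4 of the 6 positive/negative pairs ordered correctly
```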
It is worth mentioning that the literature contains variants of the original AUC metric that consider the effect of **the number of recommended items (k)** or **user grouping (computing AUC for each user group and averaging across groups)**. These variants are applicable to different scenarios, and choosing an appropriate one depends on the context of the use case.
#### 2.2.7 Logistic loss
Logistic loss (often simply called logloss, or cross-entropy loss) is another useful metric for evaluating hit accuracy. It is defined as the negative log-likelihood of the true labels given the predictions of a classifier.
```
# Calculate the logloss metric
logloss_score = logloss(
    df_true_bin,
    df_pred_bin,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION,
)
print("The logloss score is {}".format(logloss_score))
```
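The negative log-likelihood definition can be made concrete with a small sketch. The helper name `logloss_score` and the toy values are hypothetical (the `logloss` utility above operates on DataFrames):

```python
import math

# Illustrative logloss: negative average log-likelihood of binary labels
# under the predicted probabilities (hypothetical helper).

def logloss_score(y_true, y_pred, eps=1e-15):
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

print(logloss_score([1, 0, 1, 1], [0.9, 0.1, 0.8, 0.6]))  # confident and correct -> small loss
print(logloss_score([1, 0], [0.1, 0.9]))                  # confident and wrong -> large loss
```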
It is worth noting that logloss can be sensitive to the class balance of a dataset, as it heavily penalizes classifiers that are confident about incorrect classifications. To demonstrate, the ground-truth testing data is purposely manipulated to unbalance the binary labels. For example, the following binarizes the original rating data with a lower threshold, i.e., 2, to create more positive feedback from the user.
```
df_true_bin_pos = df_true.copy()
df_true_bin_pos[COL_RATING] = df_true_bin_pos[COL_RATING].apply(lambda x: 1 if x > 2 else 0)
df_true_bin_pos
```
With a threshold of 2, the labels in the ground-truth data are not balanced; the ratio of label 1 to label 0 is
```
one_zero_ratio = df_true_bin_pos[COL_RATING].sum() / (df_true_bin_pos.shape[0] - df_true_bin_pos[COL_RATING].sum())
print('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))
```
Another prediction dataset is also created, where the probabilities for label 1 and label 0 are fixed. Without loss of generality, the probability of predicting 1 is 0.6. The dataset is purposely constructed so that the precision is 100%, given a presumed cut-off of 0.5.
```
prob_true = 0.6
df_pred_bin_pos = df_true_bin_pos.copy()
df_pred_bin_pos[COL_PREDICTION] = df_pred_bin_pos[COL_RATING].apply(lambda x: prob_true if x==1 else 1-prob_true)
df_pred_bin_pos
```
Then the logloss is calculated as follows.
```
# Calculate the logloss metric
logloss_score_pos = logloss(
    df_true_bin_pos,
    df_pred_bin_pos,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION,
)
print("The logloss score is {}".format(logloss_score_pos))
```
For comparison, a similar process is used with a threshold value of 3 to create a more balanced dataset. Another prediction dataset is also created from the balanced dataset. Again, the probabilities of predicting label 1 and label 0 are fixed at 0.6 and 0.4, respectively. **NOTE**: as above, this prediction also gives us 100% precision; the only difference is the proportion of binary labels.
```
prob_true = 0.6
df_pred_bin_balanced = df_true_bin.copy()
df_pred_bin_balanced[COL_PREDICTION] = df_pred_bin_balanced[COL_RATING].apply(lambda x: prob_true if x==1 else 1-prob_true)
df_pred_bin_balanced
```
The ratio of label 1 and label 0 is
```
one_zero_ratio = df_true_bin[COL_RATING].sum() / (df_true_bin.shape[0] - df_true_bin[COL_RATING].sum())
print('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))
```
It is perfectly balanced.
Applying the logloss function to calculate the metric gives us a more promising result, as shown below.
```
# Calculate the logloss metric
logloss_score = logloss(
    df_true_bin,
    df_pred_bin_balanced,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION,
)
print("The logloss score is {}".format(logloss_score))
```
It can be seen that the score is closer to 0, which by definition means that these predictions are better than the earlier ones, where the binary labels were more imbalanced.
#### 2.2.8 Summary
|Metric|Range|Selection criteria|Limitation|Reference|
|------|-------------------------------|---------|----------|---------|
|Precision|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Only for hits in recommendations.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|Recall|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Only for hits in the ground truth.|[link](https://en.wikipedia.org/wiki/Precision_and_recall)|
|NDCG|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Does not penalize bad/missing items, and does not differentiate between several equally good items.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|MAP|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Depends on variable distributions.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|AUC|$\geq 0$ and $\leq 1$|The closer to $1$ the better. $0.5$ indicates an uninformative classifier.|Depends on the number of recommended items (k).|[link](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)|
|Logloss|$0$ to $\infty$|The closer to $0$ the better.|Logloss can be sensitive to imbalanced datasets.|[link](https://en.wikipedia.org/wiki/Cross_entropy#Relation_to_log-likelihood)|
## References
1. Guy Shani and Asela Gunawardana, "Evaluating Recommendation Systems", Recommender Systems Handbook, Springer, 2015.
2. PySpark MLlib evaluation metrics, url: https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html.
3. Dimitris Paraschakis et al, "Comparative Evaluation of Top-N Recommenders in e-Commerce: An Industrial Perspective", IEEE ICMLA, 2015, Miami, FL, USA.
4. Yehuda Koren and Robert Bell, "Advances in Collaborative Filtering", Recommender Systems Handbook, Springer, 2015.
5. Chris Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
# Overview
## © [Omkar Mehta](omehta2@illinois.edu) ##
### Industrial and Enterprise Systems Engineering, The Grainger College of Engineering, UIUC ###
<hr style="border:2px solid blue"> </hr>
This notebook will show you how to create and query a table or DataFrame that you uploaded to DBFS. [DBFS](https://docs.databricks.com/user-guide/dbfs-databricks-file-system.html) is a Databricks File System that allows you to store data for querying inside of Databricks. This notebook assumes that you have a file already inside of DBFS that you would like to read from.
This notebook is written in **Python** so the default cell type is Python. However, you can use different languages by using the `%LANGUAGE` syntax. Python, Scala, SQL, and R are all supported.
```
# File location and type
file_location = "/FileStore/tables/game_skater_stats.csv"
file_type = "csv"
# CSV options
infer_schema = "false"
first_row_is_header = "false"
delimiter = ","
# The applied options are for CSV files. For other file types, these will be ignored.
df = spark.read.format(file_type) \
.option("inferSchema", infer_schema) \
.option("header", first_row_is_header) \
.option("sep", delimiter) \
.load(file_location)
display(df)
# Create a view or table
temp_table_name = "game_skater_stats_csv"
df.createOrReplaceTempView(temp_table_name)
%sql
/* Query the created temp table in a SQL cell */
select * from `game_skater_stats_csv`
# With this registered as a temp view, it will only be available to this particular notebook. If you'd like other users to be able to query this table, you can also create a table from the DataFrame.
# Once saved, this table will persist across cluster restarts as well as allow various users across different notebooks to query this data.
# To do so, choose your table name and uncomment the bottom line.
permanent_table_name = "game_skater_stats_csv"
# df.write.format("parquet").saveAsTable(permanent_table_name)
```
# Read Data
```
# Load data from a CSV
file_location = "/FileStore/tables/game_skater_stats.csv"
df = spark.read.format("CSV").option("inferSchema", True).option("header", True).load(file_location)
display(df.take(5))
```
## Write Data
```
# Save as CSV and parquet
# DBFS
df.write.save('/FileStore/parquet/game__stats', format='parquet')
# S3
#df.write.parquet("s3a://my_bucket/game_skater_stats", mode="overwrite")
# DBFS
df.write.save('/FileStore/parquet/game__stats.csv', format='csv')
# S3
#df.coalesce(1).write.format("com.databricks.spark.csv")
# .option("header", "true").save("s3a://my_bucket/game_skater_stats.csv")
```
## Transforming Data
```
df.createOrReplaceTempView("stats")
display(spark.sql("""
select player_id, sum(1) as games, sum(goals) as goals
from stats
group by 1
order by 3 desc
limit 5
"""))
# player names
file_location = "/FileStore/tables/player_info.csv"
names = spark.read.format("CSV").option("inferSchema", True).option("header", True).load(file_location)
#display(names)
df.createOrReplaceTempView("stats")
top_players = spark.sql("""
select player_id, sum(1) as games, sum(goals) as goals
from stats
group by 1
order by 3 desc
limit 5
""")
top_players.createOrReplaceTempView("top_players")
names.createOrReplaceTempView("names")
display(spark.sql("""
select p.player_id, goals, firstName, lastName
from top_players p
join names n
on p.player_id = n.player_id
order by 2 desc
"""))
display(spark.sql("""
select cast(substring(game_id, 1, 4) || '-'
|| substring(game_id, 5, 2) || '-01' as Date) as month
, sum(goals)/count(distinct game_id) as goals_per_game
from stats
group by 1
order by 1
"""))
display(spark.sql("""
select cast(goals/shots * 50 as int)/50.0 as Goals_per_shot, sum(1) as Players
from (
select player_id, sum(shots) as shots, sum(goals) as goals
from stats
group by 1
having goals >= 5
)
group by 1
order by 1
"""))
```
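For a small in-memory sample, the grouped aggregation above can also be expressed with the pandas API. The values below are hypothetical stand-ins for the `stats` table:

```python
import pandas as pd

# Toy stand-in for the `stats` table (hypothetical values).
stats = pd.DataFrame({
    "player_id": [1, 1, 2, 2, 2, 3],
    "goals":     [2, 1, 0, 3, 1, 5],
})

# Equivalent of:
#   select player_id, sum(1) as games, sum(goals) as goals
#   from stats group by 1 order by 3 desc
top_players = (stats.groupby("player_id")
                    .agg(games=("goals", "size"), goals=("goals", "sum"))
                    .sort_values("goals", ascending=False)
                    .reset_index())
print(top_players)
```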
## MLlib: Linear Regression
```
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
assembler = VectorAssembler(inputCols=['shots', 'assists', 'penaltyMinutes', 'timeOnIce'], outputCol="features" )
train_df = assembler.transform(df)
lr = LinearRegression(featuresCol = 'features', labelCol='goals')
lr_model = lr.fit(train_df)
trainingSummary = lr_model.summary
print("Coefficients: " + str(lr_model.coefficients))
print("RMSE: %f" % trainingSummary.rootMeanSquaredError)
print("R2: %f" % trainingSummary.r2)
```
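As a sanity check on what `LinearRegression` fits, here is a hedged NumPy sketch that recovers known coefficients from synthetic data via closed-form least squares. The features and their values are hypothetical stand-ins for columns like `shots` and `assists`:

```python
import numpy as np

# Generate data from known coefficients, then recover them in closed form.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w, true_b = np.array([0.3, 0.1]), 0.5
y = X @ true_w + true_b + rng.normal(scale=0.01, size=200)

# Least squares with an explicit intercept column, mirroring y = Xw + b.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # close to [0.3, 0.1, 0.5]
```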
## Pandas UDFs
```
# creating a linear fit for a single player
df.createOrReplaceTempView("stats")
sample_pd = spark.sql("""
select * from stats
where player_id = 8471214
""").toPandas()
from scipy.optimize import leastsq
import numpy as np
def fit(params, x, y):
return (y - (params[0] + x * params[1] ))
result = leastsq(fit, [1, 0], args=(sample_pd.shots, sample_pd.hits))
print(result)
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import *
import pandas as pd
schema = StructType([StructField('ID', LongType(), True),
StructField('p0', DoubleType(), True),
StructField('p1', DoubleType(), True)])
@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def analyze_player(sample_pd):
if (len(sample_pd.shots) <= 1):
return pd.DataFrame({'ID': [sample_pd.player_id[0]], 'p0': [ 0 ], 'p1': [ 0 ]})
result = leastsq(fit, [1, 0], args=(sample_pd.shots, sample_pd.hits))
return pd.DataFrame({'ID': [sample_pd.player_id[0]], 'p0': [result[0][0]], 'p1': [result[0][1]]})
player_df = df.groupby('player_id').apply(analyze_player)
display(player_df.take(5))
```
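The grouped-map Pandas UDF above has a plain-pandas analogue for data that fits in memory. This sketch uses `np.polyfit` in place of `leastsq`; the DataFrame and its values are hypothetical toy data:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the stats DataFrame (hypothetical values).
df_toy = pd.DataFrame({
    "player_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "shots":     [1, 2, 3, 4, 1, 2, 3, 4],
    "hits":      [2, 4, 6, 8, 1, 1, 1, 1],
})

def fit_player(group):
    # Degree-1 fit: hits ~ p1 * shots + p0, analogous to the leastsq model above.
    p1, p0 = np.polyfit(group["shots"], group["hits"], 1)
    return pd.Series({"p0": p0, "p1": p1})

params = df_toy.groupby("player_id")[["shots", "hits"]].apply(fit_player)
print(params)  # player 1 lies on hits = 2*shots; player 2 is constant at 1
```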
# 📃 Solution for Exercise M3.01
The goal is to write an exhaustive search to find the best parameters
combination maximizing the model generalization performance.
Here we use a small subset of the Adult Census dataset to make the code
faster to execute. Once your code works on the small subset, try to
change `train_size` to a larger value (e.g. 0.8 for 80% instead of
20%).
```
import pandas as pd
from sklearn.model_selection import train_test_split
adult_census = pd.read_csv("../datasets/adult-census.csv")
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name, "education-num"])
data_train, data_test, target_train, target_test = train_test_split(
data, target, train_size=0.2, random_state=42)
from sklearn.compose import ColumnTransformer
from sklearn.compose import make_column_selector as selector
from sklearn.preprocessing import OrdinalEncoder
categorical_preprocessor = OrdinalEncoder(handle_unknown="use_encoded_value",
unknown_value=-1)
preprocessor = ColumnTransformer(
[('cat_preprocessor', categorical_preprocessor,
selector(dtype_include=object))],
remainder='passthrough', sparse_threshold=0)
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.pipeline import Pipeline
model = Pipeline([
("preprocessor", preprocessor),
("classifier", HistGradientBoostingClassifier(random_state=42))
])
```
Use the previously defined model (called `model`) and using two nested `for`
loops, make a search of the best combinations of the `learning_rate` and
`max_leaf_nodes` parameters. In this regard, you will need to train and test
the model by setting the parameters. The evaluation of the model should be
performed using `cross_val_score` on the training set. We will use the
following parameters search:
- `learning_rate` for the values 0.01, 0.1, 1 and 10. This parameter controls
the ability of a new tree to correct the error of the previous sequence of
trees
- `max_leaf_nodes` for the values 3, 10, 30. This parameter controls the
depth of each tree.
```
# solution
from sklearn.model_selection import cross_val_score
learning_rate = [0.01, 0.1, 1, 10]
max_leaf_nodes = [3, 10, 30]
best_score = 0
best_params = {}
for lr in learning_rate:
for mln in max_leaf_nodes:
print(f"Evaluating model with learning rate {lr:.3f}"
f" and max leaf nodes {mln}... ", end="")
model.set_params(
classifier__learning_rate=lr,
classifier__max_leaf_nodes=mln
)
scores = cross_val_score(model, data_train, target_train, cv=2)
mean_score = scores.mean()
print(f"score: {mean_score:.3f}")
if mean_score > best_score:
best_score = mean_score
best_params = {'learning-rate': lr, 'max leaf nodes': mln}
print(f"Found new best model with score {best_score:.3f}!")
print(f"The best accuracy obtained is {best_score:.3f}")
print(f"The best parameters found are:\n {best_params}")
```
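The two nested loops are equivalent to iterating over the Cartesian product of the two grids. A framework-free sketch of the same search pattern, with a hypothetical `toy_score` function standing in for the cross-validated score:

```python
from itertools import product

learning_rate = [0.01, 0.1, 1, 10]
max_leaf_nodes = [3, 10, 30]

def toy_score(lr, mln):
    # Hypothetical stand-in for cross_val_score; peaks at lr=0.1, mln=10.
    return -abs(lr - 0.1) - abs(mln - 10) / 100

best_score, best_params = float("-inf"), None
for lr, mln in product(learning_rate, max_leaf_nodes):
    score = toy_score(lr, mln)
    if score > best_score:
        best_score, best_params = score, {"learning_rate": lr, "max_leaf_nodes": mln}

print(best_params)  # {'learning_rate': 0.1, 'max_leaf_nodes': 10}
```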
Now use the test set to score the model using the best parameters
that we found using cross-validation in the training set.
```
# solution
best_lr = best_params['learning-rate']
best_mln = best_params['max leaf nodes']
model.set_params(classifier__learning_rate=best_lr,
classifier__max_leaf_nodes=best_mln)
model.fit(data_train, target_train)
test_score = model.score(data_test, target_test)
print(f"Test score after the parameter tuning: {test_score:.3f}")
```
## Look at supply and demand of available datasets and ML models
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
dataset_regular = '../datasets/Dataset_evolution_regular.csv'
dataset_consortia = '../datasets/Dataset_evolution_consortia.csv'
freesurfer_papers = '../datasets/FreeSurfer_papers.csv'
DL_papers = '../datasets/DL_papers.csv'
datasets_regular_df = pd.read_csv(dataset_regular)
datasets_regular_df['Number of datasets'] = 1
datasets_regular_df['Dataset Type'] = 'regular'
datasets_consortia_df = pd.read_csv(dataset_consortia)
datasets_consortia_df['Dataset'] = datasets_consortia_df['Working group']
datasets_consortia_df['Dataset Type'] = 'consortia'
useful_cols = ['Dataset','Sample size', 'Year', 'Number of datasets', 'Dataset Type']
datasets_df = pd.concat([datasets_regular_df[useful_cols], datasets_consortia_df[useful_cols]])
datasets_df.head()
plot_df = datasets_df.copy()
plot_df['Year'] = plot_df['Year'].astype(int)
plot_df['Sample size'] = plot_df['Sample size'].astype(int)
sns.set(font_scale = 6)
palette = ['deepskyblue','navy'] #['tomato','firebrick'] # sns.color_palette("husl", 2)
with sns.axes_style("whitegrid"):
fig, ax1 = plt.subplots(figsize=(40,25),sharex=True,sharey=True)
g = sns.scatterplot(x='Year',y='Sample size', hue='Dataset Type', size='Sample size', sizes=(1000,5000), data=plot_df, palette=palette,ax=ax1)
g.grid(True,which="both",ls="--",c='lightgray')
plt.title('Dataset Sizes Over the Years')
g.set(yscale='log')
# g.set(xlim=(1e6, 1e8))
# EXTRACT CURRENT HANDLES AND LABELS
h,l = ax1.get_legend_handles_labels()
col_lgd = plt.legend(h[:3], l[:3], loc='upper left')
col_lgd.legendHandles[1]._sizes = [1000]
col_lgd.legendHandles[2]._sizes = [1000]
# add model names as bubble labels
def label_point(x, y, val, ax, x_shift=0, y_shift=0):
a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)
for i, point in a.iterrows():
x_shift, y_shift = 0.1*np.random.randint(-3,3, 2)
ax.text(point['x']+x_shift, point['y']+y_shift, str(point['val']), fontsize=32)
label_point(plot_df['Year'], plot_df['Sample size'], plot_df['Dataset'], plt.gca())
freesurfer_papers_df = pd.read_csv(freesurfer_papers)
DL_papers_df = pd.read_csv(DL_papers)
citation_df = pd.merge(freesurfer_papers_df, DL_papers_df, on='Year', how='left')
citation_df.head()
plot_df = citation_df[citation_df['Year']!=2021].copy()
pal = sns.color_palette("husl", 2)
with sns.axes_style("whitegrid"):
fig, ax1 = plt.subplots(figsize=(40,10),sharex=True,sharey=True)
sns.lineplot(x='Year',y='Total', marker='d', markersize=40, data=plot_df, linewidth = 20, color=pal[1], label='FreeSurfer')
sns.lineplot(x='Year',y='N_AI-papers',marker='d', markersize=40, data=plot_df, linewidth = 20, color=pal[0], label='Machine-learning')
plt.title('Number of Citations in Neuroimaging Studies', fontsize=80)
```
# Using the same code as before, please solve the following exercises
4. Examine the code where we plot the data. Study how we managed to get the value of the outputs.
In a similar way, find the values of the weights and the biases and print them. This exercise will help you comprehend the TensorFlow syntax.
Useful tip: When you change something, don't forget to RERUN all cells. This can be done easily by clicking:
Kernel -> Restart & Run All
If you don't do that, your algorithm will keep the OLD values of all parameters.
## Solution
Similar to the code for the outputs:

```
out = sess.run([outputs],
               feed_dict={inputs: training_data['inputs']})
```

We can "catch" the values of the weights and the biases with the following code:

```
w = sess.run([weights],
             feed_dict={inputs: training_data['inputs']})
b = sess.run([biases],
             feed_dict={inputs: training_data['inputs']})
```

Note that we don't need to feed targets, as we just need to feed input data. We can include the targets if we want to, but the result will be the same.

At the end we print w and b to observe their values:

```
print (w)
print (b)
```
Solution at the bottom of the file.
### Import the relevant libraries
```
# We must always import the relevant libraries for our problem at hand. NumPy and TensorFlow are required for this example.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
```
### Data generation
We generate data using the exact same logic and code as the example from the previous notebook. The only difference now is that we save it to an npz file. Npz is numpy's file type which allows you to save numpy arrays into a single .npz file. We introduce this change because in machine learning most often:
* you are given some data (csv, database, etc.)
* you preprocess it into a desired format (later on we will see methods for preprocessing)
* you save it into npz files (if you're working in Python) to access later
Nothing to worry about - this is literally saving your NumPy arrays into a file that you can later access, nothing more.
```
# First, we should declare a variable containing the size of the training set we want to generate.
observations = 1000
# We will work with two variables as inputs. You can think about them as x1 and x2 in our previous examples.
# We have picked x and z, since it is easier to differentiate them.
# We generate them randomly, drawing from a uniform distribution. This method takes 3 arguments (low, high, size).
# The size of xs and zs is observations x 1. In this case: 1000 x 1.
xs = np.random.uniform(low=-10, high=10, size=(observations,1))
zs = np.random.uniform(-10, 10, (observations,1))
# Combine the two dimensions of the input into one input matrix.
# This is the X matrix from the linear model y = x*w + b.
# column_stack is a Numpy method, which combines two matrices (vectors) into one.
generated_inputs = np.column_stack((xs,zs))
# We add a random small noise to the function i.e. f(x,z) = 2x - 3z + 5 + <small noise>
noise = np.random.uniform(-1, 1, (observations,1))
# Produce the targets according to our f(x,z) = 2x - 3z + 5 + noise definition.
# In this way, we are basically saying: the weights should be 2 and -3, while the bias is 5.
generated_targets = 2*xs - 3*zs + 5 + noise
# save into an npz file called "TF_intro"
np.savez('TF_intro', inputs=generated_inputs, targets=generated_targets)
```
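The save/load round trip can be verified in a few lines; the file name below is arbitrary and the arrays are regenerated just for the demonstration:

```python
import os
import tempfile
import numpy as np

# Quick round-trip check of np.savez / np.load.
inputs = np.random.uniform(-10, 10, (5, 2))
targets = 2 * inputs[:, :1] - 3 * inputs[:, 1:] + 5

path = os.path.join(tempfile.mkdtemp(), "TF_intro_demo.npz")
np.savez(path, inputs=inputs, targets=targets)

data = np.load(path)
print(np.allclose(data["inputs"], inputs))    # True
print(np.allclose(data["targets"], targets))  # True
```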
## Solving with TensorFlow
<i/>Note: This intro is just the basics of TensorFlow which has way more capabilities and depth than that.<i>
```
# The shape of the data we've prepared above. Think about it as: number of inputs, number of outputs.
input_size = 2
output_size = 1
```
### Outlining the model
```
# Here we define a basic TensorFlow object - the placeholder.
# As before, we will feed the inputs and targets to the model.
# In the TensorFlow context, we feed the data to the model THROUGH the placeholders.
# The particular inputs and targets are contained in our .npz file.
# The first None parameter of the placeholders' shape means that
# this dimension could be of any length. That's since we are mainly interested in
# the input size, i.e. how many input variables we have and not the number of samples (observations)
# The number of input variables changes the MODEL itself, while the number of observations doesn't.
# Remember that the weights and biases were independent of the number of samples, so the MODEL is independent.
# Important: NO calculation happens at this point.
inputs = tf.placeholder(tf.float32, [None, input_size])
targets = tf.placeholder(tf.float32, [None, output_size])
# As before, we define our weights and biases.
# They are the other basic TensorFlow object - a variable.
# We feed data into placeholders and they have a different value for each iteration
# Variables, however, preserve their values across iterations.
# To sum up, data goes into placeholders; parameters go into variables.
# We use the same random uniform initialization in [-0.1,0.1] as in the minimal example but using the TF syntax
# Important: NO calculation happens at this point.
weights = tf.Variable(tf.random_uniform([input_size, output_size], minval=-0.1, maxval=0.1))
biases = tf.Variable(tf.random_uniform([output_size], minval=-0.1, maxval=0.1))
# We get the outputs following our linear combination: y = xw + b
# Important: NO calculation happens at this point.
# This line simply tells TensorFlow what rule to apply when we feed in the training data (below).
outputs = tf.matmul(inputs, weights) + biases
```
### Choosing the objective function and the optimization method
```
# Again, we use a loss function, this time readily available, though.
# mean_squared_error is the scaled L2-norm (per observation)
# We divide by two to follow our earlier definitions. That doesn't really change anything.
mean_loss = tf.losses.mean_squared_error(labels=targets, predictions=outputs) / 2.
# Note that there also exists a function tf.nn.l2_loss.
# tf.nn.l2_loss calculates the loss over all samples, instead of the average loss per sample.
# Practically it's the same, a matter of preference.
# The difference would be a smaller or larger learning rate to achieve the exact same result.
# Instead of implementing Gradient Descent on our own, in TensorFlow we can simply state
# "Minimize the mean loss by using Gradient Descent with a given learning rate"
# Simple as that.
optimize = tf.train.GradientDescentOptimizer(learning_rate=0.05).minimize(mean_loss)
```
### Prepare for execution
```
# So far we've defined the placeholders, variables, the loss function and the optimization method.
# We have the structure for training, but we haven't trained anything yet.
# The actual training (and subsequent implementation of the ML algorithm) happens inside sessions.
sess = tf.InteractiveSession()
```
### Initializing variables
```
# Before we start training, we need to initialize our variables: the weights and biases.
# There is a specific method for initializing called global_variables_initializer().
# Let's declare a variable "initializer" that will do that.
initializer = tf.global_variables_initializer()
# Time to initialize the variables.
sess.run(initializer)
```
### Loading training data
```
# We finally load the training data we created above.
training_data = np.load('TF_intro.npz')
```
### Learning
```
# As in the previous example, we train for a set number (100) of iterations over the dataset
for i in range(100):
# This expression is a bit more complex but you'll learn to appreciate its power and
# flexibility in the following lessons.
# sess.run is the session's function to actually do something, anything.
# Above, we used it to initialize the variables.
# Here, we use it to feed the training data to the computational graph, defined by the feed_dict parameter
# and run operations (already defined above), given as the first parameter (optimize, mean_loss).
# So the line of code means: "Run the optimize and mean_loss operations by filling the placeholder
# objects with data from the feed_dict parameter".
# Curr_loss catches the output from the two operations.
# Using "_," we omit the first one, because optimize has no output (it's always "None").
# The second one catches the value of the mean_loss for the current run, thus curr_loss actually = mean_loss
_, curr_loss = sess.run([optimize, mean_loss],
feed_dict={inputs: training_data['inputs'], targets: training_data['targets']})
# We print the current average loss
print(curr_loss)
```
### Plotting the data
```
# As before, we want to plot the last output vs targets after the training is supposedly over.
# Same notation as above but this time we don't want to train anymore, and we are not interested
# in the loss function value.
# What we want, however, are the outputs.
# Therefore, instead of the optimize and mean_loss operations, we pass the "outputs" as the only parameter.
out = sess.run([outputs],
feed_dict={inputs: training_data['inputs']})
# The model is optimized, so the outputs are calculated based on the last form of the model
# We have to np.squeeze the arrays in order to fit them to what the plot function expects.
# Doesn't change anything as we cut dimensions of size 1 - just a technicality.
plt.plot(np.squeeze(out), np.squeeze(training_data['targets']))
plt.xlabel('outputs')
plt.ylabel('targets')
plt.show()
# Voila - what you see should be exactly the same as in the previous notebook!
# You probably don't see the point of TensorFlow now - it took us more lines of code
# to achieve this simple result. However, once we go deeper in the next chapter,
# TensorFlow will save us hundreds of lines of code.
w = sess.run([weights],
feed_dict={inputs: training_data['inputs']})
b = sess.run([biases],
feed_dict={inputs: training_data['inputs']})
print (w)
print (b)
```
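Since the data was generated as y = 2x - 3z + 5 + noise, a closed-form least-squares solve should recover weights near [2, -3] and a bias near 5 — a useful sanity check on the trained TensorFlow values. This sketch regenerates similar data with a seeded generator:

```python
import numpy as np

# Closed-form check of the generating model y = 2x - 3z + 5 + noise.
rng = np.random.default_rng(42)
observations = 1000
xs = rng.uniform(-10, 10, (observations, 1))
zs = rng.uniform(-10, 10, (observations, 1))
noise = rng.uniform(-1, 1, (observations, 1))
targets = 2 * xs - 3 * zs + 5 + noise

# Solve [x z 1] @ [w1 w2 b]^T = y in the least-squares sense.
A = np.column_stack([xs, zs, np.ones((observations, 1))])
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
print(coef.ravel())  # close to [2, -3, 5]
```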
## <div style="text-align: center"> 20 ML Algorithms from start to Finish for Iris</div>
<div style="text-align: center"> I want to solve the <b>Iris problem</b>, a popular machine learning dataset, as a comprehensive workflow with Python packages.
After reading, you can use this workflow to solve other real problems and use it as a template for dealing with <b>machine learning</b> problems.</div>

<div style="text-align:center">last update: <b>10/28/2018</b></div>
>###### you may be interested to have a look at it: [**10-steps-to-become-a-data-scientist**](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)
---------------------------------------------------------------------
you can Fork and Run this kernel on Github:
> ###### [ GitHub](https://github.com/mjbahmani/Machine-Learning-Workflow-with-Python)
-------------------------------------------------------------------------------------------------------------
**I hope you find this kernel helpful and some <font color="red"><b>UPVOTES</b></font> would be very much appreciated**
-----------
## Notebook Content
* 1- [Introduction](#1)
* 2- [Machine learning workflow](#2)
* 2-1 [Real world Application Vs Competitions](#2)
* 3- [Problem Definition](#3)
* 3-1 [Problem feature](#4)
* 3-2 [Aim](#5)
* 3-3 [Variables](#6)
* 4-[ Inputs & Outputs](#7)
* 4-1 [Inputs ](#8)
* 4-2 [Outputs](#9)
* 5- [Installation](#10)
* 5-1 [ jupyter notebook](#11)
* 5-2[ kaggle kernel](#12)
* 5-3 [Colab notebook](#13)
* 5-4 [install python & packages](#14)
* 5-5 [Loading Packages](#15)
* 6- [Exploratory data analysis](#16)
* 6-1 [Data Collection](#17)
* 6-2 [Visualization](#18)
* 6-2-1 [Scatter plot](#19)
* 6-2-2 [Box](#20)
* 6-2-3 [Histogram](#21)
* 6-2-4 [Multivariate Plots](#22)
* 6-2-5 [Violinplots](#23)
* 6-2-6 [Pair plot](#24)
* 6-2-7 [Kde plot](#25)
* 6-2-8 [Joint plot](#26)
* 6-2-9 [Andrews curves](#27)
* 6-2-10 [Heatmap](#28)
* 6-2-11 [Radviz](#29)
* 6-3 [Data Preprocessing](#30)
* 6-4 [Data Cleaning](#31)
* 7- [Model Deployment](#32)
* 7-1[ KNN](#33)
* 7-2 [Radius Neighbors Classifier](#34)
* 7-3 [Logistic Regression](#35)
* 7-4 [Passive Aggressive Classifier](#36)
* 7-5 [Naive Bayes](#37)
* 7-6 [MultinomialNB](#38)
* 7-7 [BernoulliNB](#39)
* 7-8 [SVM](#40)
* 7-9 [Nu-Support Vector Classification](#41)
* 7-10 [Linear Support Vector Classification](#42)
* 7-11 [Decision Tree](#43)
* 7-12 [ExtraTreeClassifier](#44)
* 7-13 [Neural network](#45)
* 7-13-1 [What is a Perceptron?](#45)
* 7-14 [RandomForest](#46)
* 7-15 [Bagging classifier ](#47)
* 7-16 [AdaBoost classifier](#48)
* 7-17 [Gradient Boosting Classifier](#49)
* 7-18 [Linear Discriminant Analysis](#50)
* 7-19 [Quadratic Discriminant Analysis](#51)
* 7-20 [Kmeans](#52)
* 7-21 [Backpropagation](#53)
* 8- [Conclusion](#54)
* 9- [References](#55)
<a id="1"></a> <br>
## 1- Introduction
This is a **comprehensive collection of ML techniques with Python** that took me more than two months to complete.
Everyone in this community is familiar with the IRIS dataset, but if you need to review your knowledge of it, please visit this [link](https://archive.ics.uci.edu/ml/datasets/iris).
I have tried to show **beginners** on Kaggle how to approach machine learning problems, and I think it is a great opportunity for anyone who wants to learn the machine learning workflow with Python completely.
I have covered most of the methods that were implemented for Iris up to **2018**; you can start to review your knowledge of ML with a simple dataset and memorize the workflow for your journey in the data science world.
## 1-1 Courses
There are a lot of online courses that can help you develop your knowledge; here I have listed just some of them:
1. [Machine Learning Certification by Stanford University (Coursera)](https://www.coursera.org/learn/machine-learning/)
2. [Machine Learning A-Z™: Hands-On Python & R In Data Science (Udemy)](https://www.udemy.com/machinelearning/)
3. [Deep Learning Certification by Andrew Ng from deeplearning.ai (Coursera)](https://www.coursera.org/specializations/deep-learning)
4. Python for Data Science and Machine Learning Bootcamp (Udemy)
5. [Mathematics for Machine Learning by Imperial College London](https://www.coursera.org/specializations/mathematics-machine-learning)
6. [Deep Learning A-Z™: Hands-On Artificial Neural Networks](https://www.udemy.com/deeplearning/)
7. [Complete Guide to TensorFlow for Deep Learning Tutorial with Python](https://www.udemy.com/complete-guide-to-tensorflow-for-deep-learning-with-python/)
8. [Data Science and Machine Learning Tutorial with Python – Hands On](https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on/)
9. [Machine Learning Certification by University of Washington](https://www.coursera.org/specializations/machine-learning)
10. [Data Science and Machine Learning Bootcamp with R](https://www.udemy.com/data-science-and-machine-learning-bootcamp-with-r/)
See also: [Titanic Data Science Solutions](https://www.kaggle.com/startupsci/titanic-data-science-solutions)
I am open to your feedback for improving this **kernel**.
<a id="2"></a> <br>
## 2- Machine Learning Workflow
> Field of study that gives computers the ability to learn without being explicitly programmed.

(Arthur Samuel, 1959)

If you have already read some [machine learning books](https://towardsdatascience.com/list-of-free-must-read-machine-learning-books-89576749d2ff), you have noticed that there are different ways to stream data into machine learning.
Most of these books share the following steps (checklist):
* Define the Problem(Look at the big picture)
* Specify Inputs & Outputs
* Data Collection
* Exploratory data analysis
* Data Preprocessing
* Model Design, Training, and Offline Evaluation
* Model Deployment, Online Evaluation, and Monitoring
* Model Maintenance, Diagnosis, and Retraining
**You can see my workflow in the below image** :
<img src="http://s9.picofile.com/file/8338227634/workflow.png" />
**you should feel free to adapt this checklist to your needs**
## 2-1 Real world Application Vs Competitions
<img src="http://s9.picofile.com/file/8339956300/reallife.png" height="600" width="500" />
<a id="3"></a> <br>
## 3- Problem Definition
I think one of the most important things when you start a new machine learning project is defining your problem: you should understand the business problem (**Problem Formalization**).
Problem definition has four steps, illustrated in the picture below:
<img src="http://s8.picofile.com/file/8338227734/ProblemDefination.png">
<a id="4"></a> <br>
### 3-1 Problem Feature
We will use the classic Iris dataset. This dataset contains information about three different species of Iris flowers:
* Iris Versicolor
* Iris Virginica
* Iris Setosa
The data set contains measurements of four variables :
* sepal length
* sepal width
* petal length
* petal width
The Iris data set has a number of interesting features:
1. One of the classes (Iris Setosa) is linearly separable from the other two. However, the other two classes are not linearly separable.
2. There is some overlap between the Versicolor and Virginica classes, so a perfect classification rate is unlikely.
3. There is some redundancy in the four input variables, so it is possible to achieve a good solution with only three of them, or even (with difficulty) from two, but the precise choice of best variables is not obvious.
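Point 1 above is easy to verify directly. A minimal sketch (using scikit-learn's bundled copy of the dataset rather than the Kaggle CSV loaded later) checks that every setosa petal length sits below every petal length of the other two species, so a single threshold already separates setosa perfectly:

```python
from sklearn import datasets

# Load the bundled Iris data; target 0 is setosa.
iris = datasets.load_iris()
petal_length = iris.data[:, 2]           # third column: petal length (cm)
setosa = petal_length[iris.target == 0]
others = petal_length[iris.target != 0]

# Setosa's max (1.9 cm) is below the others' min (3.0 cm).
print(setosa.max(), others.min())
print(setosa.max() < others.min())
```

This is exactly what the scatter and pair plots in section 6-2 show visually.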
**Why am I using iris dataset:**
1- This is a good project because it is so well understood.
2- Attributes are numeric so you have to figure out how to load and handle data.
3- It is a classification problem, allowing you to practice with perhaps an easier type of supervised learning algorithm.
4- It is a multi-class classification problem (multi-nominal) that may require some specialized handling.
5- It only has 4 attributes and 150 rows, meaning it is small and easily fits into memory (and a screen or A4 page).
6- All of the numeric attributes are in the same units and the same scale, not requiring any special scaling or transforms to get started.[5]
7- We can also define the problem as a clustering (unsupervised learning) project.
<a id="5"></a> <br>
### 3-2 Aim
The aim is to classify iris flowers among three species (setosa, versicolor, or virginica) from measurements of the length and width of sepals and petals.
<a id="6"></a> <br>
### 3-3 Variables
The variables are :
**sepal_length**: Sepal length, in centimeters, used as input.
**sepal_width**: Sepal width, in centimeters, used as input.
**petal_length**: Petal length, in centimeters, used as input.
**petal_width**: Petal width, in centimeters, used as input.
**setosa**: Iris setosa, true or false, used as target.
**versicolour**: Iris versicolour, true or false, used as target.
**virginica**: Iris virginica, true or false, used as target.
**<< Note >>**
> You must answer the following question:
How does your company expect to use and benefit from your model?
<a id="7"></a> <br>
## 4- Inputs & Outputs
<a id="8"></a> <br>
### 4-1 Inputs
**Iris** is a very popular **classification** and **clustering** problem in machine learning; it is like the "Hello World" program you write when you start learning a new programming language. So I decided to apply 20 machine learning methods to it.
The Iris flower data set or Fisher's Iris data set is a **multivariate data set** introduced by the British statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems as an example of linear discriminant analysis. It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers in three related species. Two of the three species were collected in the Gaspé Peninsula "all from the same pasture, and picked on the same day and measured at the same time by the same person with the same apparatus".
The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica, and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other.
As a result, **iris dataset is used as the input of all algorithms**.
<a id="9"></a> <br>
### 4-2 Outputs
The outputs of our algorithms depend entirely on the type of classification or clustering algorithm.
The outputs can be the number of clusters or a prediction for a new input.
**setosa**: Iris setosa, true or false, used as target.
**versicolour**: Iris versicolour, true or false, used as target.
**virginica**: Iris virginica, true or false, used as a target.
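The three true/false targets above are just a one-hot encoding of the single species label. A small sketch of that encoding, assuming scikit-learn's bundled copy of the data (the Kaggle CSV path used later is environment-specific):

```python
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()
# Map the integer targets 0/1/2 back to their species names.
species = pd.Series(iris.target).map(dict(enumerate(iris.target_names)))

# One column per class; each row has exactly one True.
one_hot = pd.get_dummies(species)
print(one_hot.sum())  # 50 samples per species
```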
<a id="10"></a> <br>
## 5-Installation
#### Windows:
* Anaconda (from https://www.continuum.io) is a free Python distribution for SciPy stack. It is also available for Linux and Mac.
* Canopy (https://www.enthought.com/products/canopy/) is available as free as well as commercial distribution with full SciPy stack for Windows, Linux and Mac.
* Python (x,y) is a free Python distribution with SciPy stack and Spyder IDE for Windows OS. (Downloadable from http://python-xy.github.io/)
#### Linux
Package managers of respective Linux distributions are used to install one or more packages in SciPy stack.
For Ubuntu Users:
```
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
```
<a id="11"></a> <br>
## 5-1 Jupyter notebook
I strongly recommend installing **Python** and **Jupyter** using the **[Anaconda Distribution](https://www.anaconda.com/download/)**, which includes Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science.
First, download Anaconda. We recommend downloading Anaconda’s latest Python 3 version.
Second, install the version of Anaconda which you downloaded, following the instructions on the download page.
Congratulations, you have installed Jupyter Notebook! To run the notebook, run the following command at the Terminal (Mac/Linux) or Command Prompt (Windows):
> jupyter notebook
>
<a id="12"></a> <br>
## 5-2 Kaggle Kernel
A Kaggle kernel is an environment just like a Jupyter notebook: it is an **extension** of the notebook environment in which you can carry out all the functions of Jupyter notebooks, plus it has some added tools like forking et al.
<a id="13"></a> <br>
## 5-3 Colab notebook
**Colaboratory** is a research tool for machine learning education and research. It’s a Jupyter notebook environment that requires no setup to use.
### 5-3-1 What browsers are supported?
Colaboratory works with most major browsers, and is most thoroughly tested with desktop versions of Chrome and Firefox.
### 5-3-2 Is it free to use?
Yes. Colaboratory is a research project that is free to use.
### 5-3-3 What is the difference between Jupyter and Colaboratory?
Jupyter is the open source project on which Colaboratory is based. Colaboratory allows you to use and share Jupyter notebooks with others without having to download, install, or run anything on your own computer other than a browser.
<a id="15"></a> <br>
## 5-5 Loading Packages
In this kernel we are using the following packages:
<img src="http://s8.picofile.com/file/8338227868/packages.png">
### 5-5-1 Import
```
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from pandas import get_dummies
import plotly.graph_objs as go
from sklearn import datasets
import plotly.plotly as py
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
import sklearn
import scipy
import numpy
import json
import sys
import csv
import os
```
### 5-5-2 Print
```
print('matplotlib: {}'.format(matplotlib.__version__))
print('sklearn: {}'.format(sklearn.__version__))
print('scipy: {}'.format(scipy.__version__))
print('seaborn: {}'.format(sns.__version__))
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
#show plot inline
%matplotlib inline
```
<a id="16"></a> <br>
## 6- Exploratory Data Analysis(EDA)
In this section, you'll learn how to use graphical and numerical techniques to begin uncovering the structure of your data.
* Which variables suggest interesting relationships?
* Which observations are unusual?
By the end of the section, you'll be able to answer these questions and more, while generating graphics that are both insightful and beautiful. Then we will review analytical and statistical operations:
* 6-1 Data Collection
* 6-2 Visualization
* 6-3 Data Preprocessing
* 6-4 Data Cleaning
<img src="http://s9.picofile.com/file/8338476134/EDA.png">
<a id="17"></a> <br>
## 6-1 Data Collection
**Data collection** is the process of gathering and measuring data, information or any variables of interest in a standardized and established manner that enables the collector to answer or test hypothesis and evaluate outcomes of the particular collection.[techopedia]
The **Iris dataset** consists of petal and sepal measurements for 3 different types of irises (Setosa, Versicolour, and Virginica), stored in a 150x4 numpy.ndarray.
The rows are the samples and the columns are: Sepal Length, Sepal Width, Petal Length and Petal Width.[6]
```
# import Dataset to play with it
dataset = pd.read_csv('../input/Iris.csv')
```
**<< Note 1 >>**
* Each row is an observation (also known as : sample, example, instance, record)
* Each column is a feature (also known as: Predictor, attribute, Independent Variable, input, regressor, Covariate)
After loading the data via **pandas**, we should check out what the content is, and its description, via the following:
```
type(dataset)
```
<a id="18"></a> <br>
## 6-2 Visualization
**Data visualization** is the presentation of data in a pictorial or graphical format. It enables decision makers to see analytics presented visually, so they can grasp difficult concepts or identify new patterns.
With interactive visualization, you can take the concept a step further by using technology to drill down into charts and graphs for more detail, interactively changing what data you see and how it’s processed.[SAS]
In this section I show you **11 plots** with **matplotlib** and **seaborn**, listed in the picture below:
<img src="http://s8.picofile.com/file/8338475500/visualization.jpg" />
<a id="19"></a> <br>
### 6-2-1 Scatter plot
The purpose of a scatter plot is to identify the type of relationship (if any) between two quantitative variables.
```
# Modify the graph above by assigning each species an individual color.
sns.FacetGrid(dataset, hue="Species", size=5) \
.map(plt.scatter, "SepalLengthCm", "SepalWidthCm") \
.add_legend()
plt.show()
```
<a id="20"></a> <br>
### 6-2-2 Box
In descriptive statistics, a **box plot** or boxplot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending vertically from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram.[wikipedia]
```
dataset.plot(kind='box', subplots=True, layout=(2,3), sharex=False, sharey=False)
plt.figure()
#This gives us a much clearer idea of the distribution of the input attributes:
# To plot the species data using a box plot:
sns.boxplot(x="Species", y="PetalLengthCm", data=dataset )
plt.show()
# Use Seaborn's stripplot to add data points on top of the box plot
# Insert jitter=True so that the data points remain scattered and are not piled into a vertical line.
# Assign ax to each axis, so that each plot is on top of the previous axis.
ax= sns.boxplot(x="Species", y="PetalLengthCm", data=dataset)
ax= sns.stripplot(x="Species", y="PetalLengthCm", data=dataset, jitter=True, edgecolor="gray")
plt.show()
# Tweak the plot above to change the fill and border color using ax.artists.
# Assign ax.artists a variable name, and insert the box number into the corresponding brackets
ax= sns.boxplot(x="Species", y="PetalLengthCm", data=dataset)
ax= sns.stripplot(x="Species", y="PetalLengthCm", data=dataset, jitter=True, edgecolor="gray")
boxtwo = ax.artists[2]
boxtwo.set_facecolor('red')
boxtwo.set_edgecolor('black')
boxthree=ax.artists[1]
boxthree.set_facecolor('yellow')
boxthree.set_edgecolor('black')
plt.show()
```
<a id="21"></a> <br>
### 6-2-3 Histogram
We can also create a **histogram** of each input variable to get an idea of the distribution.
```
# histograms
dataset.hist(figsize=(15,20))
plt.figure()
```
It looks like perhaps two of the input variables have a Gaussian distribution. This is useful to note as we can use algorithms that can exploit this assumption.
```
dataset["PetalLengthCm"].hist();
```
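The Gaussian reading of the histograms can be backed up numerically: skewness near zero suggests a roughly symmetric, Gaussian-like distribution. A hedged sketch using scipy, on scikit-learn's bundled copy of the data so it runs without the Kaggle CSV:

```python
from scipy import stats
from sklearn import datasets

iris = datasets.load_iris()

# Skewness near 0 suggests a roughly symmetric, Gaussian-like distribution.
for name, column in zip(iris.feature_names, iris.data.T):
    print(f"{name}: skew = {stats.skew(column):.2f}")
```

All four features turn out to be only mildly skewed, consistent with the histograms.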
<a id="22"></a> <br>
### 6-2-4 Multivariate Plots
Now we can look at the interactions between the variables.
First, let’s look at scatterplots of all pairs of attributes. This can be helpful to spot structured relationships between input variables.
```
# scatter plot matrix
pd.plotting.scatter_matrix(dataset,figsize=(10,10))
plt.figure()
```
Note the diagonal grouping of some pairs of attributes. This suggests a high correlation and a predictable relationship.
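The "high correlation" reading of the diagonal grouping can be quantified directly with pandas. A small sketch, again on scikit-learn's bundled copy of the data:

```python
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

# Petal length and petal width are very strongly correlated (roughly 0.96).
corr = df.corr()
print(corr.round(2))
```

This is the same matrix the heatmap in section 6-2-10 visualizes.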
<a id="23"></a> <br>
### 6-2-5 violinplots
```
# violinplots on petal-length for each species
sns.violinplot(data=dataset,x="Species", y="PetalLengthCm")
```
<a id="24"></a> <br>
### 6-2-6 pairplot
```
# Using seaborn pairplot to see the bivariate relation between each pair of features
sns.pairplot(dataset, hue="Species")
```
From the plot, we can see that the species setosa is separated from the other two across all feature combinations.
We can also replace the histograms shown in the diagonal of the pairplot by kde.
```
# updating the diagonal elements in a pairplot to show a kde
sns.pairplot(dataset, hue="Species",diag_kind="kde")
```
<a id="25"></a> <br>
### 6-2-7 kdeplot
```
# seaborn's kdeplot plots univariate or bivariate density estimates.
#Size can be changed by tweaking the value used
sns.FacetGrid(dataset, hue="Species", size=5).map(sns.kdeplot, "PetalLengthCm").add_legend()
plt.show()
```
<a id="26"></a> <br>
### 6-2-8 jointplot
```
# Use seaborn's jointplot to make a hexagonal bin plot
#Set desired size and ratio and choose a color.
sns.jointplot(x="SepalLengthCm", y="SepalWidthCm", data=dataset, size=10,ratio=10, kind='hex',color='green')
plt.show()
```
<a id="27"></a> <br>
### 6-2-9 andrews_curves
```
#In Pandas use Andrews Curves to plot and visualize data structure.
#Each multivariate observation is transformed into a curve and represents the coefficients of a Fourier series.
#This is useful for detecting outliers in time series data.
#Use colormap to change the color of the curves
from pandas.plotting import andrews_curves  # pandas.tools.plotting was removed in newer pandas versions
andrews_curves(dataset.drop("Id", axis=1), "Species",colormap='rainbow')
plt.show()
# we will use seaborn jointplot shows bivariate scatterplots and univariate histograms with Kernel density
# estimation in the same figure
sns.jointplot(x="SepalLengthCm", y="SepalWidthCm", data=dataset, size=6, kind='kde', color='#800000', space=0)
```
<a id="28"></a> <br>
### 6-2-10 Heatmap
```
plt.figure(figsize=(7,4))
sns.heatmap(dataset.corr(),annot=True,cmap='cubehelix_r') #draws a heatmap with the correlation matrix calculated by dataset.corr() as input
plt.show()
```
<a id="29"></a> <br>
### 6-2-11 radviz
```
# A final multivariate visualization technique pandas has is radviz
# Which puts each feature as a point on a 2D plane, and then simulates
# having each sample attached to those points through a spring weighted
# by the relative value for that feature
from pandas.plotting import radviz  # pandas.tools.plotting was removed in newer pandas versions
radviz(dataset.drop("Id", axis=1), "Species")
```
### 6-2-12 Bar Plot
```
dataset['Species'].value_counts().plot(kind="bar");
```
### 6-2-13 Visualization with Plotly
```
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
from plotly import tools
import plotly.figure_factory as ff
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
Y = iris.target
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
trace = go.Scatter(x=X[:, 0],
y=X[:, 1],
mode='markers',
marker=dict(color=np.random.randn(150),
size=10,
colorscale='Viridis',
showscale=False))
layout = go.Layout(title='Training Points',
xaxis=dict(title='Sepal length',
showgrid=False),
yaxis=dict(title='Sepal width',
showgrid=False),
)
fig = go.Figure(data=[trace], layout=layout)
py.iplot(fig)
```
**<< Note >>**
**Yellowbrick** is a suite of visual diagnostic tools called “Visualizers” that extend the Scikit-Learn API to allow human steering of the model selection process. In a nutshell, Yellowbrick combines scikit-learn with matplotlib in the best tradition of the scikit-learn documentation, but to produce visualizations for your models!
### 6-2-14 Conclusion
We have used Python to apply data visualization tools to the Iris dataset. Color and size changes were made to the data points in scatterplots, and the border and fill colors of the boxplot and violin plot were changed.
<a id="30"></a> <br>
## 6-3 Data Preprocessing
**Data preprocessing** refers to the transformations applied to our data before feeding it to the algorithm.
Data Preprocessing is a technique that is used to convert the raw data into a clean data set. In other words, whenever the data is gathered from different sources it is collected in raw format which is not feasible for the analysis.
There are plenty of steps for data preprocessing; we have listed just some of them:
* Removing the Id column
* Sampling (without replacement)
* Making part of iris unbalanced and balancing (with undersampling and SMOTE)
* Introducing missing values and treating them (replacing by average values)
* Noise filtering
* Data discretization
* Normalization and standardization
* PCA analysis
* Feature selection (filter, embedded, wrapper)
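As one example from the list, sampling without replacement guarantees that each row is drawn at most once. A minimal sketch with pandas (on scikit-learn's bundled copy of the data, so it runs standalone):

```python
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

# Sampling without replacement: each row is drawn at most once.
subset = df.sample(n=100, replace=False, random_state=0)
print(subset.shape)            # (100, 4)
print(subset.index.is_unique)  # True: no row was drawn twice
```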
### 6-3-1 Features
Features can be:
* numeric
* categorical
* ordinal
* datetime
* coordinates
Find the types of features in the Titanic dataset:
<img src="http://s9.picofile.com/file/8339959442/titanic.png" height="700" width="600" />
### 6-3-2 Explore the Dataset
1- Dimensions of the dataset.
2- Peek at the data itself.
3- Statistical summary of all attributes.
4- Breakdown of the data by the class variable.[7]
Don’t worry, each look at the data is **one command**. These are useful commands that you can use again and again on future projects.
```
# shape
print(dataset.shape)
#columns*rows
dataset.size
```
How many NA elements are there in every column?
```
dataset.isnull().sum()
# remove rows that have NA's
dataset = dataset.dropna()
```
We can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property.
You should see 150 instances and 6 attributes (the 4 measurements plus the Id and Species columns).
To get some information about the dataset, you can use the **info()** command:
```
print(dataset.info())
```
You can see the number of unique items for Species with the commands below:
```
dataset['Species'].unique()
dataset["Species"].value_counts()
```
To check the first 5 rows of the dataset, we can use **head(5)**:
```
dataset.head(5)
```
To check out the last 5 rows of the dataset, we use the **tail()** function:
```
dataset.tail()
```
To pop up 5 random rows from the dataset, we can use the **sample(5)** function:
```
dataset.sample(5)
```
To get a statistical summary of the dataset, we can use **describe()**:
```
dataset.describe()
```
To check how many null values are in the dataset, we can use **isnull().sum()**:
```
dataset.isnull().sum()
dataset.groupby('Species').count()
```
To print the dataset's **columns**, we can use the columns attribute:
```
dataset.columns
```
**<< Note 2 >>**
In a pandas DataFrame you can perform queries such as "where":
```
dataset.where(dataset ['Species']=='Iris-setosa')
```
As you can see below, in Python it is easy to perform queries on the DataFrame:
```
dataset[dataset['SepalLengthCm']>7.2]
# Seperating the data into dependent and independent variables
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
**<< Note >>**
>**Preprocessing and generation pipelines depend on a model type**
<a id="31"></a> <br>
## 6-4 Data Cleaning
When dealing with real-world data, dirty data is the norm rather than the exception. We continuously need to predict correct values, impute missing ones, and find links between various data artefacts such as schemas and records. We need to stop treating data cleaning as a piecemeal exercise (resolving different types of errors in isolation), and instead leverage all signals and resources (such as constraints, available statistics, and dictionaries) to accurately predict corrective actions.
The primary goal of data cleaning is to detect and remove errors and **anomalies** to increase the value of data in analytics and decision making. While it has been the focus of many researchers for several years, individual problems have been addressed separately. These include missing value imputation, outliers detection, transformations, integrity constraints violations detection and repair, consistent query answering, deduplication, and many other related problems such as profiling and constraints mining.[8]
```
cols = dataset.columns
features = cols[0:4]
labels = cols[4]
print(features)
print(labels)
#Well conditioned data will have zero mean and equal variance
#We get this automatically when we calculate the Z Scores for the data
data_norm = pd.DataFrame(dataset)
for feature in features:
dataset[feature] = (dataset[feature] - dataset[feature].mean())/dataset[feature].std()
#Show that should now have zero mean
print("Averages")
print(dataset.mean())
print("\n Deviations")
#Show that we have equal variance
print(pow(dataset.std(),2))
#Shuffle The data
indices = data_norm.index.tolist()
indices = np.array(indices)
np.random.shuffle(indices)
# One Hot Encode as a dataframe
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
y = get_dummies(y)
# Generate Training and Validation Sets
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=.3)
# Convert to np arrays so that we can use with TensorFlow
X_train = np.array(X_train).astype(np.float32)
X_test = np.array(X_test).astype(np.float32)
y_train = np.array(y_train).astype(np.float32)
y_test = np.array(y_test).astype(np.float32)
#Check to make sure split still has 4 features and 3 labels
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
```
<a id="32"></a> <br>
## 7- Model Deployment
In this section, more than **20 learning algorithms** have been applied; they play an important role in building your experience and improving your knowledge of ML techniques.
> **<< Note 3 >>** : The results shown here may be slightly different for your analysis because, for example, the neural network algorithms use random number generators for fixing the initial value of the weights (starting points) of the neural networks, which often result in obtaining slightly different (local minima) solutions each time you run the analysis. Also note that changing the seed for the random number generator used to create the train, test, and validation samples can change your results.
## Families of ML algorithms
There are several categories for machine learning algorithms, below are some of these categories:
* Linear
* Linear Regression
* Logistic Regression
* Support Vector Machines
* Tree-Based
* Decision Tree
* Random Forest
* GBDT
* KNN
* Neural Networks
-----------------------------
And if we want to categorize ML algorithms with the type of learning, there are below type:
* Classification
* k-Nearest Neighbors
* LinearRegression
* SVM
* DT
* NN
* clustering
* K-means
* HCA
* Expectation Maximization
* Visualization and dimensionality reduction:
* Principal Component Analysis(PCA)
* Kernel PCA
* Locally -Linear Embedding (LLE)
* t-distributed Stochastic Neighbor Embedding (t-SNE)
* Association rule learning
* Apriori
* Eclat
* Semisupervised learning
* Reinforcement Learning
* Q-learning
* Batch learning & Online learning
* Ensemble Learning
**<< Note >>**
> Here is no method which outperforms all others for all tasks
<a id="33"></a> <br>
## Prepare Features & Targets
First of all, we separate the data into independent (feature) variables and the dependent (target) variable.
**<< Note 4 >>**
* X==>>Feature
* y==>>Target
```
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
```
## Accuracy and precision
* **precision** :
In pattern recognition, information retrieval and binary classification, precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances,
* **recall** :
recall is the fraction of relevant instances that have been retrieved over the total amount of relevant instances.
* **F-score** :
the F1 score is a measure of a test's accuracy. It considers both the precision p and the recall r of the test to compute the score: p is the number of correct positive results divided by the number of all positive results returned by the classifier, and r is the number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive). The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.
**What is the difference between accuracy and precision?**
"Accuracy" and "precision" are general terms throughout science. A good way to internalize the difference is the common "bullseye diagram". In machine learning/statistics as a whole, accuracy vs. precision is analogous to bias vs. variance.
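These definitions can be written out directly. A small worked sketch with **hypothetical** counts (for illustration only, not taken from any model below):

```python
# Hypothetical binary-classification counts: true positives, false positives, false negatives.
tp, fp, fn = 8, 2, 4

precision = tp / (tp + fp)  # fraction of positive predictions that are correct
recall = tp / (tp + fn)     # fraction of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.8 0.667 0.727
```

`classification_report`, used throughout section 7, computes exactly these quantities per class.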
<a id="33"></a> <br>
## 7-1 K-Nearest Neighbours
In **Machine Learning**, the **k-nearest neighbors algorithm** (k-NN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:
In k-NN classification, the output is a class membership. An object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors.
k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-NN algorithm is among the simplest of all machine learning algorithms.
```
# K-Nearest Neighbours
from sklearn.neighbors import KNeighborsClassifier
Model = KNeighborsClassifier(n_neighbors=8)
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="34"></a> <br>
## 7-2 Radius Neighbors Classifier
Classifier implementing a **vote** among neighbors within a given **radius**.
In scikit-learn, **RadiusNeighborsClassifier** is very similar to **KNeighborsClassifier**, with the exception of two parameters. First, in RadiusNeighborsClassifier we need to specify, via the `radius` parameter, the radius of the fixed area used to determine whether an observation is a neighbor. Unless there is some substantive reason for setting `radius` to a particular value, it is best to treat it like any other hyperparameter and tune it during model selection. The second useful parameter is `outlier_label`, which indicates what label to give an observation that has no neighbors within the radius; this itself can often be a useful tool for identifying outliers.
```
from sklearn.neighbors import RadiusNeighborsClassifier
Model=RadiusNeighborsClassifier(radius=8.0)
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
#summary of the predictions made by the classifier
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_test,y_pred))
# Accuracy score
print('accuracy is ', accuracy_score(y_test,y_pred))
```
<a id="35"></a> <br>
## 7-3 Logistic Regression
Logistic regression is the appropriate regression analysis to conduct when the dependent variable is **dichotomous** (binary). Like all regression analyses, the logistic regression is a **predictive analysis**.
In statistics, the logistic model (or logit model) is a widely used statistical model that, in its basic form, uses a logistic function to model a binary dependent variable; many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is the process of estimating the parameters of a logistic model; it is a form of binomial regression. Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail, win/lose, alive/dead or healthy/sick; these are represented by an indicator variable, where the two values are labeled "0" and "1".
```
# LogisticRegression
from sklearn.linear_model import LogisticRegression
Model = LogisticRegression()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="36"></a> <br>
## 7-4 Passive Aggressive Classifier
```
from sklearn.linear_model import PassiveAggressiveClassifier
Model = PassiveAggressiveClassifier()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="37"></a> <br>
## 7-5 Naive Bayes
In machine learning, naive Bayes classifiers are a family of simple "**probabilistic classifiers**" based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
```
# Naive Bayes
from sklearn.naive_bayes import GaussianNB
Model = GaussianNB()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="39"></a> <br>
## 7-7 BernoulliNB
Like MultinomialNB, this classifier is suitable for **discrete data**. The difference is that while MultinomialNB works with occurrence counts, BernoulliNB is designed for binary/boolean features.
```
# BernoulliNB
from sklearn.naive_bayes import BernoulliNB
Model = BernoulliNB()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="40"></a> <br>
## 7-8 SVM
The advantages of support vector machines are:
* Effective in high dimensional spaces.
* Still effective in cases where number of dimensions is greater than the number of samples.
* Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
* Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.
The disadvantages of support vector machines include:
* If the number of features is much greater than the number of samples, avoiding over-fitting when choosing kernel functions and the regularization term is crucial.
* SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation.
```
# Support Vector Machine
from sklearn.svm import SVC
Model = SVC()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
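As noted above, SVC does not provide probability estimates directly; passing `probability=True` enables `predict_proba` at the cost of an internal five-fold cross-validation. A minimal sketch (the Iris data is loaded fresh here so the cell is self-contained):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = SVC(probability=True)   # enables predict_proba via internal cross-validation
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)
print(proba.shape)              # one probability column per class
```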
<a id="41"></a> <br>
## 7-9 Nu-Support Vector Classification
> Similar to SVC but uses a parameter to control the number of support vectors.
```
# Support Vector Machine's
from sklearn.svm import NuSVC
Model = NuSVC()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="42"></a> <br>
## 7-10 Linear Support Vector Classification
Similar to **SVC** with parameter `kernel='linear'`, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.
```
# Linear Support Vector Classification
from sklearn.svm import LinearSVC
Model = LinearSVC()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="43"></a> <br>
## 7-11 Decision Tree
Decision Trees (DTs) are a non-parametric supervised learning method used for **classification** and **regression**. The goal is to create a model that predicts the value of a target variable by learning simple **decision rules** inferred from the data features.
```
# Decision Tree's
from sklearn.tree import DecisionTreeClassifier
Model = DecisionTreeClassifier()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
<a id="44"></a> <br>
## 7-12 ExtraTreeClassifier
An extremely randomized tree classifier.
Extra-trees differ from classic decision trees in the way they are built. When looking for the best split to separate the samples of a node into two groups, random splits are drawn for each of the **max_features** randomly selected features and the best split among those is chosen. When max_features is set to 1, this amounts to building a totally random decision tree.
**Warning**: Extra-trees should only be used within ensemble methods.
```
# ExtraTreeClassifier
from sklearn.tree import ExtraTreeClassifier
Model = ExtraTreeClassifier()
Model.fit(X_train, y_train)
y_pred = Model.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
print('accuracy is',accuracy_score(y_pred,y_test))
```
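Following the warning above, the ensemble counterpart `ExtraTreesClassifier` (note the plural, from `sklearn.ensemble`) averages many such randomized trees; a minimal self-contained sketch on Iris:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An ensemble of extremely randomized trees, as the warning recommends
Model = ExtraTreesClassifier(n_estimators=100, random_state=0)
Model.fit(X_tr, y_tr)
acc = Model.score(X_te, y_te)
print('accuracy is', acc)
```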
<a id="45"></a> <br>
## 7-13 Neural network
I have used a multi-layer perceptron classifier.
This model optimizes the log-loss function using **LBFGS** or **stochastic gradient descent**.
## 7-13-1 What is a Perceptron?
There are many online examples and tutorials on perceptrons and learning. Here is a list of some articles:
- [Wikipedia on Perceptrons](https://en.wikipedia.org/wiki/Perceptron)
- Jurafsky and Martin (ed. 3), Chapter 8
This is an example that I have taken from a draft of the 3rd edition of Jurafsky and Martin, with slight modifications:
We import *numpy* and use its *exp* function. We could use the same function from the *math* module, or some other module like *scipy*. The *sigmoid* function is defined as in the textbook:
```
import numpy as np
def sigmoid(z):
return 1 / (1 + np.exp(-z))
```
Our example data, **weights** $w$, **bias** $b$, and **input** $x$ are defined as:
```
w = np.array([0.2, 0.3, 0.8])
b = 0.5
x = np.array([0.5, 0.6, 0.1])
```
Our neural unit would compute $z$ as the **dot-product** $w \cdot x$ and add the **bias** $b$ to it. The sigmoid function defined above will convert this $z$ value to the **activation value** $a$ of the unit:
```
z = w.dot(x) + b
print("z:", z)
print("a:", sigmoid(z))
```
### The XOR Problem
The power of neural units comes from combining them into larger networks. Minsky and Papert (1969): A single neural unit cannot compute the simple logical function XOR.
The task is to implement a simple **perceptron** to compute logical operations like AND, OR, and XOR.
- Input: $x_1$ and $x_2$
- Bias: $b = -1$ for AND; $b = 0$ for OR
- Weights: $w = [1, 1]$
with the following activation function:
$$
y = \begin{cases}
\ 0 & \quad \text{if } w \cdot x + b \leq 0\\
\ 1 & \quad \text{if } w \cdot x + b > 0
\end{cases}
$$
We can define this activation function in Python as:
```
def activation(z):
if z > 0:
return 1
return 0
```
For AND we could implement a perceptron as:
```
w = np.array([1, 1])
b = -1
x = np.array([0, 0])
print("0 AND 0:", activation(w.dot(x) + b))
x = np.array([1, 0])
print("1 AND 0:", activation(w.dot(x) + b))
x = np.array([0, 1])
print("0 AND 1:", activation(w.dot(x) + b))
x = np.array([1, 1])
print("1 AND 1:", activation(w.dot(x) + b))
```
For OR we could implement a perceptron as:
```
w = np.array([1, 1])
b = 0
x = np.array([0, 0])
print("0 OR 0:", activation(w.dot(x) + b))
x = np.array([1, 0])
print("1 OR 0:", activation(w.dot(x) + b))
x = np.array([0, 1])
print("0 OR 1:", activation(w.dot(x) + b))
x = np.array([1, 1])
print("1 OR 1:", activation(w.dot(x) + b))
```
There is no way to implement a perceptron for XOR this way.
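We can make this concrete with a brute-force sketch: no choice of weights and bias on a grid makes a single threshold unit reproduce XOR (and, since XOR is not linearly separable, no choice off the grid would either):

```python
import numpy as np

def activation(z):
    # same step activation as above
    return 1 if z > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_target = [0, 1, 1, 0]

# Search a grid of weights and biases; none reproduces the XOR truth table
found = False
grid = np.linspace(-2, 2, 9)
for w1 in grid:
    for w2 in grid:
        for b in grid:
            out = [activation(w1 * x1 + w2 * x2 + b) for x1, x2 in inputs]
            if out == xor_target:
                found = True
print('single unit solving XOR found:', found)
```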
Now let's see our prediction for the Iris data:
```
from sklearn.neural_network import MLPClassifier
Model=MLPClassifier()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
# Summary of the predictions
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_test,y_pred))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
<a id="46"></a> <br>
## 7-14 RandomForest
A random forest is a meta estimator that **fits a number of decision tree classifiers** on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default).
```
from sklearn.ensemble import RandomForestClassifier
Model=RandomForestClassifier(max_depth=2)
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_pred,y_test))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
<a id="47"></a> <br>
## 7-15 Bagging classifier
A Bagging classifier is an ensemble **meta-estimator** that fits base classifiers each on random subsets of the original dataset and then aggregate their individual predictions (either by voting or by averaging) to form a final prediction. Such a meta-estimator can typically be used as a way to reduce the variance of a black-box estimator (e.g., a decision tree), by introducing randomization into its construction procedure and then making an ensemble out of it.
This algorithm encompasses several works from the literature. When random subsets of the dataset are drawn as random subsets of the samples, then this algorithm is known as Pasting. If samples are drawn with replacement, then the method is known as Bagging. When random subsets of the dataset are drawn as random subsets of the features, then the method is known as Random Subspaces. Finally, when base estimators are built on subsets of both samples and features, then the method is known as Random Patches. [http://scikit-learn.org]
```
from sklearn.ensemble import BaggingClassifier
Model=BaggingClassifier()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_pred,y_test))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
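The four variants described above correspond to `BaggingClassifier` parameter settings. A hedged sketch (the subsampling fractions 0.7 and 0.5 are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Bagging: sample subsets drawn with replacement (the default)
bagging = BaggingClassifier(DecisionTreeClassifier(), max_samples=0.7, bootstrap=True)
# Pasting: sample subsets drawn without replacement
pasting = BaggingClassifier(DecisionTreeClassifier(), max_samples=0.7, bootstrap=False)
# Random Subspaces: random feature subsets, full sample set
subspaces = BaggingClassifier(DecisionTreeClassifier(), max_features=0.5, bootstrap=False)
# Random Patches: subsets of both samples and features
patches = BaggingClassifier(DecisionTreeClassifier(), max_samples=0.7,
                            max_features=0.5, bootstrap=False)

patches.fit(X, y)
print('train accuracy:', patches.score(X, y))
```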
<a id="48"></a> <br>
## 7-16 AdaBoost classifier
An AdaBoost classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases.
This class implements the algorithm known as **AdaBoost-SAMME** .
```
from sklearn.ensemble import AdaBoostClassifier
Model=AdaBoostClassifier()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_pred,y_test))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
<a id="49"></a> <br>
## 7-17 Gradient Boosting Classifier
GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions.
```
from sklearn.ensemble import GradientBoostingClassifier
Model=GradientBoostingClassifier()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_pred,y_test))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
<a id="50"></a> <br>
## 7-18 Linear Discriminant Analysis
Linear Discriminant Analysis (discriminant_analysis.LinearDiscriminantAnalysis) and Quadratic Discriminant Analysis (discriminant_analysis.QuadraticDiscriminantAnalysis) are two classic classifiers, with, as their names suggest, a **linear and a quadratic decision surface**, respectively.
These classifiers are attractive because they have closed-form solutions that can be easily computed, are inherently multiclass, have proven to work well in practice, and have no **hyperparameters** to tune.
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
Model=LinearDiscriminantAnalysis()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_pred,y_test))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
<a id="51"></a> <br>
## 7-19 Quadratic Discriminant Analysis
A classifier with a quadratic decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule.
The model fits a **Gaussian** density to each class.
```
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
Model=QuadraticDiscriminantAnalysis()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_pred,y_test))
#Accuracy Score
print('accuracy is ',accuracy_score(y_pred,y_test))
```
<a id="52"></a> <br>
## 7-20 Kmeans
K-means clustering is a type of unsupervised learning, which is used when you have unlabeled data (i.e., data without defined categories or groups).
The goal of this algorithm is **to find groups in the data**, with the number of groups represented by the variable K. The algorithm works iteratively to assign each data point to one of K groups based on the features that are provided.
```
from sklearn.cluster import KMeans
iris_SP = dataset[['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm']]
# k-means cluster analysis for 1-15 clusters
from scipy.spatial.distance import cdist
clusters=range(1,15)
meandist=[]
# loop through each cluster and fit the model to the train set
# generate the predicted cluster assignment and append the mean
# distance by taking the sum divided by the shape
for k in clusters:
model=KMeans(n_clusters=k)
model.fit(iris_SP)
clusassign=model.predict(iris_SP)
meandist.append(sum(np.min(cdist(iris_SP, model.cluster_centers_, 'euclidean'), axis=1))
/ iris_SP.shape[0])
"""
Plot average distance from observations from the cluster centroid
to use the Elbow Method to identify number of clusters to choose
"""
plt.plot(clusters, meandist)
plt.xlabel('Number of clusters')
plt.ylabel('Average distance')
plt.title('Selecting k with the Elbow Method')
# pick the fewest number of clusters that reduces the average distance
# If you observe after 3 we can see graph is almost linear
```
<a id="53"></a> <br>
## 7-21- Backpropagation
Backpropagation is a method used in artificial neural networks to calculate the gradient that is needed to compute the weights to be used in the network. It is commonly used to train deep neural networks, a term referring to neural networks with more than one hidden layer.
In this example we will use a very simple network to start with. The network will only have one input and one output layer. We want to make the following predictions from the input:
| Input | Output |
| ------ |:------:|
| 0 0 1 | 0 |
| 1 1 1 | 1 |
| 1 0 1 | 1 |
| 0 1 1 | 0 |
We will use **Numpy** to compute the network parameters, weights, activation, and outputs:
We will use the *[Sigmoid](http://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html#sigmoid)* activation function:
```
def sigmoid(z):
"""The sigmoid activation function."""
return 1 / (1 + np.exp(-z))
```
We could use the [ReLU](http://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html#activation-relu) activation function instead:
```
def relu(z):
"""The ReLU activation function."""
return max(0, z)
```
The [Sigmoid](http://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html#sigmoid) activation function introduces non-linearity to the computation. It maps the input value to an output value between $0$ and $1$.
<img src="http://s8.picofile.com/file/8339774900/SigmoidFunction1.png" style="max-width:100%; width: 30%; max-width: none">
The derivative of the sigmoid function is maximal at $x=0$ and minimal for lower or higher values of $x$:
<img src="http://s9.picofile.com/file/8339770650/sigmoid_prime.png" style="max-width:100%; width: 25%; max-width: none">
The *sigmoid_prime* function returns the derivative of the sigmoid. Note that it expects the sigmoid's *output* (the activation value) as its argument: given the sigmoid output $z$, the derivative is $z * (1 - z)$. This is basically the slope of the sigmoid function at any given point:
```
def sigmoid_prime(z):
"""The derivative of sigmoid for z."""
return z * (1 - z)
```
We define the inputs as rows in *X*. There are three input nodes (three columns per vector in $X$). Each row is one training example:
```
X = np.array([ [ 0, 0, 1 ],
[ 0, 1, 1 ],
[ 1, 0, 1 ],
[ 1, 1, 1 ] ])
print(X)
```
The outputs are stored in *y*, where each row represents the output for the corresponding input vector (row) in *X*. The vector is initialized as a single row vector with four columns and transposed (using the *.T* attribute) into a column vector with four rows:
```
y = np.array([[0,0,1,1]]).T
print(y)
```
To make the outputs deterministic, we seed the random number generator with a constant. This will guarantee that every time you run the code, you will get the same random distribution:
```
np.random.seed(1)
```
We create a weight matrix ($Wo$) with randomly initialized weights:
```
n_inputs = 3
n_outputs = 1
#Wo = 2 * np.random.random( (n_inputs, n_outputs) ) - 1
Wo = np.random.random( (n_inputs, n_outputs) ) * np.sqrt(2.0/n_inputs)
print(Wo)
```
The reason for the output weight matrix ($Wo$) to have 3 rows and 1 column is that it represents the weights of the connections from the three input neurons to the single output neuron. The commented-out line would initialize the weights uniformly in $[-1, 1)$, i.e., with zero mean; the line actually used scales uniform $[0, 1)$ draws by $\sqrt{2 / n\_inputs}$. There is a good reason for choosing a zero mean in weight initialization. See for details the section on Weight Initialization in the [Stanford course CS231n on Convolutional Neural Networks for Visual Recognition](https://cs231n.github.io/neural-networks-2/#init).
The core representation of this network is basically the weight matrix *Wo*. The rest (input matrix, output vector, and so on) are components that we need for learning and evaluation. The learning result is stored in the *Wo* weight matrix.
We loop in the optimization and learning cycle 10,000 times. In the *forward propagation* line we process the entire input matrix for training. This is called **full batch** training. I do not use an alternative variable name to represent the input layer, instead I use the input matrix $X$ directly here. Think of this as the different inputs to the input neurons computed at once. In principle the input or training data could have many more training examples, the code would stay the same.
```
for n in range(10000):
# forward propagation
l1 = sigmoid(np.dot(X, Wo))
# compute the loss
l1_error = y - l1
#print("l1_error:\n", l1_error)
# multiply the loss by the slope of the sigmoid at l1
l1_delta = l1_error * sigmoid_prime(l1)
#print("l1_delta:\n", l1_delta)
#print("error:", l1_error, "\nderivative:", sigmoid(l1, True), "\ndelta:", l1_delta, "\n", "-"*10, "\n")
# update weights
Wo += np.dot(X.T, l1_delta)
print("l1:\n", l1)
```
The dots in $l1$ represent the lines in the graphic below. The lines represent the slope of the sigmoid in the particular position. The slope is highest with a value $x = 0$ (blue dot). It is rather shallow with $x = 2$ (green dot), and not so shallow and not as high with $x = -1$. All derivatives are between $0$ and $1$, of course, that is, no slope or a maximal slope of $1$. There is no negative slope in a sigmoid function.
<img src="http://s8.picofile.com/file/8339770734/sigmoid_deriv_2.png" style="max-width:100%; width: 50%; max-width: none">
The matrix $l1\_error$ is a 4 by 1 matrix (4 rows, 1 column). The derivative matrix $sigmoid\_prime(l1)$ is also a 4 by 1 matrix. The element-wise product $l1\_delta$ is therefore also a 4 by 1 matrix.
The product of the error and the slopes **reduces the error of high confidence predictions**. When the sigmoid slope is very shallow, the network produced a very high or a very low value, that is, it was rather confident. If the network guessed something close to $x=0, y=0.5$, it was not very confident. Such low-confidence predictions are updated most significantly, while the confident, peripheral scores are multiplied by a number closer to $0$ and change only slightly.
In the prediction line $l1 = sigmoid(np.dot(X, Wo))$ we compute the dot-product of the input vectors with the weights and compute the sigmoid on the sums.
The result of the dot-product has as many rows as the first matrix ($X$) and as many columns as the second matrix ($Wo$).
In the computation of the difference between the true (or gold) values in $y$ and the "guessed" values in $l1$ we have an estimate of the miss.
An example computation for the input $[ 1, 0, 1 ]$ and the weights $[ 9.5, 0.2, -0.1 ]$ and an output of $0.99$: If $y = 1$, then $l1\_error = y - l1 = 0.01$, and $l1\_delta = 0.01 * tiny\_deriv$:
<img src="http://s8.picofile.com/file/8339770792/toy_network_deriv.png" style="max-width:100%; width: 40%; max-width: none">
## 7-21-1 More Complex Example with Backpropagation
Consider now a more complicated example where no column has a correlation with the output:
| Input | Output |
| ------ |:------:|
| 0 0 1 | 0 |
| 0 1 1 | 1 |
| 1 0 1 | 1 |
| 1 1 1 | 0 |
The pattern here is our XOR pattern or problem: If there is a $1$ in either column $1$ or $2$, but not in both, the output is $1$ (XOR over column $1$ and $2$).
From our discussion of the XOR problem, we remember that this is a *non-linear pattern*: the output corresponds to a **combination of inputs** rather than to any single input.
To cope with this problem, we need a network with another layer, that is a layer that will combine and transform the input, and an additional layer will map it to the output. We will add a *hidden layer* with randomized weights and then train those to optimize the output probabilities of the table above.
We will define a new $X$ input matrix that reflects the above table:
```
X = np.array([[0, 0, 1],
[0, 1, 1],
[1, 0, 1],
[1, 1, 1]])
print(X)
```
We also define a new output matrix $y$:
```
y = np.array([[ 0, 1, 1, 0]]).T
print(y)
```
We initialize the random number generator with a constant again:
```
np.random.seed(1)
```
Assuming that our 3 inputs are mapped to 4 hidden-layer ($Wh$) neurons, we have to initialize the hidden layer weights in a 3 by 4 matrix. The output layer ($Wo$) is a single neuron that is connected to the hidden layer, thus the output layer weight matrix is a 4 by 1 matrix:
```
n_inputs = 3
n_hidden_neurons = 4
n_output_neurons = 1
Wh = np.random.random( (n_inputs, n_hidden_neurons) ) * np.sqrt(2.0/n_inputs)
Wo = np.random.random( (n_hidden_neurons, n_output_neurons) ) * np.sqrt(2.0/n_hidden_neurons)
print("Wh:\n", Wh)
print("Wo:\n", Wo)
```
We will now loop 100,000 times to optimize the weights:
```
for i in range(100000):
l1 = sigmoid(np.dot(X, Wh))
l2 = sigmoid(np.dot(l1, Wo))
l2_error = y - l2
if (i % 10000) == 0:
print("Error:", np.mean(np.abs(l2_error)))
# gradient, changing towards the target value
l2_delta = l2_error * sigmoid_prime(l2)
# compute the l1 contribution by value to the l2 error, given the output weights
l1_error = l2_delta.dot(Wo.T)
# direction of the l1 target:
# in what direction is the target l1?
l1_delta = l1_error * sigmoid_prime(l1)
Wo += np.dot(l1.T, l2_delta)
Wh += np.dot(X.T, l1_delta)
print("Wo:\n", Wo)
print("Wh:\n", Wh)
```
The new computation in this new loop is $l1\_error = l2\_delta.dot(Wo.T)$, a **confidence weighted error** from $l2$ to compute an error for $l1$. The computation sends the error across the weights from $l2$ to $l1$. The result is a **contribution weighted error**, because we learn how much each node value in $l1$ **contributed** to the error in $l2$. This step is called **backpropagation**. We update $Wh$ using the same steps we did in the 2 layer implementation.
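Putting the pieces together in one self-contained cell (same architecture and update rules as above, but using the symmetric $2 \cdot rand - 1$ initialization shown commented out earlier, which is the classic setup for this example), we can check that the trained network reproduces the XOR column:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_prime(a):
    # derivative of the sigmoid, expressed in terms of its output a
    return a * (1 - a)

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
Wh = 2 * np.random.random((3, 4)) - 1   # symmetric, zero-mean initialization
Wo = 2 * np.random.random((4, 1)) - 1

for i in range(60000):
    l1 = sigmoid(np.dot(X, Wh))         # hidden layer
    l2 = sigmoid(np.dot(l1, Wo))        # output layer
    l2_delta = (y - l2) * sigmoid_prime(l2)
    l1_delta = l2_delta.dot(Wo.T) * sigmoid_prime(l1)   # backpropagated error
    Wo += np.dot(l1.T, l2_delta)
    Wh += np.dot(X.T, l1_delta)

pred = (l2 > 0.5).astype(int)
print('final mean error:', float(np.mean(np.abs(y - l2))))
print('predictions:', pred.ravel())
```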
```
from sklearn import datasets
iris = datasets.load_iris()
X_iris = iris.data
y_iris = iris.target
plt.figure('sepal')
colormarkers = [ ['red','s'], ['greenyellow','o'], ['blue','x']]
for i in range(len(colormarkers)):
px = X_iris[:, 0][y_iris == i]
py = X_iris[:, 1][y_iris == i]
plt.scatter(px, py, c=colormarkers[i][0], marker=colormarkers[i][1])
plt.title('Iris Dataset: Sepal width vs sepal length')
plt.legend(iris.target_names)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.figure('petal')
for i in range(len(colormarkers)):
px = X_iris[:, 2][y_iris == i]
py = X_iris[:, 3][y_iris == i]
plt.scatter(px, py, c=colormarkers[i][0], marker=colormarkers[i][1])
plt.title('Iris Dataset: petal width vs petal length')
plt.legend(iris.target_names)
plt.xlabel('Petal length')
plt.ylabel('Petal width')
plt.show()
```
-----------------
<a id="54"></a> <br>
# 8- Conclusion
In this kernel, I have tried to cover all parts of the **Machine Learning** process with a variety of Python packages. I know there are still some problems, so I hope to get your feedback to improve it.
You can follow me on:
> #### [ GitHub](https://github.com/mjbahmani)
--------------------------------------
**I hope you find this kernel helpful and some <font color="red"><b>UPVOTES</b></font> would be very much appreciated**
<a id="55"></a> <br>
-----------
# 9- References
* [1] [Iris image](https://rpubs.com/wjholst/322258)
* [2] [IRIS](https://archive.ics.uci.edu/ml/datasets/iris)
* [3] [https://skymind.ai/wiki/machine-learning-workflow](https://skymind.ai/wiki/machine-learning-workflow)
* [4] [IRIS-wiki](https://archive.ics.uci.edu/ml/datasets/iris)
* [5] [Problem-define](https://machinelearningmastery.com/machine-learning-in-python-step-by-step/)
* [6] [Sklearn](http://scikit-learn.org/)
* [7] [machine-learning-in-python-step-by-step](https://machinelearningmastery.com/machine-learning-in-python-step-by-step/)
* [8] [Data Cleaning](http://wp.sigmod.org/?p=2288)
* [9] [competitive data science](https://www.coursera.org/learn/competitive-data-science/)
-------------
# Use PyTorch to recognize hand-written digits with Watson Machine Learning REST API
This notebook contains steps and code to demonstrate support of PyTorch Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring.
Some familiarity with cURL is helpful. This notebook uses cURL examples.
## Learning goals
The learning goals of this notebook are:
- Working with Watson Machine Learning experiments to train Deep Learning models.
- Downloading computed models to local storage.
- Online deployment and scoring of trained model.
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Model definition](#model_definition)
3. [Experiment Run](#run)
4. [Historical runs](#runs)
5. [Deploy and Score](#deploy_and_score)
6. [Cleaning](#cleaning)
7. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Contact your Cloud Pak for Data administrator and ask for your account credentials
### Connection to WML
Authenticate to the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username`, and your `password`.
```
%env USERNAME=
%env PASSWORD=
%env DATAPLATFORM_URL=
%env SPACE_ID=
```
<a id="wml_token"></a>
### Getting WML authorization token for further cURL calls
<a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curl#curl-token" target="_blank" rel="noopener no referrer">Example of cURL call to get WML token</a>
```
%%bash --out token
token=$(curl -sk -X GET \
--user $USERNAME:$PASSWORD \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v1/preauth/validateAuth")
token=${token#*accessToken\":\"}
token=${token%%\"*}
echo $token
%env TOKEN=$token
```
<a id="space_creation"></a>
### Space creation
**Tip:** If you do not have a `space` already created, please convert the three cells below to `code` and run them.
First of all, you need to create a `space` that will be used in all of your further cURL calls.
If you do not have a `space` already created, below is the cURL call to create one.
<a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud/#/Spaces/spaces_create"
target="_blank" rel="noopener no referrer">Space creation</a>
Space creation is asynchronous. This means that you need to check space creation status after creation call.
Make sure that your newly created space is `active`.
<a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud/#/Spaces/spaces_get"
target="_blank" rel="noopener no referrer">Get space information</a>
<a id="model_definition"></a>
<a id="experiment_definition"></a>
## 2. Model definition
This section provides samples about how to store model definition via cURL calls.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_create"
target="_blank" rel="noopener no referrer">Store a model definition for Deep Learning experiment</a>
```
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "PyTorch Hand-written Digit Recognition", "space_id": "'"$SPACE_ID"'", "description": "PyTorch Hand-written Digit Recognition", "tags": ["DL", "PyTorch"], "version": "v1", "platform": {"name": "python", "versions": ["3.7"]}, "command": "pytorch_v_1.1_mnist_onnx.py --epochs 10 --debug-level debug"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$DATAPLATFORM_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
```
<a id="model_preparation"></a>
### Model preparation
Download the file with the PyTorch code. You can either download it via the link below or run the cell below the link.
<a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/definitions/pytorch/mnist/pytorch-onnx_v1_3.zip"
target="_blank" rel="noopener no referrer">Download pytorch-model.zip</a>
```
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/definitions/pytorch/mnist/pytorch_onnx_v1_3.zip \
-O pytorch-onnx_v1_3.zip
```
**Tip**: Convert the cell below to code and run it to see the model definition's code.
<a id="def_upload"></a>
### Upload model for the model definition
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_upload_model"
target="_blank" rel="noopener no referrer">Upload model for the model definition</a>
```
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@pytorch-onnx_v1_3.zip" \
"$DATAPLATFORM_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
```
<a id="run"></a>
## 3. Experiment run
This section provides samples showing how to trigger a Deep Learning experiment via cURL calls.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_create"
target="_blank" rel="noopener no referrer">Schedule a training job for Deep Learning experiment</a>
Specify the source files folder where you have stored your training data. The path should point to a local repository on Watson Machine Learning Accelerator that your system administrator has set up for your use.
**Action:**
Change `training_data_references: location: path: ...`
```
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "fs", "connection": {}, "location": {"path": "pytorch-mnist"}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {}, "location": {"path": "spaces/'"$SPACE_ID"'/assets/experiment"}, "type": "fs"}, "tags": [{"value": "tags_pytorch", "description": "Tags PyTorch"}], "name": "PyTorch hand-written Digit Recognition", "description": "PyTorch hand-written Digit Recognition", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "command": "pytorch_v_1.1_mnist_onnx.py --epochs 10 --debug-level debug", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "pytorch-onnx_1.3-py3.7"}, "parameters": {"name": "PyTorch_mnist", "description": "PyTorch mnist recognition"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$DATAPLATFORM_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
```
<a id="training_details"></a>
### Get training details
Training is an asynchronous endpoint. If you want to monitor training status and details,
use a GET method and specify which training to monitor by its training ID.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_get"
target="_blank" rel="noopener no referrer">Get information about training job</a>
### Get training status
```
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
```
Please make sure that the training is completed before you go to the next sections.
Monitor the `state` of your training by running the above cell a couple of times.
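Instead of re-running the status cell by hand, you could poll in a loop. The sketch below is hedged: `get_status` is a stand-in function that just returns `completed` so the cell terminates; in practice you would replace its body with the curl call and parameter-expansion parsing from the status cell above.
```shell
# Polling sketch: loop until the training reaches a terminal state.
# get_status is a stand-in for the curl call and parsing shown above.
get_status() { echo "completed"; }

while true; do
  STATUS=$(get_status)
  echo "state: $STATUS"
  case "$STATUS" in
    completed|failed|canceled) break ;;
  esac
  sleep 30
done
```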
<a id="runs"></a>
## 4. Historical runs
In this section you will see cURL examples showing how to get information about historical training runs.
The output should be similar to the output from training creation, but you should see more training entries.
Listing trainings:
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_list"
target="_blank" rel="noopener no referrer">Get list of historical training jobs information</a>
```
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
```
<a id="training_cancel"></a>
### Cancel training run
**Tip:** If you want to cancel your training, please convert the cell below to `code`, specify the training ID, and run it.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_delete"
target="_blank" rel="noopener no referrer">Canceling training</a>
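A sketch of what that converted cell could contain is below (a dry run: the command is echoed, not executed, and the training ID is a placeholder). Note that, per the linked API, a plain DELETE cancels a running job; whether metadata is also removed is controlled by a `hard_delete` query parameter, which is an assumption worth verifying against the reference.
```shell
# Dry-run sketch: cancel a training run with a DELETE call (placeholder ID).
TRAINING_ID_TO_CANCEL="your-training-id"
echo curl -sk -X DELETE \
  --header 'Authorization: Bearer $TOKEN' \
  "\$DATAPLATFORM_URL/ml/v4/trainings/$TRAINING_ID_TO_CANCEL?space_id=\$SPACE_ID&version=2020-08-01"
```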
---
<a id="deploy_and_score"></a>
## 5. Deploy and Score
In this section you will learn how to deploy and score a pipeline model as a web service using your WML instance.
Before creating a deployment, you need to store your model in the WML repository.
The cURL call below shows how to do it.
Download `request.json`, which contains the repository request payload for storing the model.
```
%%bash --out request_json
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/asset_files/experiment/$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/request.json?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
%env MODEL_PAYLOAD=$request_json
```
<a id="model_store"></a>
### Store Deep Learning model
Store information about your model in the WML repository.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Models/models_create"
target="_blank" rel="noopener no referrer">Model storing</a>
```
%%bash --out model_details
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$DATAPLATFORM_URL/ml/v4/models?version=2020-08-01&space_id=$SPACE_ID"
%env MODEL_DETAILS=$model_details
%%bash --out model_id
echo $MODEL_DETAILS | awk -F '"id": ' '{ print $5 }' | cut -d '"' -f 2
%env MODEL_ID=$model_id
```
<a id="deployment_creation"></a>
### Deployment creation
A Deep Learning online deployment creation is presented below.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_create"
target="_blank" rel="noopener no referrer">Create deployment</a>
```
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "PyTorch Mnist deployment", "description": "PyTorch model to recognize hand-written digits","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$DATAPLATFORM_URL/ml/v4/deployments?version=2020-08-01"
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$DATAPLATFORM_URL/ml/v4/deployments?version=2020-08-01" \
| grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
```
<a id="deployment_details"></a>
### Get deployment details
As the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going to the next points.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_get"
target="_blank" rel="noopener no referrer">Get deployment details</a>
```
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$DATAPLATFORM_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
```
<a id="input_score"></a>
### Prepare scoring input data
**Hint:** You may need to install numpy using the following command: `!pip install numpy`
```
!wget -q https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/data/mnist/mnist.npz
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = [test_mnist[0].tolist()]
image_2 = [test_mnist[1].tolist()]
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
```
<a id="webservice_score"></a>
### Scoring of a webservice
If you want to make a `score` call on your deployment, use the method below:
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployment%20Jobs/deployment_jobs_create"
target="_blank" rel="noopener no referrer">Create deployment job</a>
```
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "'"$SPACE_ID"'","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$DATAPLATFORM_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
```
<a id="deployments_list"></a>
### Listing all deployments
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_list"
target="_blank" rel="noopener no referrer">List deployments details</a>
```
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$DATAPLATFORM_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
```
<a id="cleaning"></a>
## 6. Cleaning section
The section below is useful when you want to clean up all of your previous work within this notebook.
Just convert the cells below to `code` and run them.
<a id="training_delete"></a>
### Delete training run
**Tip:** You can completely delete a training run with its metadata.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_delete"
target="_blank" rel="noopener no referrer">Deleting training</a>
<a id="deployment_delete"></a>
### Deleting deployment
**Tip:** You can delete existing deployment by calling DELETE method.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_delete"
target="_blank" rel="noopener no referrer">Delete deployment</a>
<a id="model_delete"></a>
### Delete model from repository
**Tip:** If you want to completely remove your stored model and model metadata, just use a DELETE method.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Models/models_delete"
target="_blank" rel="noopener no referrer">Delete model from repository</a>
<a id="def_delete"></a>
### Delete model definition
**Tip:** If you want to completely remove your model definition, just use a DELETE method.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_delete"
target="_blank" rel="noopener no referrer">Delete model definition</a>
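The four delete calls above share one pattern. Here is a hedged dry-run sketch that only echoes each command instead of executing it; the IDs are assumed to have been set by earlier cells, and the exact endpoint shapes should be checked against the linked API references.
```shell
# Dry-run sketch: echo the cleanup DELETE calls instead of executing them.
for RESOURCE in \
  "trainings/$TRAINING_ID" \
  "deployments/$DEPLOYMENT_ID" \
  "models/$MODEL_ID" \
  "model_definitions/$MODEL_DEFINITION_ID"
do
  echo curl -sk -X DELETE \
    --header 'Authorization: Bearer $TOKEN' \
    "\$DATAPLATFORM_URL/ml/v4/$RESOURCE?space_id=\$SPACE_ID&version=2020-08-01"
done
```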
<a id="summary"></a>
## 7. Summary and next steps
You successfully completed this notebook!
You learned how to use `cURL` calls to store, deploy and score a PyTorch Deep Learning model in WML.
### Authors
**Jan Sołtysik**, Intern in Watson Machine Learning at IBM
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_07a import *
path = untar_data(URLs.IMAGENETTE_160)
path
#export
import os, PIL, mimetypes
Path.ls = lambda x: list(x.iterdir())
path.ls()
(path/'val').ls()
path_tench = path/'val'/'n01440764'
img_fn = path_tench.ls()[0]
img_fn
img = PIL.Image.open(img_fn)
img
plt.imshow(img);
import numpy
img_arr = numpy.array(img)
img_arr.shape
img_arr[:10,:10,0]
#export
image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))
' '.join(image_extensions)
#export
def setify(o): return o if isinstance(o, set) else set(listify(o))
test_eq(setify({1}), {1})
test_eq(setify({1,2,1}), {1,2})
test_eq(setify([1,2,1]), {1,2})
test_eq(setify(1), {1})
test_eq(setify(None), set())
test_eq(setify('a'), {'a'})
#export
def _get_files(p, fs, extensions=None):
p = Path(p)
res = [p/f for f in fs if not f.startswith('.') and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions)]
return res
p = [o.name for o in os.scandir(path_tench)]
p[:3]
t = _get_files(path_tench, p, extensions=image_extensions)
t[:3]
#export
def get_files(path, extensions=None, recurse=False, include=None):
path = Path(path)
extensions = setify(extensions)
extensions = {e.lower() for e in extensions}
if recurse:
res = []
for i, (p,d,f) in enumerate(os.walk(path)):
if include is not None and i == 0: d[:] = [o for o in d if o in include]
else: d[:] = [o for o in d if not o.startswith('.')]
res += _get_files(p, f, extensions)
return res
else:
fs = [o.name for o in os.scandir(path) if o.is_file()]
return _get_files(path, fs, extensions)
get_files(path_tench, image_extensions)[:3]
get_files(path, image_extensions, recurse=True)[:3]
all_fns = get_files(path, image_extensions, recurse=True)
len(all_fns)
%timeit -n 10 get_files(path, image_extensions, recurse=True)
```
# Prepare for modeling
## Get files
```
#export
def compose(x, funcs, *args, order_key='_order', **kwargs):
key = lambda o: getattr(o, order_key, 0)
for f in sorted(listify(funcs), key=key): x = f(x, **kwargs)
return x
ListContainer??
#export
class ItemList(ListContainer):
def __init__(self, items, path='.', tfms=None):
super().__init__(items)
self.path, self.tfms = path, tfms
def __repr__(self): return f'{super().__repr__()}\nPath: {self.path}'
def new(self, items, cls=None):
if cls is None: cls = self.__class__
return cls(items, self.path, self.tfms)
def get(self, i): return i
def _get(self, i): return compose(self.get(i), self.tfms)
def __getitem__(self, idx):
res = super().__getitem__(idx)
if isinstance(res, list): return [self._get(o) for o in res]
return self._get(res)
#export
class ImageList(ItemList):
@classmethod
def from_files(cls, path, extensions=None, recurse=True, include=None, **kwargs):
if extensions is None: extensions = image_extensions
return cls(get_files(path, extensions, recurse, include), path, **kwargs)
def get(self, fn): return PIL.Image.open(fn)
#export
class Transform(): _order = 0
class MakeRGB(Transform):
def __call__(self, item): return item.convert('RGB')
def make_rgb(item): return item.convert('RGB')
il = ImageList.from_files(path, tfms=make_rgb)
il
img = il[0]; img
il[:1]
```
## Split validation set
```
fn = il.items[0]; fn
fn.parent.parent.name
#export
def grandparent_splitter(fn, train_name='train', valid_name='valid'):
gp = fn.parent.parent.name
return True if gp == valid_name else False if gp == train_name else None
def split_by_func(items, f):
mask = [f(o) for o in items]
t = [o for o,m in zip(items, mask) if m == False]
v = [o for o,m in zip(items, mask) if m == True]
return t,v
splitter = partial(grandparent_splitter, valid_name='val')
%time train, valid = split_by_func(il, splitter)
len(train), len(valid)
#export
class SplitData():
def __init__(self, train, valid): self.train, self.valid = train, valid
def __getattr__(self, k): return getattr(self.train, k)
def __setstate__(self, data:Any): self.__dict__.update(data)
@classmethod
def split_by_func(cls, il, f):
lists = map(il.new, split_by_func(il.items, f))
return cls(*lists)
def __repr__(self): return f'{self.__class__.__name__}\nTrain: {self.train}\nValid: {self.valid}'
sd = SplitData.split_by_func(il, splitter); sd
```
## Labeling
```
#export
from collections import OrderedDict
def uniqueify(x, sort=False):
res = list(OrderedDict.fromkeys(x).keys())
if sort: res.sort()
return res
#export
class Processor():
def process(self, items): return items
class CategoryProcessor(Processor):
def __init__(self): self.vocab = None
def __call__(self, items):
if self.vocab is None:
self.vocab = uniqueify(items)
self.otoi = {v:k for k,v in enumerate(self.vocab)}
return [self.proc1(o) for o in items]
def proc1(self, item): return self.otoi[item]
def deprocess(self, idxs):
assert self.vocab is not None
return [self.deproc1(idx) for idx in idxs]
def deproc1(self, idx): return self.vocab[idx]
#export
def parent_labeler(fn): return fn.parent.name
def _label_by_func(il, f, cls=ItemList): return cls([f(o) for o in il.items], path=il.path)
#export
class LabeledData():
def process(self, il, proc): return il.new(compose(il.items, proc))
def __init__(self, x, y, proc_x=None, proc_y=None):
self.x, self.y = self.process(x, proc_x), self.process(y, proc_y)
self.proc_x, self.proc_y = proc_x, proc_y
def __repr__(self): return f'{self.__class__.__name__}\nx: {self.x}\ny: {self.y}\n'
def __getitem__(self, idx): return self.x[idx], self.y[idx]
def __len__(self): return len(self.x)
def x_obj(self, idx): return self.obj(self.x, idx, self.proc_x)
def y_obj(self, idx): return self.obj(self.y, idx, self.proc_y)
def obj(self, items, idx, procs):
isint = isinstance(idx, int) or (isinstance(idx, torch.LongTensor) and not idx.ndim)
item = items[idx]
for proc in reversed(listify(procs)):
item = proc.deproc1(item) if isint else proc.deprocess(item)
return item
@classmethod
def label_by_func(cls, il, f, proc_x=None, proc_y=None):
return cls(il, _label_by_func(il, f), proc_x, proc_y)
def label_by_func(sd, f, proc_x=None, proc_y=None):
train = LabeledData.label_by_func(sd.train, f, proc_x, proc_y)
valid = LabeledData.label_by_func(sd.valid, f, proc_x, proc_y)
return SplitData(train, valid)
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
ll
assert ll.train.proc_y is ll.valid.proc_y
ll.train.y
ll.train.y.items[0], ll.train.y_obj(0), ll.train.y_obj(range(2))
```
## Transform to tensor
```
ll.train[0]
ll.train[0][0]
ll.train[0][0].resize((128,128))
#export
class ResizeFixed(Transform):
_order = 10
def __init__(self, size):
if isinstance(size, int): size = (size,size)
self.size = size
def __call__(self, item): return item.resize(self.size, PIL.Image.BILINEAR)
def to_byte_tensor(item):
res = torch.ByteTensor(torch.ByteStorage.from_buffer(item.tobytes()))
w, h = item.size
return res.view(h,w,-1).permute(2,0,1)
to_byte_tensor._order = 20
def to_float_tensor(item): return item.float().div_(255.)
to_float_tensor._order = 30
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, splitter)
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
#export
def show_image(im, figsize=(3,3)):
plt.figure(figsize=figsize)
plt.axis('off')
plt.imshow(im.permute(1,2,0))
x, y = ll.train[0]
x.shape
show_image(x)
```
# Modeling
```
bs = 64
train_dl, valid_dl = get_dls(ll.train, ll.valid, bs, num_workers=4)
x,y = next(iter(train_dl))
x.shape
show_image(x[0])
ll.train.proc_y.vocab[y[0]]
y
#export
class DataBunch():
def __init__(self, train_dl, valid_dl, c_in=None, c_out=None):
self.train_dl, self.valid_dl, self.c_in, self.c_out = train_dl, valid_dl, c_in, c_out
@property
def train_ds(self): return self.train_dl.dataset
@property
def valid_ds(self): return self.valid_dl.dataset
#export
def databunchify(sd, bs, c_in=None, c_out=None, **kwargs):
dls = get_dls(sd.train, sd.valid, bs, **kwargs)
return DataBunch(*dls, c_in=c_in, c_out=c_out)
SplitData.to_databunch = databunchify
path = untar_data(URLs.IMAGENETTE_160)
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
```
## Model
```
cbfs = [CudaCallback, Recorder,
partial(AvgStatsCallback, accuracy)]
m, s = x.mean((0,2,3)).cuda(), x.std((0,2,3)).cuda()
m, s
#export
def normalize_chan(x, mean, std):
return (x - mean[...,None,None]) / std[...,None,None]
_m = tensor([0.47, 0.48, 0.45])
_s = tensor([0.29, 0.28, 0.30])
norm_imagenette = partial(normalize_chan, mean=_m.cuda(), std=_s.cuda())
cbfs.append(partial(BatchTransformXCallback, norm_imagenette))
nfs = [64,64,128,256]
#export
import math
def prev_pow_2(x): return 2**math.floor(math.log2(x))
#export
def get_cnn_layers(data, nfs, layer, **kwargs):
def f(ni, nf, stride=2): return layer(ni, nf, 3, stride=stride, **kwargs)
l1 = data.c_in
l2 = prev_pow_2(l1*3*3)
layers = [f(l1, l2, stride=1),
f(l2, l2*2, stride=2),
f(l2*2, l2*4, stride=2)]
nfs = [l2*4] + nfs
layers += [f(nfs[i], nfs[i+1]) for i in range(len(nfs) - 1)]
layers += [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c_out)]
return layers
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
sched = combine_scheds([0.3, 0.7], cos_1cycle_anneal(0.1, 0.3, 0.05))
learn, run = get_learn_run(nfs, data, 0.2, conv_layer, cbs=cbfs+[partial(ParamScheduler, 'lr', sched)])
#export
def model_summary(run, learn, data, find_all=False):
xb, yb = get_batch(data.valid_dl, run)
device = next(learn.model.parameters()).device
xb, yb, = xb.to(device), yb.to(device)
mods = find_modules(learn.model, is_lin_layer) if find_all else learn.model.children()
f = lambda hook,mod,inp,outp: print(f'{mod}\n{outp.shape}\n')
with Hooks(mods, f) as hooks: learn.model(xb)
model_summary(run, learn, data)
%time run.fit(5, learn)
```

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fshorts&branch=master&subPath=Matplotlib.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
# Plotting with matplotlib
Matplotlib is a rich collection of commands to create mathematical plots in a Jupyter notebook. It is best to search online for extensive examples.
In this notebook, we will just touch on the basics, to get you started. Read online to get many more examples and details.
Inside Matplotlib is a module called pyplot that does most of the work for us. This is the module that will be loaded into the notebook before plotting.
It is also important to tell the notebook that you want your plots to appear "inline", which is to say they will appear in a cell inside your notebook. The following two commands set this up for you.
```
%matplotlib inline
from matplotlib.pyplot import *
```
## Example 1. A simple plot
We plot five data points, connecting with lines.
Note the semicolon after the plot command. It suppresses an extraneous text message in the output.
```
plot([1,2,2,3,5]);
```
## Example 2. Another simple plot
We plot five data points, marked with circles.
```
plot([1,2,2,3,5],'o');
```
## Example 3. Another simple plot, with x and y values
We plot five data points, y versus x, marked with circles.
Note the x axis now starts at coordinate x=1.
```
x = [1,2,3,4,5]
y = [1,2,2,3,5]
plot(x,y,'o');
```
## Example 4. We can also do bar plots
We plot five data points, y versus x, as a bar chart. We also will add a title and axis labels.
```
x = [1,2,3,4,5]
y = [1,2,2,3,5]
bar(x,y);
title("Hey, this is my bar chart");
xlabel("x values");
ylabel("y values");
```
## Example 5. Object oriented plotting
For more precise control of your plot, you can create figure and axis objects and modify them as necessary. It is best to read up on this online, but here is the basic example.
We plot five data points, y versus x, by attaching them to the figure or axis object, as appropriate.
```
fig, ax = subplots()
ax.plot([1,2,3,4,5], [1,2,2,3,5], 'o')
ax.set_title('Object oriented version of plotting');
show()
```
## Example 6: More mathematical plotting
Matplotlib likes to work with Numpy (numerical Python), which gives us arrays and mathematical functions like sine and cosine.
We must first import the numpy module. I'll do it in the next cell, but keep in mind you can also include it up above when we loaded in matplotlib.
We then create an array of x values, running from 0 to 1, then evaluate the sine function on those values. We then do an x,y plot of the function.
As follows:
```
from numpy import *
x = linspace(0,1)
y = sin(2*pi*x)
plot(x,y);
title("One period of the sine function")
xlabel("x values")
ylabel("sin(2 pi x) values");
```
## Example 7: Math notation in labels
Just for fun, we note that LaTeX can be used in the plot labels, so you can make your graphs look more professional. The key is to use the "$" delimiter to identify a section of text that is to be typeset as if in LaTeX.
Check out these labels:
```
x = linspace(0,1)
y = sin(2*pi*x)
plot(x,y);
title("One period of the function $\sin(2 \pi x)$")
xlabel("$x$ values")
ylabel("$\sin(2 \pi x)$ values");
```
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Writing Low-Level TensorFlow Code
**Learning Objectives**
1. Practice defining and performing basic operations on constant Tensors
2. Use Tensorflow's automatic differentiation capability
3. Learn how to train a linear regression from scratch with TensorFlow
## Introduction
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with Python built-in lists and NumPy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is `tf.GradientTape`, which we will describe.
Lastly, we will create a simple training loop to learn the weights of a 1-dimensional linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non-linear model, forcing us to manually engineer non-linear features to improve our linear model's performance.
Each learning objective will correspond to a #TODO in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/write_low_level_code.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1 || pip install tensorflow==2.1
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
print(tf.__version__)
```
## Operations on Tensors
### Variables and Constants
Tensors in TensorFlow are either constant (`tf.constant`) or variables (`tf.Variable`).
Constant values can not be changed, while variable values can be.
The main difference is that instances of `tf.Variable` have methods allowing us to change
their values while tensors constructed with `tf.constant` don't have these methods, and
therefore their values can not be changed. When you want to change the value of a `tf.Variable`
`x` use one of the following methods:
* `x.assign(new_value)`
* `x.assign_add(value_to_be_added)`
* `x.assign_sub(value_to_be_subtracted)`
```
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name="my_variable")
x.assign(45.8)
x
x.assign_add(4)
x
x.assign_sub(3)
x
```
### Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does:
* `tf.add` allows to add the components of a tensor
* `tf.multiply` allows us to multiply the components of a tensor
* `tf.subtract` allows us to subtract the components of a tensor
* `tf.math.*` contains the usual math operations to be applied on the components of a tensor
* and many more...
Most of the standard arithmetic operations (`tf.add`, `tf.subtract`, etc.) are overloaded by the usual corresponding arithmetic symbols (`+`, `-`, etc.).
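The mechanism behind this is ordinary Python operator overloading: `tf.Tensor` defines dunder methods such as `__add__` that delegate to the corresponding TensorFlow op. A minimal, TF-free sketch of the idea (the `Toy` class is invented for illustration):
```python
# Toy illustration of operator overloading: '+' delegates to an explicit add
# function, the way tf.Tensor.__add__ delegates to tf.add.
def toy_add(a, b):
    return Toy(a.v + b.v)

class Toy:
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        return toy_add(self, other)

print((Toy(2) + Toy(3)).v)  # 5
```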
**Lab Task #1:** Performing basic operations on Tensors
1. Compute the sum of the constants `a` and `b` below using `tf.add` and `+` and verify both operations produce the same values.
2. Compute the product of the constants `a` and `b` below using `tf.multiply` and `*` and verify both operations produce the same values.
3. Compute the exponential of the constant `a` using `tf.math.exp`. Note, you'll need to specify the type for this operation.
```
# TODO 1a
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1b
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1c
# tf.math.exp expects floats so we need to explicitly give the type
a = # TODO -- Your code here.
b = # TODO -- Your code here.
print("b:", b)
```
### NumPy Interoperability
In addition to native TF tensors, TensorFlow operations can take native Python types and NumPy arrays as operands.
```
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py)
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np)
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf)
```
You can convert a native TF tensor to a NumPy array using .numpy()
```
a_tf.numpy()
```
## Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
### Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
```
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print(f"X:{X}")
print(f"Y:{Y}")
```
Let's also create a test dataset to evaluate our models:
```
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print(f"X_test:{X_test}")
print(f"Y_test:{Y_test}")
```
#### Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
```
y_mean = Y.numpy().mean()
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
```
Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
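The formula above can be checked with a tiny toy computation (the three points below are made up for illustration):

```python
import numpy as np

# MSE over m = 3 points, matching the formula above
Y_hat = np.array([1.0, 2.0, 4.0])
Y = np.array([1.0, 2.0, 2.0])
mse = np.mean((Y_hat - Y) ** 2)  # (0 + 0 + 4) / 3
```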
For this simple model the loss is then:
```
errors = (Y_hat - Y_test) ** 2
loss = tf.reduce_mean(errors)
loss.numpy()
```
This value for the MSE loss gives us a baseline against which to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model:
```
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
```
### Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of a `tf.GradientTape` instance, which will record gradient information:
```python
with tf.GradientTape() as tape:
loss = # computation
```
This will allow us to later compute the gradients of any tensor computed within the `tf.GradientTape` context with respect to instances of `tf.Variable`:
```python
gradients = tape.gradient(loss, [w0, w1])
```
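As a quick, self-contained sketch of these mechanics (toy values, unrelated to the lab variables):

```python
import tensorflow as tf

w0 = tf.Variable(2.0)
w1 = tf.Variable(1.0)
with tf.GradientTape() as tape:
    loss = w0 ** 2 + 3.0 * w1  # any computation involving the variables
dw0, dw1 = tape.gradient(loss, [w0, w1])
# d(loss)/dw0 = 2 * w0 = 4.0 and d(loss)/dw1 = 3.0
```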
We illustrate this procedure by computing the loss gradients with respect to the model weights:
**Lab Task #2:** Complete the function below to compute the loss gradients with respect to the model weights `w0` and `w1`.
```
# TODO 2
def compute_gradients(X, Y, w0, w1):
# TODO -- Your code here.
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
```
### Training Loop
Here we have a very simple training loop that converges. Note that, for the sake of simplicity, we are ignoring best practices like batching and random weight initialization.
**Lab Task #3:** Complete the `for` loop below to train a linear regression.
1. Use `compute_gradients` to compute `dw0` and `dw1`.
2. Then, re-assign the value of `w0` and `w1` using the `.assign_sub(...)` method with the computed gradient values and the `LEARNING_RATE`.
3. Finally, for every 100th step, we'll compute and print the `loss`. Use the `loss_mse` function we created above to compute the `loss`.
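A one-line illustration of the `.assign_sub(...)` update (a gradient-descent step with made-up numbers):

```python
import tensorflow as tf

learning_rate = 0.5
grad = 4.0  # pretend this came from tape.gradient
w = tf.Variable(10.0)
w.assign_sub(learning_rate * grad)  # w <- w - lr * grad, i.e. 10 - 2 = 8
```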
```
# TODO 3
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
dw0, dw1 = # TODO -- Your code here.
if step % 100 == 0:
loss = # TODO -- Your code here.
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
```
Now let's compare the test loss for this linear regression to the test loss of the baseline model that always outputs the mean of the training set:
```
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
```
This is indeed much better!
## Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
```
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-(X ** 2))
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
f1 = tf.ones_like(X) # Bias.
f2 = X
f3 = tf.square(X)
f4 = tf.sqrt(X)
f5 = tf.exp(X)
return tf.stack([f1, f2, f3, f4, f5], axis=1)
def predict(X, W):
return tf.squeeze(X @ W, -1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, W)  # use the features passed in, not the global Xf
    return tape.gradient(loss, W)
STEPS = 2000
LEARNING_RATE = 0.02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
    dW = compute_gradients(Xf, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print(f"STEP: {STEPS} MSE: {loss_mse(Xf, Y, W)}")
plt.figure()
plt.plot(X, Y, label="actual")
plt.plot(X, predict(Xf, W), label="predicted")
plt.legend()
```
Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Classification
## MNIST dataset
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)  # as_frame=False returns NumPy arrays (newer scikit-learn defaults to DataFrames)
mnist.keys()
X, y = mnist['data'], mnist['target']
print(X.shape)
print(y.shape)
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
idx = 2
some_digit = X[idx]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap='binary')
plt.axis('off')
plt.show()
y[idx]
import numpy as np
y = y.astype(np.uint8)
split_idx = 60_000
X_train, X_test, y_train, y_test = X[:split_idx], X[split_idx:], y[:split_idx], y[split_idx:]
```
## Binary classifier
```
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
```
## Performance measurement
### Cross-validation
```
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring='accuracy', verbose=2, n_jobs=-1)
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42, shuffle=True)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
X_test_fold = X_train[test_index]
y_test_fold = y_train_5[test_index]
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
return self
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring='accuracy', verbose=2, n_jobs=-1)
```
### Confusion matrix
```
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, verbose=2, n_jobs=-1)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
```
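The entries of the confusion matrix can be read off directly, and precision and recall fall out of them; a tiny sketch with toy labels (not the MNIST data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([0, 1, 0, 1, 1, 0])
# rows = actual class, columns = predicted class
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)  # 2 / 3
recall = tp / (tp + fn)     # 2 / 3
```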
### Precision and recall
```
from sklearn.metrics import precision_score, recall_score
print('Precision: ', precision_score(y_train_5, y_train_pred))
print('Recall: ', recall_score(y_train_5, y_train_pred))
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method='decision_function',
n_jobs=-1, verbose=2)
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], 'b--', label='Precision')
plt.plot(thresholds, recalls[:-1], 'g-', label='Recall')
plt.grid(True)
plt.xlabel('Threshold')
plt.xlim((-50_000, 45_000))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.show()
thresholds_90_precision = thresholds[np.argmax(precisions >= 0.90)]
thresholds_90_precision
y_train_pred_90 = (y_scores > thresholds_90_precision)
print('Precision: ',precision_score(y_train_5, y_train_pred_90))
print('Recall: ',recall_score(y_train_5, y_train_pred_90))
```
### ROC curve
```
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plot_roc_curve(fpr, tpr)
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_proba_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method='predict_proba',
n_jobs=-1, verbose=2)
y_scores_forest = y_proba_forest[:, 1]
fpr_forest, tpr_forest, threshold_forest = roc_curve(y_train_5, y_scores_forest)
plt.plot(fpr, tpr, 'b:', label='SGD')
plot_roc_curve(fpr_forest, tpr_forest, 'Random forest')
plt.legend(loc='lower right')
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
```
## Multiclass classification
```
# LONG
from sklearn.svm import SVC
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
svm_clf.predict([some_digit])
some_digit_scores = svm_clf.decision_function([some_digit])
some_digit_scores
print(np.argmax(some_digit_scores))
print(svm_clf.classes_)
print(svm_clf.classes_[4])
# LONG
from sklearn.multiclass import OneVsRestClassifier
ovr_clf = OneVsRestClassifier(SVC())
ovr_clf.fit(X_train, y_train)
print(ovr_clf.predict([some_digit]))
print(len(ovr_clf.estimators_))
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
sgd_clf.decision_function([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring='accuracy')
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring='accuracy',
verbose=2, n_jobs=-1)
```
## Error analysis
```
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3,
verbose=2, n_jobs=-1)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
plt.matshow(conf_mx, cmap=plt.cm.gray)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = mpl.cm.binary, **options)
plt.axis("off")
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
plt.show()
```
## Multilabel classification
```
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >=7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3,
verbose=2, n_jobs=-1)
f1_score(y_multilabel, y_train_knn_pred, average='macro')
noise = np.random.randint(0, 100, (len(X_train), 28*28))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 28*28))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
def plot_digit(data):
    plt.imshow(data.reshape(28, 28), cmap='binary')
    plt.axis('off')
some_index = 0  # pick any test-set index to visualize
fig = plt.figure(figsize=(12,6))
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
```
## Tasks
### Task 1 - MNIST acc 97%
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
knn_clf = KNeighborsClassifier()
param_grid = [
{'weights': ['uniform', 'distance'], 'n_neighbors': [ 3, 4, 5]}
]
grid_search = GridSearchCV(knn_clf, param_grid, cv=3,
scoring='accuracy',
n_jobs=-1, verbose=3)
grid_search.fit(X_train, y_train)
final_model = grid_search.best_estimator_
y_pred = final_model.predict(X_test)
from sklearn.metrics import accuracy_score
print(grid_search.best_params_)
print(grid_search.best_score_)
print(accuracy_score(y_test, y_pred))
```
### Task 2 - data augmentation
```
from scipy.ndimage import shift  # scipy.ndimage.interpolation is deprecated
def shift_set(X, vector):
return [shift(img.reshape(28,28), vector, cval=0).flatten() for img in X]
X_train_aug_U = shift_set(X_train, [-1, 0])
X_train_aug_R = shift_set(X_train, [ 0, 1])
X_train_aug_D = shift_set(X_train, [ 1, 0])
X_train_aug_L = shift_set(X_train, [ 0,-1])
X_train_aug = np.concatenate([X_train, X_train_aug_U, X_train_aug_R, X_train_aug_D, X_train_aug_L])
y_train_aug = np.tile(y_train, 5)
print(len(X_train_aug))
print(len(y_train_aug))
knn_clf_aug = KNeighborsClassifier(n_neighbors=4, weights='distance',
n_jobs=-1)
knn_clf_aug.fit(X_train_aug, y_train_aug)
y_pred = knn_clf_aug.predict(X_test)
print(accuracy_score(y_test, y_pred))
```
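A quick sanity check of how `shift` moves pixels (a toy array; the values are made up):

```python
import numpy as np
from scipy.ndimage import shift

img = np.zeros((3, 3))
img[1, 1] = 1.0
moved = shift(img, [0, 1], cval=0)  # shift every pixel one column to the right
# The bright pixel moves from (1, 1) to (1, 2); vacated cells are filled with cval.
```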
# Simple Test between NumPy and Numba
$$
x = \exp(-\Gamma_s d)
$$
```
import numba
import cython
import numexpr
import numpy as np
%load_ext cython
from empymod import filters
from scipy.constants import mu_0 # Magn. permeability of free space [H/m]
from scipy.constants import epsilon_0 # Elec. permittivity of free space [F/m]
res = np.array([2e14, 0.3, 1, 50, 1]) # nlay
freq = np.arange(1, 201)/20. # nfre
off = np.arange(1, 101)*1000 # noff
lambd = filters.key_201_2009().base/off[:, None] # nwav
aniso = np.array([1, 1, 1.5, 2, 1])
epermH = np.array([1, 80, 9, 20, 1])
epermV = np.array([1, 40, 9, 10, 1])
mpermH = np.array([1, 1, 3, 5, 1])
etaH = 1/res + np.outer(2j*np.pi*freq, epermH*epsilon_0)
etaV = 1/(res*aniso*aniso) + np.outer(2j*np.pi*freq, epermV*epsilon_0)
zetaH = np.outer(2j*np.pi*freq, mpermH*mu_0)
Gam = np.sqrt((etaH/etaV)[:, None, :, None] * (lambd*lambd)[None, :, None, :] + (zetaH*etaH)[:, None, :, None])
```
## NumPy
NumPy version to check the result and compare runtimes.
```
def test_numpy(lGam, d):
return np.exp(-lGam*d)
```
## Numba @vectorize
This is exactly the same function as with NumPy; we just added the `@vectorize` decorator.
```
@numba.vectorize('c16(c16, f8)')
def test_numba_vnp(lGam, d):
return np.exp(-lGam*d)
@numba.vectorize('c16(c16, f8)', target='parallel')
def test_numba_v(lGam, d):
return np.exp(-lGam*d)
```
## Numba @njit
```
@numba.njit
def test_numba_nnp(lGam, d):
out = np.empty_like(lGam)
for nf in numba.prange(lGam.shape[0]):
for no in numba.prange(lGam.shape[1]):
for ni in numba.prange(lGam.shape[2]):
out[nf, no, ni] = np.exp(-lGam[nf, no, ni] * d)
return out
@numba.njit(nogil=True, parallel=True)
def test_numba_n(lGam, d):
out = np.empty_like(lGam)
for nf in numba.prange(lGam.shape[0]):
for no in numba.prange(lGam.shape[1]):
for ni in numba.prange(lGam.shape[2]):
out[nf, no, ni] = np.exp(-lGam[nf, no, ni] * d)
return out
```
## Run comparison for a small and a big matrix
```
lGam = Gam[:, :, 1, :]
d = 100
# Output shape
out_shape = (freq.size, off.size, filters.key_201_2009().base.size)
print(' Shape Test Matrix ::', out_shape, '; total # elements:: '+str(freq.size*off.size*filters.key_201_2009().base.size))
print('------------------------------------------------------------------------------------------')
print(' NumPy :: ', end='')
# Get NumPy result for comparison
numpy_result = test_numpy(lGam, d)
# Get runtime
%timeit test_numpy(lGam, d)
print(' Numba @vectorize :: ', end='')
# Ensure it agrees with NumPy
numba_vnp_result = test_numba_vnp(lGam, d)
if not np.allclose(numpy_result, numba_vnp_result, atol=0, rtol=1e-10):
print('\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_vnp(lGam, d)
print(' Numba @vectorize par :: ', end='')
# Ensure it agrees with NumPy
numba_v_result = test_numba_v(lGam, d)
if not np.allclose(numpy_result, numba_v_result, atol=0, rtol=1e-10):
print('\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_v(lGam, d)
print(' Numba @njit :: ', end='')
# Ensure it agrees with NumPy
numba_nnp_result = test_numba_nnp(lGam, d)
if not np.allclose(numpy_result, numba_nnp_result, atol=0, rtol=1e-10):
print('\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_nnp(lGam, d)
print(' Numba @njit par :: ', end='')
# Ensure it agrees with NumPy
numba_n_result = test_numba_n(lGam, d)
if not np.allclose(numpy_result, numba_n_result, atol=0, rtol=1e-10):
print('\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_n(lGam, d)
from empymod import versions
versions('HTML', add_pckg=[cython, numba], ncol=5)
```
## Fourier Transforms
The frequency components of an image can be displayed after doing a Fourier Transform (FT). An FT looks at the components of an image (edges, which are high-frequency, and areas of smooth color, which are low-frequency) and plots the frequencies that occur as points in a spectrum.
In fact, an FT treats patterns of intensity in an image as sine waves with a particular frequency, and you can look at an interesting visualization of these sine wave components [on this page](https://plus.maths.org/content/fourier-transforms-images).
In this notebook, we'll first look at a few simple image patterns to build up an idea of what image frequency components look like, and then transform a more complex image to see what it looks like in the frequency domain.
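Before working with images, it can help to see the idea in one dimension; a minimal sketch with NumPy (the signal and sample counts are made up):

```python
import numpy as np

# 1 second of a pure 5 Hz sine wave, sampled 1000 times
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)
spectrum = np.abs(np.fft.rfft(signal))
peak_bin = int(np.argmax(spectrum))  # with 1 s of data, bin k corresponds to k Hz
```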
```
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the images
image_stripes = cv2.imread('images/stripes.jpg')
# Change color to RGB (from BGR)
image_stripes = cv2.cvtColor(image_stripes, cv2.COLOR_BGR2RGB)
# Read in the images
image_solid = cv2.imread('images/pink_solid.jpg')
# Change color to RGB (from BGR)
image_solid = cv2.cvtColor(image_solid, cv2.COLOR_BGR2RGB)
# Display the images
f, (ax1,ax2) = plt.subplots(1, 2, figsize=(10,5))
ax1.imshow(image_stripes)
ax2.imshow(image_solid)
# convert to grayscale to focus on the intensity patterns in the image
gray_stripes = cv2.cvtColor(image_stripes, cv2.COLOR_RGB2GRAY)
gray_solid = cv2.cvtColor(image_solid, cv2.COLOR_RGB2GRAY)
# normalize the image color values from a range of [0,255] to [0,1] for further processing
norm_stripes = gray_stripes/255.0
norm_solid = gray_solid/255.0
# perform a fast fourier transform and create a scaled, frequency transform image
def ft_image(norm_image):
'''This function takes in a normalized, grayscale image
and returns a frequency spectrum transform of that image. '''
f = np.fft.fft2(norm_image)
fshift = np.fft.fftshift(f)
frequency_tx = 20*np.log(np.abs(fshift))
return frequency_tx
# Call the function on the normalized images
# and display the transforms
f_stripes = ft_image(norm_stripes)
f_solid = ft_image(norm_solid)
# display the images
# original images to the left of their frequency transform
f, (ax1,ax2,ax3,ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('original image')
ax1.imshow(image_stripes)
ax2.set_title('frequency transform image')
ax2.imshow(f_stripes, cmap='gray')
ax3.set_title('original image')
ax3.imshow(image_solid)
ax4.set_title('frequency transform image')
ax4.imshow(f_solid, cmap='gray')
```
Low frequencies are at the center of the frequency transform image.
The transform images for these examples show that the solid image has mostly low-frequency components (as seen by the bright spot at its center).
The stripes transform image contains low frequencies for the areas of white and black color, and high frequencies for the edges in between those colors. It also tells us that there is one dominating direction for these frequencies: vertical stripes are represented by a horizontal line passing through the center of the frequency transform image.
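This direction rule can be checked numerically; a small sketch with a synthetic stripe pattern (the image size and stripe frequency are arbitrary):

```python
import numpy as np

# 64x64 image of vertical stripes: intensity varies along x only
stripes = np.tile(np.sin(2 * np.pi * np.arange(64) / 8), (64, 1))
F = np.abs(np.fft.fftshift(np.fft.fft2(stripes)))
F[32, 32] = 0  # zero out the DC (mean) term at the center
row, col = np.unravel_index(np.argmax(F), F.shape)
# The dominant component sits on the horizontal axis through the center:
# row == 32 (the center row), with col offset 64/8 = 8 bins from the center.
```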
Next, let's see what this looks like applied to a real-world image.
```
# Read in an image
image = cv2.imread('images/birds.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# normalize the image
norm_image = gray/255.0
f_image = ft_image(norm_image)
# Display the images
f, (ax1,ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(image)
ax2.imshow(f_image, cmap='gray')
```
Notice that this image has components of all frequencies. You can see a bright spot in the center of the transform image, which tells us that a large portion of the image is low-frequency; this makes sense since the body of the birds and background are solid colors. The transform image also tells us that there are **two** dominating directions for these frequencies; vertical edges (from the edges of birds) are represented by a horizontal line passing through the center of the frequency transform image, and horizontal edges (from the branch and tops of the birds' heads) are represented by a vertical line passing through the center.
```
%load_ext autoreload
%autoreload 2
import os
import sys
import numpy as np
import pandas as pd
import csv
import cv2
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torchvision
from skimage import io, transform
from skimage import color
import scipy.misc
import scipy.ndimage as ndi
from glob import glob
from pathlib import Path
from pytvision import visualization as view
from pytvision.transforms import transforms as mtrans
from tqdm import tqdm
sys.path.append('../')
from torchlib.datasets import dsxbdata
from torchlib.datasets.dsxbdata import DSXBExDataset, DSXBDataset
from torchlib.datasets import imageutl as imutl
from torchlib import utils
from torchlib.models import unetpad
from torchlib.metrics import get_metrics
import matplotlib
import matplotlib.pyplot as plt
#matplotlib.style.use('fivethirtyeight')
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
plt.ion() # interactive mode
from pytvision.transforms import transforms as mtrans
from torchlib import metrics
from torchlib.segneuralnet import SegmentationNeuralNet
from torchlib import post_processing_func
map_post = post_processing_func.MAP_post()
th_post = post_processing_func.TH_post()
wts_post = post_processing_func.WTS_post()
normalize = mtrans.ToMeanNormalization(
mean = (0.485, 0.456, 0.406),
std = (0.229, 0.224, 0.225),
)
class NormalizeInverse(torchvision.transforms.Normalize):
"""
Undoes the normalization and returns the reconstructed images in the input domain.
"""
def __init__(self, mean = (0.485, 0.456, 0.406), std = (0.229, 0.224, 0.225)):
mean = torch.as_tensor(mean)
std = torch.as_tensor(std)
std_inv = 1 / (std + 1e-7)
mean_inv = -mean * std_inv
super().__init__(mean=mean_inv, std=std_inv)
def __call__(self, tensor):
return super().__call__(tensor.clone())
n = NormalizeInverse()
def get_simple_transforms(pad=0):
return transforms.Compose([
#mtrans.CenterCrop( (1008, 1008) ),
mtrans.ToPad( pad, pad, padding_mode=cv2.BORDER_CONSTANT ),
mtrans.ToTensor(),
normalize,
])
def get_flip_transforms(pad=0):
return transforms.Compose([
#mtrans.CenterCrop( (1008, 1008) ),
mtrans.ToRandomTransform( mtrans.VFlip(), prob=0.5 ),
mtrans.ToRandomTransform( mtrans.HFlip(), prob=0.5 ),
mtrans.ToPad( pad, pad, padding_mode=cv2.BORDER_CONSTANT ),
mtrans.ToTensor(),
normalize,
])
def tensor2image(tensor, norm_inverse=True):
if tensor.dim() == 4:
tensor = tensor[0]
if norm_inverse:
tensor = n(tensor)
img = tensor.cpu().numpy().transpose(1,2,0)
img = (img * 255).clip(0, 255).astype(np.uint8)
return img
def show(src, titles=[], suptitle="",
bwidth=4, bheight=4, save_file=False,
show_axis=True, show_cbar=False, last_max=0):
num_cols = len(src)
plt.figure(figsize=(bwidth * num_cols, bheight))
plt.suptitle(suptitle)
for idx in range(num_cols):
plt.subplot(1, num_cols, idx+1)
if not show_axis: plt.axis("off")
if idx < len(titles): plt.title(titles[idx])
if idx == num_cols-1 and last_max:
plt.imshow(src[idx]*1, vmax=last_max, vmin=0)
else:
plt.imshow(src[idx]*1)
if type(show_cbar) is bool:
if show_cbar: plt.colorbar()
elif idx < len(show_cbar) and show_cbar[idx]:
plt.colorbar()
plt.tight_layout()
if save_file:
plt.savefig(save_file)
def show2(src, titles=[], suptitle="",
bwidth=4, bheight=4, save_file=False,
show_axis=True, show_cbar=False, last_max=0):
num_cols = len(src)//2
plt.figure(figsize=(bwidth * num_cols, bheight*2))
plt.suptitle(suptitle)
for idx in range(num_cols*2):
plt.subplot(2, num_cols, idx+1)
if not show_axis: plt.axis("off")
if idx < len(titles): plt.title(titles[idx])
if idx == num_cols-1 and last_max:
plt.imshow(src[idx]*1, vmax=last_max, vmin=0)
else:
plt.imshow(src[idx]*1)
if type(show_cbar) is bool:
if show_cbar: plt.colorbar()
elif idx < len(show_cbar) and show_cbar[idx]:
plt.colorbar()
plt.tight_layout()
if save_file:
plt.savefig(save_file)
def get_diversity_map(preds, gt_predictionlb, th=0.5):
max_iou = 0
diversity_map = np.zeros_like(gt_predictionlb)
for idx_gt in range(1, gt_predictionlb.max()):
roi = (gt_predictionlb==idx_gt)
max_iou = 0
for predlb in preds:
for idx_pred in range(1, predlb.max()):
roi_pred = (predlb==idx_pred)
union = roi.astype(int) + roi_pred.astype(int)
val, freq = np.unique(union, return_counts=True)
if len(val)==3:
iou = freq[2]/(freq[1]+freq[2])
if iou > max_iou:
max_iou = iou
if max_iou > th: break
if max_iou >th:
diversity_map += roi
return diversity_map
pathdataset = os.path.expanduser( '/home/chcp/Datasets' )
namedataset = 'Seg33_1.0.4'
namedataset = 'Seg1009_0.3.2'
#namedataset = 'Bfhsc_1.0.0'
#'Segments_Seg1009_0.3.2_unetpad_jreg__adam_map_ransac2_1_7_1'
#namedataset = 'FluoC2DLMSC_0.0.1'
sub_folder = 'test'
folders_images = 'images'
folders_contours = 'touchs'
folders_weights = 'weights'
folders_segment = 'outputs'
num_classes = 2
num_channels = 3
pad = 0
pathname = pathdataset + '//' + namedataset
subset = 'test'
def ransac_step2(net, inputs, targets, tag=None, max_deep=3, verbose=False):
srcs = inputs[:, :3]
segs = inputs[:, 3:]
lv_segs = segs#.clone()
first = True
final_loss = 0.0
for lv in range(max_deep):
n_segs = segs.shape[1]
new_segs = []
actual_c = 7 ** (max_deep - lv)
if verbose: print(segs.shape, actual_c)
actual_seg_ids = np.random.choice(range(n_segs), size=actual_c)
step_segs = segs[:, actual_seg_ids]
for idx in range(0, actual_c, 7):
mini_inp = torch.cat((srcs, step_segs[:, idx:idx+7]), dim=1)
mini_out = net(mini_inp)
new_segs.append(mini_out.argmax(1, keepdim=True))
segs = torch.cat(new_segs, dim=1).float()
return final_loss, mini_out
model_list = [Path(url).name for url in glob(r'/home/chcp/Code/pytorch-unet/out/SEG1009/Segments_Seg1009_0.3.2_unetpad_jreg__adam_map_ransac2_1_7_1*')]
for model_url_base in tqdm(model_list):
pathmodel = r'/home/chcp/Code/pytorch-unet/out/SEG1009/'
ckpt = r'/models/model_best.pth.tar'
net = SegmentationNeuralNet(
patchproject=pathmodel,
nameproject=model_url_base,
no_cuda=True, parallel=False,
seed=2021, print_freq=False,
gpu=True
)
if net.load( pathmodel+model_url_base+ckpt ) is not True:
assert(False)
Path(f"extra/{model_url_base}").mkdir(exist_ok=True, parents=True)
for subset in ['test']:
test_data = dsxbdata.ISBIDataset(
pathname,
subset,
folders_labels=f'labels{num_classes}c',
count=None,
num_classes=num_classes,
num_channels=num_channels,
transform=get_simple_transforms(pad=0),
use_weight=False,
weight_name='',
load_segments=True,
shuffle_segments=True,
use_ori=1
)
test_loader = DataLoader(test_data, batch_size=1, shuffle=False,
num_workers=0, pin_memory=True, drop_last=False)
softmax = torch.nn.Softmax(dim=0)
wpq, wsq, wrq, total_cells = 0, 0, 0, 0
for idx, sample in enumerate(test_loader):
inputs, labels = sample['image'], sample['label']
_, outputs = ransac_step2(net, inputs, labels)
amax = outputs[0].argmax(0)
view_inputs = tensor2image(inputs[0, :3])
view_labels = labels[0].argmax(0)
prob = outputs[0] / outputs[0].sum(0)
results, n_cells, preds = get_metrics(labels, outputs, post_label='map')
predictionlb, prediction, region, output = preds
wpq += results['pq'] * n_cells
wsq += results['sq'] * n_cells
wrq += results['rq'] * n_cells
total_cells += n_cells
res_str = f"Nreal {n_cells} | Npred {results['n_cells']} | PQ {results['pq']:0.2f} " + \
f"| SQ {results['sq']:0.2f} | RQ {results['rq']:0.2f}"
show2([view_inputs, view_labels, amax, predictionlb, prob[0], prob[1]], show_axis=False, suptitle=res_str,
show_cbar=[False, False, False, False, True, True, True, True], save_file=f"extra/{model_url_base}/{namedataset}_{subset}_{idx}.png",
titles=['Original', 'Label', 'MAP', 'Cells', 'Prob 0', 'Prob 1'], bheight=4.5)
row = [namedataset, subset, model_url_base, wpq/total_cells, wsq/total_cells, wrq/total_cells, total_cells]
row = list(map(str, row))
header = ["dataset", 'subset', 'model', 'WPQ', 'WSQ', "WRQ", "Cells"]
save_file=f"extra/{model_url_base}"
summary_log = "extra/summary.csv"
write_header = not Path(summary_log).exists()
with open(summary_log, 'a') as f:
if write_header:
f.writelines(','.join(header)+'\n')
f.writelines(','.join(row)+'\n')
```
```
import argparse
import copy
import os
import os.path as osp
import pprint
import sys
import time
from pathlib import Path
import open3d.ml as _ml3d
import open3d.ml.tf as ml3d
import yaml
from open3d.ml.datasets import S3DIS, SemanticKITTI, SmartLab
from open3d.ml.tf.models import RandLANet
from open3d.ml.tf.pipelines import SemanticSegmentation
from open3d.ml.utils import Config, get_module
randlanet_smartlab_cfg = "/home/threedee/repos/Open3D-ML/ml3d/configs/randlanet_smartlab.yml"
randlanet_semantickitti_cfg = "/home/threedee/repos/Open3D-ML/ml3d/configs/randlanet_semantickitti.yml"
randlanet_s3dis_cfg = "/home/threedee/repos/Open3D-ML/ml3d/configs/randlanet_s3dis.yml"
cfg = _ml3d.utils.Config.load_from_file(randlanet_smartlab_cfg)
# construct a dataset by specifying dataset_path
dataset = ml3d.datasets.SmartLab(**cfg.dataset)
# get the 'all' split that combines training, validation and test set
split = dataset.get_split("training")
# print the attributes of the first datum
print(split.get_attr(0))
# print the shape of the first point cloud
print(split.get_data(0)["point"].shape)
# for idx in range(split.__len__()):
# print(split.get_data(idx)["point"].shape[0])
# show the first 100 frames using the visualizer
vis = ml3d.vis.Visualizer()
vis.visualize_dataset(dataset, "training") # , indices=range(100)
cfg = _ml3d.utils.Config.load_from_file(randlanet_s3dis_cfg)
dataset = S3DIS("/home/charith/datasets/S3DIS/", use_cache=True)
model = RandLANet(**cfg.model)
pipeline = SemanticSegmentation(model=model, dataset=dataset, max_epoch=100)
pipeline.cfg_tb = {
"readme": "readme",
"cmd_line": "cmd_line",
"dataset": pprint.pformat("S3DIS", indent=2),
"model": pprint.pformat("RandLANet", indent=2),
"pipeline": pprint.pformat("SemanticSegmentation", indent=2),
}
pipeline.run_train()
# Inference and test example
from open3d.ml.tf.models import RandLANet
from open3d.ml.tf.pipelines import SemanticSegmentation
Pipeline = get_module("pipeline", "SemanticSegmentation", "tf")
Model = get_module("model", "RandLANet", "tf")
Dataset = get_module("dataset", "SemanticKITTI")
# `args` is assumed to be an argparse.Namespace defined elsewhere in the notebook
RandLANet = Model(ckpt_path=args.path_ckpt_randlanet)
# Initialize the dataset by specifying its root path
SemanticKITTI = Dataset(args.path_semantickitti, use_cache=False)
pipeline = Pipeline(model=RandLANet, dataset=SemanticKITTI)
# inference
# get data
train_split = SemanticKITTI.get_split("train")
data = train_split.get_data(0)
# restore weights from the checkpoint before running inference
pipeline.load_ckpt(ckpt_path=args.path_ckpt_randlanet)
# run inference on the selected frame
results = pipeline.run_inference(data)
print(results)
# test
pipeline.run_test()
```
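Open3D-ML's `get_module` helper resolves pipeline, model, and dataset classes from string names. The same dynamic-lookup pattern can be sketched with the standard library alone (this registry-free version is purely illustrative, not Open3D's implementation):

```python
import importlib

def get_class(module_name, class_name):
    """Resolve a class from a module by its string name."""
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Stand-ins for get_module("model", "RandLANet", "tf")-style lookups
OrderedDict = get_class("collections", "OrderedDict")
d = OrderedDict(a=1, b=2)
print(list(d.items()))  # [('a', 1), ('b', 2)]
```

This is why config files can name a model as a plain string: the framework only needs the module path and class name to construct it.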
| github_jupyter |
```
import random
import numpy as np
import matplotlib.pyplot as plt
valid_RPS_actions = [0, 1, 2] # Signifying Rock, Paper, or Scissors
def playRPS(action_p1, action_p2):
    if (action_p1 not in valid_RPS_actions) or (action_p2 not in valid_RPS_actions):
        raise Exception("Invalid Move Detected.")
    if action_p1 == action_p2:  # If there is a draw, issue the agent a small penalty.
        return (-2, 0)
    if (action_p1 == 0 and action_p2 == 1) or (action_p1 == 1 and action_p2 == 2) or (action_p1 == 2 and action_p2 == 0):  # The ways the agent could lose against player 2.
        return (-10, 10)
    else:
        return (10, -10)
# Agent (a.k.a. player 1 settings)
agnt_hist = []
pseudo_probs = [[333334, 333333, 333333]]
# The agent essentially has 'pseudo-probabilities' attached to the actions it can take.
# These pseudo-probabilities are adjusted through the incentives, but they are separate from the rewards.
# The total of the pseudo-probabilities stays constant (1,000,000), since every adjustment moves weight from one action to another.
agnt_preferences = np.concatenate((np.array(valid_RPS_actions, ndmin=2), pseudo_probs), axis = 0)
# This value affects the pseudo-probability distribution upon wins, losses, and draws.
incentive_factor = 10
# Easier to use argmax by stuffing the eps_greedy method here.
def eps_greedy(prob):
    if prob >= random.random():
        return random.randint(0, len(valid_RPS_actions) - 1)
    else:
        return np.argmax(agnt_preferences[1, :])
# Player 2 Settings
# p2_history = [] <-- May be important provided
p2_preferences = [0] # A list that describes the possible options available to player 2. p2_pref = [0] means player 2 would only choose rock. [0, 2] would imply rock and scissors.
adjusted_p2_preferences = [2] # This was used to swap p2's behavior part of the way through training.
# Exploration rate of algorithm starting out when using epsilon greedy
initial_greed_rate = 0.999
running_gr = initial_greed_rate
# Minimum value for exploration rate
min_greed_rate = 0.0001
# The number of sets of rounds of games that will be played.
episodes = 10000
# How many times to play the game per episode.
rounds = 100
for epis in range(episodes):
    reward_total = 0
    running_gr = 0.999 ** (epis + 1)
    # After so many episodes, p2 changes its behavior.
    if epis >= 2500:
        p2_preferences = adjusted_p2_preferences
    # The agent uses an epsilon greedy approach to pick its next move.
    for rnds in range(rounds):
        if running_gr >= min_greed_rate:
            agnt_pick = eps_greedy(running_gr)
        else:
            agnt_pick = eps_greedy(min_greed_rate)
        # Player 2 picks randomly - A good future implementation would be to have it choose systematically.
        p2_pick = random.choice(p2_preferences)
        results = playRPS(agnt_pick, p2_pick)
        # This section updates the agent's probabilities for selecting a winning action.
        # Disincentivize losing actions.
        if results == (-10, 10):
            if agnt_pick == 0:
                selection = random.choice([1, 2])
                agnt_preferences[1, 0] = agnt_preferences[1, 0] - incentive_factor
            elif agnt_pick == 1:
                selection = random.choice([0, 2])
                agnt_preferences[1, 1] = agnt_preferences[1, 1] - incentive_factor
            elif agnt_pick == 2:
                selection = random.choice([0, 1])
                agnt_preferences[1, 2] = agnt_preferences[1, 2] - incentive_factor
            else:
                raise Exception("Invalid pick happened somewhere...")
            agnt_preferences[1, selection] = agnt_preferences[1, selection] + incentive_factor
        # Incentivize winning actions.
        if results == (10, -10):
            if agnt_pick == 0:
                selection = random.choice([1, 2])
                agnt_preferences[1, 0] = agnt_preferences[1, 0] + incentive_factor
            elif agnt_pick == 1:
                selection = random.choice([0, 2])
                agnt_preferences[1, 1] = agnt_preferences[1, 1] + incentive_factor
            elif agnt_pick == 2:
                selection = random.choice([0, 1])
                agnt_preferences[1, 2] = agnt_preferences[1, 2] + incentive_factor
            else:
                raise Exception("Invalid pick happened somewhere...")
            agnt_preferences[1, selection] = agnt_preferences[1, selection] - incentive_factor
        # Disincentivize actions that lead to a draw (on a draw, agnt_pick == p2_pick).
        if results == (-2, 0):
            if p2_pick == 0:
                selection = random.choice([1, 2])
                agnt_preferences[1, 0] = agnt_preferences[1, 0] - incentive_factor
            elif p2_pick == 1:
                selection = random.choice([0, 2])
                agnt_preferences[1, 1] = agnt_preferences[1, 1] - incentive_factor
            elif p2_pick == 2:
                selection = random.choice([0, 1])
                agnt_preferences[1, 2] = agnt_preferences[1, 2] - incentive_factor
            else:
                raise Exception("Invalid pick happened somewhere...")
            agnt_preferences[1, selection] = agnt_preferences[1, selection] + incentive_factor
        reward_total += results[0]
    agnt_hist.append(reward_total)
# Analytics Section
plt.plot(agnt_hist)
plt.xlabel('Episode Number', fontsize = 24)
plt.ylabel('Reward Value', fontsize = 24)
plt.title('Trials with Rock-Paper-Scissors\nReward Value vs. Episode\n[ Opponent Picks Rock for Some Time then Scissors ]', fontsize = 24)
fig = plt.gcf()
fig.set_size_inches(18.5, 10.5)
print('Distribution of Agent Preferences\n[Rock Paper Scissors]: ',agnt_preferences[1, :])
```
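The epsilon-greedy rule above (explore with probability ε, otherwise exploit the current argmax preference) can be exercised in isolation; the preference array here is a made-up example, not the trained agent's state:

```python
import random
import numpy as np

def eps_greedy(prob, preferences, n_actions=3):
    """With probability `prob`, explore a random action; otherwise exploit the argmax."""
    if prob >= random.random():
        return random.randint(0, n_actions - 1)
    return int(np.argmax(preferences))

prefs = np.array([100, 999800, 100])  # pseudo-probabilities heavily favoring action 1 (Paper)

random.seed(0)
picks = [eps_greedy(0.0, prefs) for _ in range(5)]       # pure exploitation
explore = [eps_greedy(1.0, prefs) for _ in range(1000)]  # pure exploration

print(picks)                 # [1, 1, 1, 1, 1]
print(sorted(set(explore)))  # [0, 1, 2] -- every action gets explored
```

Decaying ε from near 1 toward `min_greed_rate`, as the training loop does, moves the agent smoothly from the second regime to the first.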
# ETL Processes
Use this notebook to develop the ETL process for each of your tables before completing the `etl.py` file to load the full datasets.
```
import os
import glob
import psycopg2
import pandas as pd
from sql_queries import *
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
cur = conn.cursor()
def get_files(filepath):
    all_files = []
    for root, dirs, files in os.walk(filepath):
        files = glob.glob(os.path.join(root, '*.json'))
        for f in files:
            all_files.append(os.path.abspath(f))
    return all_files
```
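`get_files` combines `os.walk` and `glob` to collect the absolute paths of every `.json` file under a directory tree. A self-contained check of that behavior against a throwaway temporary directory (the file names are arbitrary):

```python
import glob
import os
import tempfile

def get_files(filepath):
    all_files = []
    for root, dirs, files in os.walk(filepath):
        files = glob.glob(os.path.join(root, '*.json'))
        for f in files:
            all_files.append(os.path.abspath(f))
    return all_files

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "A", "B"))
    for name in ["A/x.json", "A/B/y.json", "A/B/z.txt"]:
        open(os.path.join(tmp, name), "w").close()
    found = get_files(tmp)

print(sorted(os.path.basename(p) for p in found))  # ['x.json', 'y.json'] -- only .json files
```

Note that the nested `z.txt` is skipped: the glob pattern, not `os.walk`, does the filtering.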
# Process `song_data`
In this first part, you'll perform ETL on the first dataset, `song_data`, to create the `songs` and `artists` dimensional tables.
Let's perform ETL on a single song file and load a single record into each table to start.
- Use the `get_files` function provided above to get a list of all song JSON files in `data/song_data`
- Select the first song in this list
- Read the song file and view the data
```
song_files = "data/song_data"
filepath = get_files(song_files)[20]
df = pd.read_json(filepath, lines=True)
df.head()
```
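`pd.read_json(..., lines=True)` expects one JSON object per line (newline-delimited JSON), which is how both the song and log files are stored. A minimal illustration with an in-memory buffer (the fields are a made-up subset of the song schema):

```python
import io
import pandas as pd

ndjson = io.StringIO('{"song_id": "S1", "title": "A", "duration": 200.5}\n'
                     '{"song_id": "S2", "title": "B", "duration": 180.0}\n')
df_demo = pd.read_json(ndjson, lines=True)
print(df_demo.shape)  # (2, 3) -- one row per line, one column per key
```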
## #1: `songs` Table
#### Extract Data for Songs Table
- Select columns for song ID, title, artist ID, year, and duration
- Use `df.values` to select just the values from the dataframe
- Index to select the first (only) record in the dataframe
- Convert the array to a list and set it to `song_data`
```
song_data = df[['song_id','title','artist_id','year','duration']].values[0]
song_data
```
#### Insert Record into Song Table
Implement the `song_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song into the `songs` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `songs` table in the sparkify database.
```
cur.execute(song_table_insert, song_data)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added a record to this table.
## #2: `artists` Table
#### Extract Data for Artists Table
- Select columns for artist ID, name, location, latitude, and longitude
- Use `df.values` to select just the values from the dataframe
- Index to select the first (only) record in the dataframe
- Convert the array to a list and set it to `artist_data`
```
artist_data = df[['artist_id','artist_name','artist_location','artist_latitude','artist_longitude']].values[0]
artist_data
```
#### Insert Record into Artist Table
Implement the `artist_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song's artist into the `artists` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `artists` table in the sparkify database.
```
cur.execute(artist_table_insert, artist_data)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added a record to this table.
# Process `log_data`
In this part, you'll perform ETL on the second dataset, `log_data`, to create the `time` and `users` dimensional tables, as well as the `songplays` fact table.
Let's perform ETL on a single log file and load a single record into each table.
- Use the `get_files` function provided above to get a list of all log JSON files in `data/log_data`
- Select the first log file in this list
- Read the log file and view the data
```
log_files = "data/log_data"
filepath = get_files(log_files)[0]
df = pd.read_json(filepath, lines=True)
df.head()
```
## #3: `time` Table
#### Extract Data for Time Table
- Filter records by `NextSong` action
- Convert the `ts` timestamp column to datetime
- Hint: the current timestamp is in milliseconds
- Extract the timestamp, hour, day, week of year, month, year, and weekday from the `ts` column and set `time_data` to a list containing these values in order
  - Hint: use pandas' [`dt` attribute](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.html) to easily access datetime-like properties.
- Specify labels for these columns and set to `column_labels`
- Create a dataframe, `time_df`, containing the time data for this file by combining `column_labels` and `time_data` into a dictionary and converting this into a dataframe
```
df = df[df.page=='NextSong'].copy()  # copy to avoid a SettingWithCopyWarning on the next line
df['ts'] = pd.to_datetime(df['ts'], unit='ms')
df.head()
t = df["ts"]
t.head()
# Series.dt.weekofyear was removed in recent pandas; isocalendar().week is the replacement
time_data = (t.values, t.dt.hour.values, t.dt.day.values, t.dt.isocalendar().week.values, t.dt.month.values, t.dt.year.values, t.dt.weekday.values)
column_labels = ('start_time', 'hour', 'day', 'week', 'month', 'year', 'weekday')
data = {label:data for label, data in zip(column_labels, time_data)}
time_df = pd.DataFrame(data)
time_df.head()
```
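The millisecond-epoch conversion and the `dt` accessor can be checked on a tiny synthetic series (the timestamp below is arbitrary):

```python
import pandas as pd

ts = pd.Series([1541106106796])        # milliseconds since the Unix epoch
t_demo = pd.to_datetime(ts, unit='ms')
# 1541106106796 ms -> 2018-11-01 21:01:46 UTC (a Thursday, weekday == 3)
print(t_demo.dt.year[0], t_demo.dt.month[0], t_demo.dt.day[0], t_demo.dt.hour[0])
```

Without `unit='ms'`, `to_datetime` would interpret the integer as nanoseconds and land in 1970, which is a common source of silently wrong `time` tables.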
#### Insert Records into Time Table
Implement the `time_table_insert` query in `sql_queries.py` and run the cell below to insert records for the timestamps in this log file into the `time` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `time` table in the sparkify database.
```
for i, row in time_df.iterrows():
    cur.execute(time_table_insert, list(row))
conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
## #4: `users` Table
#### Extract Data for Users Table
- Select columns for user ID, first name, last name, gender and level and set to `user_df`
```
user_df = df[['userId','firstName','lastName','gender','level']]
user_df.head()
```
#### Insert Records into Users Table
Implement the `user_table_insert` query in `sql_queries.py` and run the cell below to insert records for the users in this log file into the `users` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `users` table in the sparkify database.
```
for i, row in user_df.iterrows():
    cur.execute(user_table_insert, row)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
## #5: `songplays` Table
#### Extract Data for Songplays Table
This one is a little more complicated since information from the songs table, artists table, and original log file are all needed for the `songplays` table. Since the log file does not specify an ID for either the song or the artist, you'll need to get the song ID and artist ID by querying the songs and artists tables to find matches based on song title, artist name, and song duration time.
- Implement the `song_select` query in `sql_queries.py` to find the song ID and artist ID based on the title, artist name, and duration of a song.
- Select the timestamp, user ID, level, song ID, artist ID, session ID, location, and user agent and set to `songplay_data`
#### Insert Records into Songplays Table
- Implement the `songplay_table_insert` query and run the cell below to insert records for the songplay actions in this log file into the `songplays` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `songplays` table in the sparkify database.
```
# song_select = """
# SELECT song_id, songs.artist_id FROM (
#     songs JOIN artists ON songs.artist_id = artists.artist_id)
# WHERE title = %s AND name = %s AND duration = %s
# """
for index, row in df.iterrows():
    # get songid and artistid from song and artist tables
    cur.execute(song_select, (row.song, row.artist, row.length))
    results = cur.fetchone()
    if results:
        songid, artistid = results
    else:
        songid, artistid = None, None
    # insert songplay record
    songplay_data = (row["ts"], row["userId"], row["level"], songid, artistid, row["sessionId"], row["location"], row["userAgent"])
    cur.execute(songplay_table_insert, songplay_data)
conn.commit()
```
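The lookup-then-insert pattern in the loop above (fetch a matching ID, fall back to `None` when the log entry has no match) can be exercised end-to-end against an in-memory SQLite database; the one-column schema here is a deliberately tiny stand-in for the sparkify tables, not the project's actual DDL:

```python
import sqlite3

conn_demo = sqlite3.connect(":memory:")
cur_demo = conn_demo.cursor()
cur_demo.execute("CREATE TABLE songs (song_id TEXT, title TEXT, duration REAL)")
cur_demo.execute("INSERT INTO songs VALUES ('S1', 'Known Song', 200.0)")

def lookup_song(title, duration):
    """Return the matching song_id, or None when there is no match."""
    cur_demo.execute("SELECT song_id FROM songs WHERE title = ? AND duration = ?",
                     (title, duration))
    row = cur_demo.fetchone()
    return row[0] if row else None

hit = lookup_song("Known Song", 200.0)
miss = lookup_song("Unknown Song", 99.0)
print(hit, miss)  # S1 None
conn_demo.close()
```

The `None` fallback matters because most log entries reference songs that are not in the small song subset, and the `songplays` foreign-key columns must accept NULL for them.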
Run `test.ipynb` to see if you've successfully added records to this table.
# Close Connection to Sparkify Database
```
conn.close()
```
# Implement `etl.py`
Use what you've completed in this notebook to implement `etl.py`.
```
import numpy as np
import pandas as pd
import scipy
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import ipdb
```
# generate data
## 4 types of GalSim images
```
#### 1000 training images
with open("data/galsim_simulated_2500gals_lambda0.4_theta3.14159_2021-05-20-17-01.pkl", 'rb') as handle:
    group1 = pickle.load(handle)
with open("data/galsim_simulated_2500gals_lambda0.4_theta2.3562_2021-05-20-17-42.pkl", 'rb') as handle:
    group2 = pickle.load(handle)
with open("data/galsim_simulated_2500gals_lambda0.4_theta1.5708_2021-05-20-17-08.pkl", 'rb') as handle:
    group3 = pickle.load(handle)
with open("data/galsim_simulated_2500gals_lambda0.4_theta0.7854_2021-05-20-17-44.pkl", 'rb') as handle:
    group4 = pickle.load(handle)
sns.heatmap(group1['galaxies_generated'][0])
plt.show()
sns.heatmap(group2['galaxies_generated'][0])
plt.show()
sns.heatmap(group3['galaxies_generated'][0])
plt.show()
sns.heatmap(group4['galaxies_generated'][0])
plt.show()
#### 1000 test images
with open("data/galsim_simulated_250gals_lambda0.4_theta3.14159_2021-05-20-18-14.pkl", 'rb') as handle:
    test1 = pickle.load(handle)
with open("data/galsim_simulated_250gals_lambda0.4_theta2.3562_2021-05-20-18-14.pkl", 'rb') as handle:
    test2 = pickle.load(handle)
with open("data/galsim_simulated_250gals_lambda0.4_theta1.5708_2021-05-20-18-14.pkl", 'rb') as handle:
    test3 = pickle.load(handle)
with open("data/galsim_simulated_250gals_lambda0.4_theta0.7854_2021-05-20-18-14.pkl", 'rb') as handle:
    test4 = pickle.load(handle)
gal_img1 = group1['galaxies_generated']
gal_img2 = group2['galaxies_generated']
gal_img3 = group3['galaxies_generated']
gal_img4 = group4['galaxies_generated']
all_gal_imgs = np.vstack([gal_img1, gal_img2, gal_img3, gal_img4])
all_gal_imgs.shape
test_img1 = test1['galaxies_generated']
test_img2 = test2['galaxies_generated']
test_img3 = test3['galaxies_generated']
test_img4 = test4['galaxies_generated']
all_test_imgs = np.vstack([test_img1, test_img2, test_img3, test_img4])
all_test_imgs.shape
all_train_test_imgs = np.vstack([all_gal_imgs, all_test_imgs])
all_train_test_imgs.shape
#with open('galsim_conformal_imgs_20210520.pkl', 'wb') as handle:
# pickle.dump(all_train_test_imgs, handle, protocol=pickle.HIGHEST_PROTOCOL)
```
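`np.vstack` concatenates the four image groups along the first axis, so four arrays of shape `(250, H, W)` become one of shape `(1000, H, W)`. A quick shape check with dummy arrays (the 32×32 image size is arbitrary, not the GalSim stamp size):

```python
import numpy as np

groups = [np.zeros((250, 32, 32)) for _ in range(4)]
stacked = np.vstack(groups)
print(stacked.shape)  # (1000, 32, 32)
```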
## 4 distributions with same mean and variance (gaussian, uniform, exponential, bimodal)
```
# N(1,1)
z1 = np.random.normal(1, 1, size=2500)
# Unif(1-sqrt(3),1+sqrt(3))
z2 = np.random.uniform(1-np.sqrt(3), 1+np.sqrt(3), size=2500)
# Expo(1)
z3 = np.random.exponential(1, size=2500)
# 0.5N(0.25,0.4375) + 0.5N(1.75,0.4375), where 0.4375 is the component *variance*
z4_ind = np.random.binomial(n=1, p=0.5, size=2500)
# np.random.normal takes the standard deviation, so pass sqrt(0.4375); this makes the mixture variance exactly 1
z4 = z4_ind*np.random.normal(0.25, np.sqrt(0.4375), size=2500) + (1-z4_ind)*np.random.normal(1.75, np.sqrt(0.4375), size=2500)
fig, ax = plt.subplots(figsize=(7,6))
# sns.distplot is deprecated in seaborn >= 0.11; sns.histplot(..., kde=True) is the modern equivalent
sns.distplot(z1, color='green', label='N(1,1)', ax=ax)
sns.distplot(z2, label='Uniform(-0.732,2.732)', ax=ax)
sns.distplot(z3, label='Expo(1)', ax=ax)
sns.distplot(z4, color='purple', label='0.5N(0.25,0.4375) + 0.5N(1.75,0.4375)', bins=50, ax=ax)
plt.legend(fontsize=13)
plt.xlabel('Y', fontsize=14)
plt.ylabel('Density', fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.savefig('z_dists_v1.pdf')
all_zs = np.hstack([z1, z2, z3, z4])
test_z1 = np.random.normal(1, 1, size=250)
test_z2 = np.random.uniform(1-np.sqrt(3), 1+np.sqrt(3), size=250)
test_z3 = np.random.exponential(1, size=250)
test_z4_ind = np.random.binomial(n=1, p=0.5, size=250)
# np.random.normal takes the standard deviation; sqrt(0.4375) gives component variance 0.4375 and mixture variance 1
test_z4 = test_z4_ind*np.random.normal(0.25, np.sqrt(0.4375), size=250) + (1-test_z4_ind)*np.random.normal(1.75, np.sqrt(0.4375), size=250)
all_test_zs = np.hstack([test_z1, test_z2, test_z3, test_z4])
all_train_test_zs = np.hstack([all_zs, all_test_zs])
#with open('z_conformal_20210520.pkl', 'wb') as handle:
# pickle.dump(all_train_test_zs, handle, protocol=pickle.HIGHEST_PROTOCOL)
```
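All four choices are tuned to mean 1 and variance 1. For the mixture, the law of total variance gives Var = E[Var | component] + Var(E[component]) = 0.4375 + 0.25·(1.75 − 0.25)² = 1. A quick numeric check of the closed-form moments (no sampling involved):

```python
import numpy as np

# Uniform(1-sqrt(3), 1+sqrt(3)): mean = (a+b)/2, variance = (b-a)^2 / 12
a, b = 1 - np.sqrt(3), 1 + np.sqrt(3)
unif_mean, unif_var = (a + b) / 2, (b - a) ** 2 / 12

# Equal-weight two-component normal mixture with variance 0.4375 per component
m1, m2, v = 0.25, 1.75, 0.4375
mix_mean = 0.5 * m1 + 0.5 * m2
mix_var = v + 0.5 * (m1 - mix_mean) ** 2 + 0.5 * (m2 - mix_mean) ** 2

print(unif_mean, unif_var)  # 1.0 1.0 (up to float rounding)
print(mix_mean, mix_var)    # 1.0 1.0
```

Matching the first two moments across all four distributions is what isolates the *shape* of the conditional density as the thing a model must learn.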
# fit neural density model
# run CDE diagnostics
# conformal approach
```
import pandas as pd
from IPython.core.display import display, HTML
display(HTML("<style>.container {width:90% !important;}</style>"))
# Don't wrap repr(DataFrame) across additional lines
pd.set_option("display.expand_frame_repr", False)
# Set max rows displayed in output to 25
pd.set_option("display.max_rows", 25)
%matplotlib inline
# ASK WIKIPEDIA FOR LIST OF COMPANIES
# pip install sparqlwrapper
# https://rdflib.github.io/sparqlwrapper/
import sys
from SPARQLWrapper import SPARQLWrapper, JSON
endpoint_url = "https://query.wikidata.org/sparql"
query = """#List of `instances of` "business enterprise"
SELECT ?com ?comLabel ?inception ?industry ?industryLabel ?coordinate ?country ?countryLabel WHERE {
?com (wdt:P31/(wdt:P279*)) wd:Q4830453;
wdt:P625 ?coordinate.
SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
OPTIONAL { ?com wdt:P571 ?inception. }
OPTIONAL { ?com wdt:P452 ?industry. }
OPTIONAL { ?com wdt:P17 ?country. }
}"""
def get_results(endpoint_url, query):
user_agent = "WDQS-example Python/%s.%s" % (sys.version_info[0], sys.version_info[1])
# TODO adjust user agent; see https://w.wiki/CX6
sparql = SPARQLWrapper(endpoint_url, agent=user_agent)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
return sparql.query().convert()
results = get_results(endpoint_url, query)
for result in results["results"]["bindings"]:
print(result)
# PUT THE DATA IN THE RIGHT FORMAT into pandas
# (pandas is already imported above; pd.json_normalize supersedes the deprecated pandas.io.json.json_normalize)
# Get the dataset, and transform string into floats for plotting
dataFrame = pd.json_normalize(results["results"]["bindings"]) #in a serialized json-based format
df = pd.DataFrame(dataFrame) # into pandas
p = r'(?P<latitude>-?\d+\.\d+).*?(?P<longitude>-?\d+\.\d+)' #get lat/lon from string coordinates
df[['longitude', 'latitude']] = df['coordinate.value'].str.extract(p, expand=True)
df['latitude'] = pd.to_numeric(df['latitude'], downcast='float')
df['longitude'] = pd.to_numeric(df['longitude'], downcast='float')
data = pd.DataFrame(df, columns = ['latitude','longitude','comLabel.value','coordinate.value', 'inception.value', 'industryLabel.value', 'com.value', 'industry.value', 'country.value','countryLabel.value'])
data=data.dropna(subset=['latitude', 'longitude'])
data.rename(columns={'comLabel.value':'company'}, inplace=True)
data.rename(columns={'coordinate.value':'coordinate'}, inplace=True)
data.rename(columns={'inception.value':'inception'}, inplace=True)
data.rename(columns={'industryLabel.value':'industry'}, inplace=True)
data.rename(columns={'com.value':'id'}, inplace=True)
data.rename(columns={'industry.value':'id_industry'}, inplace=True)
data.rename(columns={'country.value':'id_country'}, inplace=True)
data.rename(columns={'countryLabel.value':'country'}, inplace=True)
data = pd.DataFrame (data) #cluster maps works ONLY with dataframe
print(data.shape)
print(data.sample(5))
print(data.info())
#DATA index cleaning
from sqlalchemy import create_engine
from pandas.io import sql
import re
IDs = []
for name in data['id']:
    ID_n = name.rsplit('/', 1)[1]
    ID = re.findall(r'\d+', ID_n)
    # print(ID[0], ID_n)
    IDs.append(ID[0])
data['ID'] = IDs
print(data['ID'].describe())
data['ID'] = data['ID'].astype(int)
#print (data['ID'].describe())
data.rename(columns={'id':'URL'}, inplace=True)
data['company_foundation'] = data['inception'].str.extract(r'(\d{4})')
data['company_foundation'] = pd.to_numeric(data['company_foundation'])
data = data.set_index(['ID'])
print(data.columns)
#GET company-industry relationship data
industries = data.dropna(subset=['id_industry'])
#print(industries)
industries.groupby('id_industry')[['company', 'country']].apply(lambda x: x.values.tolist())
print(industries.info())
industries = pd.DataFrame (industries)
print(industries.sample(3))
IDs = []
for name in industries['id_industry']:
    ID_n = name.rsplit('/', 1)[1]
    ID = re.findall(r'\d+', ID_n)
    # print(ID, ID_n)
    IDs.append(ID[0])
industries['ID_industry'] = IDs
industries['ID_industry'] = industries['ID_industry'].astype(int)
industries.set_index([industries.index, 'ID_industry'], inplace=True)
industries['id_wikipedia']=industries['id_industry']
industries.drop('id_industry', axis=1, inplace=True)
industries = pd.DataFrame(industries)
print(industries.info())
print(industries.sample(3))
import plotly.express as px
import plotly.io as pio
px.defaults.template = "ggplot2"
px.defaults.color_continuous_scale = px.colors.sequential.Blackbody
#px.defaults.width = 600
#px.defaults.height = 400
#data = data.dropna(subset=['country'])
fig = px.scatter(data.dropna(subset=['country']), x="latitude", y="longitude", color="country")# width=400)
fig.show()
#break born into quarters and use it for the x axis; y has number of companies;
#fig = px.density_heatmap(countries_industries, x="country", y="companies", template="seaborn")
fig = px.density_heatmap(data, x="latitude", y="longitude")#, template="seaborn")
fig.show()
#COMPANIES IN COUNTRIES
fig = px.histogram(data.dropna(subset=['country', 'industry']), x="country",
title='COMPANIES IN COUNTRIES',
# labels={'industry':'industries'}, # can specify one label per df column
opacity=0.8,
log_y=False, # represent bars with log scale
# color_discrete_sequence=['indianred'], # color of histogram bars
color='industry',
# marginal="rug", # can be `box`, `violin`
# hover_data="companies"
barmode='overlay'
)
fig.show()
#INDUSTRIES IN COUNTRIES
fig = px.histogram(data.dropna(subset=['industry', 'country']), x="industry",
title='INDUSTRIES IN COUNTRIES',
# labels={'industry':'industries'}, # can specify one label per df column
opacity=0.8,
log_y=False, # represent bars with log scale
# color_discrete_sequence=['indianred'], # color of histogram bars
color='country',
# marginal="rug", # can be `box`, `violin`
# hover_data="companies"
barmode='overlay'
)
fig.show()
#THIS IS THE 2D MAP I COULD FIND, :)
import plotly.graph_objects as go
data['text'] = 'COMPANY: '+ data['company'] + '<br>COUNTRY: ' + data['country'] + '<br>FOUNDATION: ' + data['company_foundation'].astype(str)
fig = go.Figure(data=go.Scattergeo(
locationmode = 'ISO-3',
lon = data['longitude'],
lat = data['latitude'],
text = data['text'],
mode = 'markers',
marker = dict(
size = 3,
opacity = 0.8,
reversescale = True,
autocolorscale = False,
symbol = 'square',
line = dict(width=1, color='rgb(102, 102, 102)'),  # rgb, not rgba: only three channel values are given
# colorgroup='country'
# colorscale = 'Blues',
# cmin = 0,
# color = df['cnt'],
# cmax = df['cnt'].max(),
# colorbar_title="Incoming flights<br>February 2011"
)))
fig.update_layout(
title = 'Companies of the World<br>',
geo = dict(
scope='world',
# projection_type='albers usa',
showland = True,
landcolor = "rgb(250, 250, 250)",
subunitcolor = "rgb(217, 217, 217)",
countrycolor = "rgb(217, 217, 217)",
countrywidth = 0.5,
subunitwidth = 0.5
),
)
fig.show()
print(data.info())
import tkinter as tk
from tkinter import filedialog
from pandas import DataFrame
root= tk.Tk()
canvas1 = tk.Canvas(root, width = 300, height = 300, bg = 'lightsteelblue2', relief = 'raised')
canvas1.pack()
def exportCSV():
    export_file_path = filedialog.asksaveasfilename(defaultextension='.csv')
    data.to_csv(export_file_path, index=True, header=True)
saveAsButton_CSV = tk.Button(text='Export CSV', command=exportCSV, bg='green', fg='white', font=('helvetica', 12, 'bold'))
canvas1.create_window(150, 150, window=saveAsButton_CSV)
root.mainloop()
```
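Wikidata returns coordinates as WKT literals such as `Point(2.3522 48.8566)`, with longitude listed first; the notebook's two-float regex pulls both numbers out of that string. A standalone sketch of the extraction with explicitly named groups (the sample strings are made up):

```python
import pandas as pd

# WKT order is "Point(longitude latitude)"
coords = pd.Series(["Point(2.3522 48.8566)", "Point(-74.006 40.7128)"])
p = r'(?P<longitude>-?\d+\.\d+)\s+(?P<latitude>-?\d+\.\d+)'
parsed = coords.str.extract(p).astype(float)
print(parsed)
```

Naming the groups to match the WKT order avoids having to swap columns after the extraction, which is an easy place to silently transpose latitude and longitude.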
Probabilistic Programming
=====
and Bayesian Methods for Hackers
========
##### Version 0.1
`Original content created by Cam Davidson-Pilon`
`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
___
Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!
Chapter 1
======
***
The Philosophy of Bayesian Inference
------
> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...
If you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.
### The Bayesian state of mind
Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians.
The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.
For this to be clearer, we consider an alternative interpretation of probability: *Frequentists*, who follow the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability.
Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you that candidate *A* will win?
Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:
- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result.
- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug.
- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs.
This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist.
To align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.
John Maynard Keynes, a great economist and thinker, said "When the facts change, I change my mind. What do you do, sir?" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:
1\. $P(A): \;\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\;\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.
2\. $P(A): \;\;$ This big, complex code likely has a bug in it. $P(A | X): \;\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.
3\. $P(A):\;\;$ The patient could have any number of diseases. $P(A | X):\;\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.
It's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others).
By introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*.
### Bayesian Inference in Practice
If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic such as the sample average), whereas the Bayesian function would return *probabilities*.
For example, in our debugging problem above, calling the frequentist function with the argument "My code passed all $X$ tests; is my code bug-free?" would return a *YES*. On the other hand, asking our Bayesian function "Often my code has bugs. My code passed all $X$ tests; is my code bug-free?" would return something very different: probabilities of *YES* and *NO*. The function might return:
> *YES*, with probability 0.8; *NO*, with probability 0.2
This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *"Often my code has bugs"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences.
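This contrast in return types can be sketched in a few lines of Python. The function names are mine, and the 0.8/0.2 split is the illustrative answer from above, hard-coded rather than computed:

```python
# Hypothetical sketch of the two inference "functions" and their return types.
def frequentist_function(code_passed_tests):
    # Returns a single answer, based on the observed data alone.
    return "YES" if code_passed_tests else "NO"

def bayesian_function(code_passed_tests, prior="Often my code has bugs"):
    # Returns *probabilities*, weighing the prior against the evidence.
    # (0.8/0.2 is the made-up illustrative answer from the text.)
    if code_passed_tests:
        return {"YES": 0.8, "NO": 0.2}
    return {"YES": 0.0, "NO": 1.0}

print(frequentist_function(True))   # YES
print(bayesian_function(True))      # {'YES': 0.8, 'NO': 0.2}
```

The point is purely the shape of the output: a scalar answer versus a distribution over answers.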
#### Incorporating evidence
As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like "I expect the sun to explode today", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.
Denote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset.
One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:
> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.
### Are frequentist methods incorrect then?
**No.**
Frequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.
#### A note on *Big Data*
Paradoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask "Do I really have big data?")
The much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets.
### Our Bayesian framework
We are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.
Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:
\begin{align}
P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}
The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with updated posterior probabilities $P(A | X )$.
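As a minimal sketch (the function name and numbers are mine, not the book's), Bayes' Theorem for a binary event $A$ can be written out directly:

```python
def bayes_update(prior, likelihood, likelihood_complement):
    """Posterior P(A|X) for a binary event A.

    prior                 : P(A)
    likelihood            : P(X | A)
    likelihood_complement : P(X | ~A)
    """
    # P(X) expands over the two cases A and ~A (law of total probability).
    evidence = likelihood * prior + likelihood_complement * (1 - prior)
    return likelihood * prior / evidence

# With prior P(A) = 0.2, P(X|A) = 1 and P(X|~A) = 0.5 -- the same numbers
# that reappear in the debugging example later in this chapter -- the
# posterior works out to 2p/(1 + p) = 1/3.
print(bayes_update(0.2, 1.0, 0.5))
```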
##### Example: Mandatory coin-flip example
Every statistics text must contain a coin-flipping example, so I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be.
We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?
Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).
```
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
1. Overwrite your own matplotlibrc file with the rc-file provided in the
book's styles/ dir. See http://matplotlib.org/users/customizing.html
2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
update the styles in only this notebook. Try running the following code:
import json
s = json.load(open("../styles/bmh_matplotlibrc.json"))
matplotlib.rcParams.update(s)
"""
# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)
plt.style.use('ggplot')
import warnings
warnings.filterwarnings('ignore')
import scipy.stats as stats
dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)
# For the already prepared, I'm using Binomial's conj. prior.
for k, N in enumerate(n_trials):
sx = plt.subplot(len(n_trials)/2, 2, k+1)
plt.xlabel("$p$, probability of heads") \
if k in [0, len(n_trials)-1] else None
plt.setp(sx.get_yticklabels(), visible=False)
heads = data[:N].sum()
y = dist.pdf(x, 1 + heads, 1 + N - heads)
plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
leg = plt.legend()
leg.get_frame().set_alpha(0.4)
plt.autoscale(tight=True)
plt.suptitle("Bayesian updating of posterior probabilities",
y=1.02,
fontsize=14)
plt.tight_layout()
```
The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).
Notice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.
The next example is a simple demonstration of the mathematics of Bayesian inference.
##### Example: Bug, or just sweet, unintended feature?
Let $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$.
We are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.
What is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, since code with no bugs will pass all tests.
$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\sim A\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:
\begin{align}
P(X ) & = P(X \text{ and } A) + P(X \text{ and } \sim A) \\\\[5pt]
& = P(X|A)P(A) + P(X | \sim A)P(\sim A)\\\\[5pt]
& = P(X|A)p + P(X | \sim A)(1-p)
\end{align}
We have already computed $P(X|A)$ above. On the other hand, $P(X | \sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\sim A) = 0.5$. Then
\begin{align}
P(A | X) & = \frac{1\cdot p}{ 1\cdot p +0.5 (1-p) } \\\\
& = \frac{ 2 p}{1+p}
\end{align}
This is the posterior probability. What does it look like as a function of our prior, $p \in [0,1]$?
```
figsize(12.5, 4)
p = np.linspace(0, 1, 50)
plt.plot(p, 2*p/(1+p), color="#348ABD", lw=3)
#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=["#A60628"])
plt.scatter(0.2, 2*(0.2)/1.2, s=140, c="#348ABD")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.xlabel("Prior, $P(A) = p$")
plt.ylabel("Posterior, $P(A|X)$, with $P(A) = p$")
plt.title("Are there bugs in my code?");
```
We can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33.
Recall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.
Similarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities.
```
figsize(12.5, 4)

colours = ["#348ABD", "#A60628"]

prior = [0.20, 0.80]
posterior = [1. / 3, 2. / 3]
plt.bar([0, .7], prior, alpha=0.70, width=0.25,
        color=colours[0], label="prior distribution",
        lw=3, edgecolor=colours[0])

plt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7,
        width=0.25, color=colours[1],
        label="posterior distribution",
        lw=3, edgecolor=colours[1])

plt.xticks([0.20, .95], ["Bugs Absent", "Bugs Present"])
plt.title("Prior and Posterior probability of bugs present")
plt.ylabel("Probability")
plt.legend(loc="upper left");
```
Notice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.
This was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.
_______
## Probability Distributions
**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter.
We can divide random variables into three classifications:
- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become clearer when we contrast them with...
- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, and time are all modeled as continuous variables because you can progressively make the values more and more precise.
- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories.
### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:
$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$
$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution.
Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
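As a quick sanity check (my own, not part of the text), the mass function above can be evaluated directly and compared with `scipy.stats.poisson`:

```python
from math import exp, factorial

import scipy.stats as stats

def poisson_pmf(k, lam):
    # Direct evaluation of P(Z = k) = lam^k * e^(-lam) / k!
    return lam ** k * exp(-lam) / factorial(k)

# Compare against scipy's implementation for a few values of k.
lam = 4.25
for k in [0, 1, 5, 10]:
    assert abs(poisson_pmf(k, lam) - stats.poisson.pmf(k, lam)) < 1e-12
print("direct evaluation matches scipy.stats.poisson.pmf")
```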
If a random variable $Z$ has a Poisson mass distribution, we denote this by writing
$$Z \sim \text{Poi}(\lambda) $$
One useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:
$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$
We will use this property often, so it's useful to remember. Below, we plot the probability mass function for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.
```
figsize(12.5, 4)

import scipy.stats as stats
a = np.arange(16)
poi = stats.poisson
lambda_ = [1.5, 4.25]
colours = ["#348ABD", "#A60628"]

plt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],
        label="$\lambda = %.1f$" % lambda_[0], alpha=0.60,
        edgecolor=colours[0], lw=3)

plt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],
        label="$\lambda = %.1f$" % lambda_[1], alpha=0.60,
        edgecolor=colours[1], lw=3)

plt.xticks(a + 0.4, a)
plt.legend()
plt.ylabel("probability of $k$")
plt.xlabel("$k$")
plt.title("Probability mass function of a Poisson random variable; differing \
$\lambda$ values");
```
### Continuous Case
Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of a continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:
$$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$
Like a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\lambda$ values.
When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write
$$Z \sim \text{Exp}(\lambda)$$
Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:
$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$
```
a = np.linspace(0, 4, 100)
expo = stats.expon
lambda_ = [0.5, 1]

for l, c in zip(lambda_, colours):
    plt.plot(a, expo.pdf(a, scale=1. / l), lw=3,
             color=c, label="$\lambda = %.1f$" % l)
    plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33)

plt.legend()
plt.ylabel("PDF at $z$")
plt.xlabel("$z$")
plt.ylim(0, 1.2)
plt.title("Probability density function of an Exponential random variable;\
 differing $\lambda$");
```
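We can also check the expectation identity $E[Z \mid \lambda] = 1/\lambda$ empirically (an illustration of mine, not from the text). Note that scipy parameterizes the exponential by `scale` $= 1/\lambda$:

```python
import scipy.stats as stats

# Draw many samples from Exp(lambda = 0.5) and check that the sample mean
# is close to 1/lambda = 2.
lam = 0.5
samples = stats.expon.rvs(scale=1. / lam, size=200000, random_state=0)
print(samples.mean())  # close to 2
```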
### But what is $\lambda \;$?
**This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best!
Bayesian inference is concerned with *beliefs* about what $\lambda$ might be. Rather than try to guess $\lambda$ exactly, we can only talk about what $\lambda$ is likely to be by assigning a probability distribution to $\lambda$.
This might seem odd at first. After all, $\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\lambda$.
##### Example: Inferring behaviour from text-message data
Let's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:
> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)
```
figsize(12.5, 3.5)
count_data = np.loadtxt("data/txtdata.csv")
n_count_data = len(count_data)
plt.bar(np.arange(n_count_data), count_data, color="#348ABD")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Did the user's texting habits change over time?")
plt.xlim(0, n_count_data);
```
Before we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period?
How can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$,
$$ C_i \sim \text{Poisson}(\lambda) $$
We are not sure what the value of the $\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\lambda$ increases at some point during the observations. (Recall that a higher value of $\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)
How can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the posterior distributions of the two $\lambda$s should look about equal.
We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$.
\begin{align}
&\lambda_1 \sim \text{Exp}( \alpha ) \\\
&\lambda_2 \sim \text{Exp}( \alpha )
\end{align}
$\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:
$$\frac{1}{N}\sum_{i=0}^{N-1} \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$
An alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.
What about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying
\begin{align}
& \tau \sim \text{DiscreteUniform(1,70) }\\\\
& \Rightarrow P( \tau = k ) = \frac{1}{70}
\end{align}
So after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.
We next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created.
Introducing our first hammer: PyMC3
-----
PyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.
We will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework.
B. Cronin [5] has a very motivating description of probabilistic programming:
> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.
Because of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is.
PyMC3 code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\tau, \lambda_1, \lambda_2$ ) as variables.
```
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    alpha = 1.0 / count_data.mean()  # Recall count_data is the
                                     # variable that holds our txt counts
    lambda_1 = pm.Exponential("lambda_1", alpha)
    lambda_2 = pm.Exponential("lambda_2", alpha)

    tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data - 1)
```
In the code above, we create the PyMC3 variables corresponding to $\lambda_1$ and $\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.
```
with model:
    idx = np.arange(n_count_data)  # Index
    lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)
    lambda_
```
This code creates a new variable `lambda_`, but really we can think of it as a random variable: the random variable $\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.
Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.
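To preview what `switch()` computes, here is a plain NumPy analogue with made-up fixed values (in the model, `tau`, `lambda_1` and `lambda_2` are random variables, not constants):

```python
import numpy as np

n_days = 10
tau, lambda_1, lambda_2 = 4, 18.0, 23.0   # hypothetical fixed values

idx = np.arange(n_days)
# np.where plays the role of pm.math.switch for concrete arrays:
lambda_ = np.where(tau > idx, lambda_1, lambda_2)
print(lambda_)  # first tau entries are lambda_1, the rest lambda_2
```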
```
with model:
    observation = pm.Poisson("obs", lambda_, observed=count_data)
```
The variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword.
The code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.
```
### Mysterious code to be explained in Chapter 3.
with model:
    step = pm.Metropolis()
    trace = pm.sample(10000, tune=5000, step=step)

lambda_1_samples = trace['lambda_1']
lambda_2_samples = trace['lambda_2']
tau_samples = trace['tau']

figsize(12.5, 10)
# histogram of the samples:

ax = plt.subplot(311)
ax.set_autoscaley_on(False)

plt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label="posterior of $\lambda_1$", color="#A60628", density=True)
plt.legend(loc="upper left")
plt.title(r"""Posterior distributions of the variables
    $\lambda_1,\;\lambda_2,\;\tau$""")
plt.xlim([15, 30])
plt.xlabel("$\lambda_1$ value")

ax = plt.subplot(312)
ax.set_autoscaley_on(False)
plt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label="posterior of $\lambda_2$", color="#7A68A6", density=True)
plt.legend(loc="upper left")
plt.xlim([15, 30])
plt.xlabel("$\lambda_2$ value")

plt.subplot(313)
w = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)
plt.hist(tau_samples, bins=n_count_data, alpha=1,
         label=r"posterior of $\tau$",
         color="#467821", weights=w, rwidth=2.)
plt.xticks(np.arange(n_count_data))

plt.legend(loc="upper left")
plt.ylim([0, .75])
plt.xlim([35, len(count_data) - 20])
plt.xlabel(r"$\tau$ (in days)")
plt.ylabel("probability")
plt.tight_layout()
```
### Interpretation
Recall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\lambda$s and $\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\lambda_1$ is around 18 and $\lambda_2$ is around 23. The posterior distributions of the two $\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.
What other observations can you make? If you look at the original data again, do these results seem reasonable?
Notice also that the posterior distributions for the $\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.
Our analysis also returned a distribution for $\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points.
### Why would I want samples from the posterior, anyways?
We will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.
We'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \; 0 \le t \le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\lambda$. Therefore, the question is equivalent to *what is the expected value of $\lambda$ at time $t$*?
In the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\lambda_i$ for that day $t$, using $\lambda_i = \lambda_{1,i}$ if $t \lt \tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\lambda_i = \lambda_{2,i}$.
```
figsize(12.5, 5)
# tau_samples, lambda_1_samples, lambda_2_samples contain
# N samples from the corresponding posterior distribution
N = tau_samples.shape[0]
expected_texts_per_day = np.zeros(n_count_data)
for day in range(0, n_count_data):
# ix is a bool index of all tau samples corresponding to
# the switchpoint occurring prior to value of 'day'
ix = day < tau_samples
# Each posterior sample corresponds to a value for tau.
# for each day, that value of tau indicates whether we're "before"
# (in the lambda1 "regime") or
# "after" (in the lambda2 "regime") the switchpoint.
# by taking the posterior sample of lambda1/2 accordingly, we can average
# over all samples to get an expected value for lambda on that day.
# As explained, the "message count" random variable is Poisson distributed,
# and therefore lambda (the poisson parameter) is the expected value of
# "message count".
expected_texts_per_day[day] = (lambda_1_samples[ix].sum()
+ lambda_2_samples[~ix].sum()) / N
plt.plot(range(n_count_data), expected_texts_per_day, lw=4, color="#E24A33",
label="expected number of text-messages received")
plt.xlim(0, n_count_data)
plt.xlabel("Day")
plt.ylabel("Expected # text-messages")
plt.title("Expected number of text-messages received")
plt.ylim(0, 60)
plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", alpha=0.65,
label="observed texts per day")
plt.legend(loc="upper left");
```
Our analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)
##### Exercises
1\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$?
```
#type your code here.
print(lambda_1_samples.mean())
print(lambda_2_samples.mean())
```
2\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.
```
#type your code here.
print( (lambda_1_samples / lambda_2_samples).mean() )
print(lambda_1_samples.mean() / lambda_2_samples.mean() )
```
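The hint's distinction is easy to check with a few made-up numbers (these are illustrative values, not actual posterior samples): the mean of a ratio and the ratio of means generally disagree.

```python
from statistics import mean

# Hypothetical posterior samples, for illustration only
lambda_1 = [17.5, 18.0, 18.5, 17.8]
lambda_2 = [22.0, 23.5, 23.0, 22.8]

# E[X / Y] is generally not equal to E[X] / E[Y]
mean_of_ratio = mean(l1 / l2 for l1, l2 in zip(lambda_1, lambda_2))
ratio_of_means = mean(lambda_1) / mean(lambda_2)
print(mean_of_ratio, ratio_of_means)
```

The two numbers are close here but not equal; with real posterior samples the gap can matter.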
3\. What is the mean of $\lambda_1$ **given** that we know $\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)
```
#type your code here.
print(lambda_1_samples[tau_samples < 45].mean())
print(lambda_1_samples.mean())
```
```
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# Tracking Callbacks
```
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
from fastai.callbacks import *
```
This module groups together the callbacks that track one of the metrics computed at the end of each epoch in order to make decisions about training. To show examples of use, we'll use our sample of MNIST and a simple CNN model.
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
show_doc(TerminateOnNaNCallback)
```
Sometimes, training diverges and the loss goes to nan. In that case, there's no point continuing, so this callback stops the training.
```
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit_one_cycle(1,1e4)
```
Using this callback prevents that situation from happening.
```
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy], callbacks=[TerminateOnNaNCallback()])
learn.fit(2,1e4)
```
### Callback methods
You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
```
show_doc(TerminateOnNaNCallback.on_batch_end)
show_doc(TerminateOnNaNCallback.on_epoch_end)
show_doc(EarlyStoppingCallback)
```
This callback tracks the quantity in `monitor` during the training of `learn`. `mode` can be forced to 'min' or 'max' but will automatically try to determine if the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). Will stop training after `patience` epochs if the quantity hasn't improved by `min_delta`.
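The stopping rule can be sketched in plain Python (a simplified illustration of the idea, not fastai's actual implementation), here with `mode='max'` as for an accuracy metric:

```python
def epochs_run(metric_history, min_delta=0.01, patience=3):
    # Stop after `patience` epochs without an improvement of at least
    # `min_delta` over the best value seen so far (mode='max').
    best = float('-inf')
    wait = 0
    for epoch, value in enumerate(metric_history, start=1):
        if value - best > min_delta:
            best = value
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch  # training would stop here
    return len(metric_history)

# Accuracy plateaus after epoch 2, so training stops at epoch 5
history = [0.50, 0.60, 0.605, 0.606, 0.607, 0.70]
print(epochs_run(history))
```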
```
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy],
callback_fns=[partial(EarlyStoppingCallback, monitor='accuracy', min_delta=0.01, patience=3)])
learn.fit(50,1e-42)
```
### Callback methods
You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
```
show_doc(EarlyStoppingCallback.on_train_begin)
show_doc(EarlyStoppingCallback.on_epoch_end)
show_doc(SaveModelCallback)
```
This callback tracks the quantity in `monitor` during the training of `learn`. `mode` can be forced to 'min' or 'max' but will automatically try to determine if the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). Will save the model in `name` whenever determined by `every` ('improvement' or 'epoch'). Loads the best model at the end of training if `every='improvement'`.
### Callback methods
You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
```
show_doc(SaveModelCallback.on_epoch_end)
show_doc(SaveModelCallback.on_train_end)
show_doc(ReduceLROnPlateauCallback)
```
This callback tracks the quantity in `monitor` during the training of `learn`. `mode` can be forced to 'min' or 'max' but will automatically try to determine if the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). Will reduce the learning rate by `factor` after `patience` epochs if the quantity hasn't improved by `min_delta`.
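A simplified sketch of this schedule (illustrative only, not fastai's actual implementation), with `mode='min'` as for a validation loss:

```python
def schedule_lr(loss_history, lr=0.1, factor=0.5, min_delta=0.01, patience=2):
    # Multiply `lr` by `factor` after `patience` epochs without the loss
    # improving by at least `min_delta` (mode='min').
    best = float('inf')
    wait = 0
    lrs = []
    for value in loss_history:
        if best - value > min_delta:
            best = value
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr *= factor  # reduce on plateau
                wait = 0
        lrs.append(lr)
    return lrs

# The loss plateaus for two epochs, so the rate is halved once
print(schedule_lr([1.0, 0.5, 0.499, 0.498, 0.2]))
```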
### Callback methods
You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
```
show_doc(ReduceLROnPlateauCallback.on_train_begin)
show_doc(ReduceLROnPlateauCallback.on_epoch_end)
show_doc(TrackerCallback)
show_doc(TrackerCallback.get_monitor_value)
```
### Callback methods
You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
```
show_doc(TrackerCallback.on_train_begin)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
## New Methods - Please document or move to the undocumented section
## Modern Gradient Boosting Libraries
Earlier we used the naive gradient boosting implementation from scikit-learn, [introduced](https://projecteuclid.org/download/pdf_1/euclid.aos/1013203451) by Friedman in 1999. Since then, many implementations have been proposed that perform better in practice. Today, three libraries implementing gradient boosting are especially popular:
* **XGBoost**. It quickly gained popularity after release and remained the standard until the end of 2016. Its key features were an optimized tree-building procedure and various model regularizations.
* **LightGBM**. Its distinctive feature is the speed with which the ensemble is built. For example, it uses the following trick to speed up training: when building a tree node, instead of iterating over all values of a feature, it iterates over the values of that feature's histogram. Thus, instead of $O(\ell)$ operations, only $O(\text{#bins})$ are needed. Moreover, unlike other libraries, which build trees level by level, LightGBM uses a best-first strategy, i.e., at each step it grows the leaf that yields the largest decrease in the loss. As a result, each tree tends to be a chain with leaves attached.
* **CatBoost**. A library from Yandex. It handles categorical features automatically (even when their values are given as strings). In addition, the algorithm is less sensitive to the choice of specific hyperparameters, which reduces the time spent on hyperparameter tuning.
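The histogram trick can be sketched in a few lines: feature values are bucketed into a fixed number of bins, and only the bin boundaries are tried as split thresholds, so the number of candidates depends on `#bins` rather than on the number of objects (the feature values below are made up):

```python
def histogram_split_candidates(values, n_bins=4):
    # Equal-width binning: only the bin boundaries serve as candidate
    # split thresholds, regardless of how many objects there are.
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [lo + width * i for i in range(1, n_bins)]

feature = [0.1, 0.4, 0.35, 0.8, 0.9, 0.05, 0.6, 0.75, 0.2, 0.55]
exact_candidates = sorted(set(feature))   # one candidate per distinct value
binned_candidates = histogram_split_candidates(feature, n_bins=4)
print(len(exact_candidates), len(binned_candidates))
```

Real implementations use smarter (e.g., gradient-aware) binning, but the candidate-count reduction is the same idea.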
### Main parameters
(lightgbm / catboost)
* `objective` – the loss function the ensemble will be fitted to
* `eta` / `learning_rate` – the learning rate
* `num_iterations` / `n_estimators` – the number of boosting iterations
#### Parameters controlling tree complexity
* `max_depth` – maximum depth
* `max_leaves` / `num_leaves` – maximum number of leaves in a tree
* `gamma` / `min_gain_to_split` – threshold on the loss reduction required to make a split in the tree
* `min_data_in_leaf` – minimum number of objects in a leaf
* `min_sum_hessian_in_leaf` – minimum total weight of the objects in a leaf; the minimum at which a split is made
* `lambda` – (L2) regularization coefficient
* `subsample` / `bagging_fraction` – the fraction of training objects used to build a single tree
* `colsample_bytree` / `feature_fraction` – the fraction of features used to build a single tree
Tuning all these parameters is a true art. But you can start with the most important ones: `learning_rate` and `n_estimators`. Usually one of them is fixed and the other of the two is tuned (for example, fix `n_estimators=1000` and tune `learning_rate`). The next most important parameter is `max_depth`. Since we are interested in shallow trees, it is usually searched over the range [3; 7].
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn')
%matplotlib inline
plt.rcParams['figure.figsize'] = (8, 5)
# !pip install catboost
# !pip install lightgbm
# !pip install xgboost
!pip install mlxtend
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0,
n_classes=2, n_clusters_per_class=2,
flip_y=0.05, class_sep=0.8, random_state=241)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=241)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='YlGn');
```
## Catboost
```
from catboost import CatBoostClassifier
??CatBoostClassifier
```
#### Task 1.
- Train a CatBoostClassifier with default parameters, using 300 trees.
- Plot the decision boundary
- Compute the roc_auc_score
```
from sklearn.metrics import roc_auc_score
from mlxtend.plotting import plot_decision_regions
fig, ax = plt.subplots(1,1)
clf = CatBoostClassifier(iterations=300, logging_level='Silent')
clf.fit(X_train, y_train)
plot_decision_regions(X_test, y_test, clf, ax=ax)
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:,1]))
```
### Learning rate
Default is 0.03
#### Task 2.
- Train CatBoostClassifier with different values of `learning_rate`.
- Compute roc_auc_score on the train and test sets
- Plot roc_auc as a function of the learning rate
```
lrs = np.arange(0.001, 1.1, 0.005)
quals_train = [] # to store roc auc on trian
quals_test = [] # to store roc auc on test
for l in lrs:
clf = CatBoostClassifier(iterations=150, logging_level='Silent',
learning_rate=l)
clf.fit(X_train, y_train)
q_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:,1])
q_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:,1])
quals_train.append(q_train)
quals_test.append(q_test)
plt.plot(lrs, quals_train, marker='.', label='train')
plt.plot(lrs, quals_test, marker='.', label='test')
plt.xlabel('LR')
plt.ylabel('AUC-ROC')
plt.legend()
```
### Number of trees
It is also important to tune the number of trees
#### Task 3.
- Train CatBoostClassifier with different values of `iterations`.
- Compute roc_auc_score on the train and test sets
- Plot roc_auc as a function of the ensemble size
```
%%time
n_trees = [1, 5, 10, 100, 200, 300, 400, 500, 600, 700]
quals_train = []
quals_test = []
for n in n_trees:
clf = CatBoostClassifier(iterations=n, logging_level='Silent', learning_rate=0.02)
clf.fit(X_train, y_train)
q_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:,1])
q_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:,1])
quals_train.append(q_train)
quals_test.append(q_test)
plt.plot(n_trees, quals_train, marker='.', label='train')
plt.plot(n_trees, quals_test, marker='.', label='test')
plt.xlabel('N trees')
plt.ylabel('AUC-ROC')
plt.legend()
plt.show()
```
### Staged prediction
How can we do the same thing faster? The CatBoost library provides the `staged_predict_proba` method for this
```
%%time
# train the model with max trees
clf = CatBoostClassifier(iterations=700,
logging_level='Silent',
learning_rate = 0.01)
clf.fit(X_train, y_train)
# obtain staged predictiond on test
predictions_test = clf.staged_predict_proba(
data=X_test,
ntree_start=0,
ntree_end=700,
eval_period=25
)
# obtain staged predictiond on train
predictions_train = clf.staged_predict_proba(
data=X_train,
ntree_start=0,
ntree_end=700,
eval_period=25
)
# calculate roc_auc
quals_train = []
quals_test = []
n_trees = []
for iteration, (test_pred, train_pred) in enumerate(zip(predictions_test, predictions_train)):
n_trees.append((iteration+1)*25)
quals_test.append(roc_auc_score(y_test, test_pred[:, 1]))
quals_train.append(roc_auc_score(y_train, train_pred[:, 1]))
plt.plot(n_trees, quals_train, marker='.', label='train')
plt.plot(n_trees, quals_test, marker='.', label='test')
plt.xlabel('Number of trees')
plt.ylabel('AUC-ROC')
plt.legend()
plt.show()
```
## LightGBM
```
from lightgbm import LGBMClassifier
??LGBMClassifier
```
#### Task 4.
- Train an LGBMClassifier with default parameters, using 300 trees.
- Plot the decision boundary
- Compute the roc_auc_score
```
clf = LGBMClassifier(n_estimators=300)
clf.fit(X_train, y_train)
plot_decision_regions(X_test, y_test, clf)
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:,1]))
n_trees = [1, 5, 10, 100, 200, 300, 400, 500, 600, 700]
quals_train = []
quals_test = []
for n in n_trees:
clf = LGBMClassifier(n_estimators=n)
clf.fit(X_train, y_train)
q_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1])
q_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
quals_train.append(q_train)
quals_test.append(q_test)
plt.plot(n_trees, quals_train, marker='.', label='train')
plt.plot(n_trees, quals_test, marker='.', label='test')
plt.xlabel('Number of trees')
plt.ylabel('AUC-ROC')
plt.legend()
```
Now let's fix the number of trees and vary the maximum depth instead
```
depth = list(range(1, 17, 2))
quals_train = []
quals_test = []
for d in depth:
lgb = LGBMClassifier(n_estimators=100, max_depth=d)
lgb.fit(X_train, y_train)
q_train = roc_auc_score(y_train, lgb.predict_proba(X_train)[:, 1])
q_test = roc_auc_score(y_test, lgb.predict_proba(X_test)[:, 1])
quals_train.append(q_train)
quals_test.append(q_test)
plt.plot(depth, quals_train, marker='.', label='train')
plt.plot(depth, quals_test, marker='.', label='test')
plt.xlabel('Depth of trees')
plt.ylabel('AUC-ROC')
plt.legend()
```
And compare with CatBoost:
#### Task 5.
- Train CatBoostClassifier with different depths
- Compute the roc_auc_score
- Compare the best result with LGBM
```
depth = list(range(1, 17, 2))
quals_train = []
quals_test = []
for d in depth:
    cb = CatBoostClassifier(n_estimators=100, max_depth=d, logging_level='Silent').fit(X_train, y_train)
    quals_train.append(roc_auc_score(y_train, cb.predict_proba(X_train)[:, 1]))
    quals_test.append(roc_auc_score(y_test, cb.predict_proba(X_test)[:, 1]))
```
Now that we have trained some good models, we should save them!
```
clf = CatBoostClassifier(n_estimators=200, learning_rate=0.01,
max_depth=5, logging_level="Silent")
clf.fit(X_train, y_train)
clf.save_model('catboost.cbm', format='cbm');
lgb = LGBMClassifier(n_estimators=100, max_depth=3)
lgb.fit(X_train, y_train)
lgb.booster_.save_model('lightgbm.txt')
```
And load them back when we need to apply them
```
import lightgbm
lgb = lightgbm.Booster(model_file='lightgbm.txt')
clf = clf.load_model('catboost.cbm')
```
## Blending and Stacking
Blending is a "meta-algorithm" whose prediction is built as a weighted sum of the base algorithms' predictions.
Let's look at a simple example of blending boosting and linear regression.
```
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error
data = load_boston()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=10)
```
#### Task 6.
- Train a CatBoostRegressor with the following hyperparameters:
`iterations=100, max_depth=4, learning_rate=0.01, loss_function='RMSE'`
- Compute the predictions and the RMSE on the train and test sets
```
from catboost import CatBoostRegressor
cbm = CatBoostRegressor(iterations=100, max_depth=4, learning_rate=0.01,
                        loss_function='RMSE', logging_level='Silent')
cbm.fit(X_train, y_train)
y_pred_cbm = cbm.predict(X_test)
y_train_pred_cbm = cbm.predict(X_train)
print("Train RMSE = %.4f" % np.sqrt(mean_squared_error(y_train, y_train_pred_cbm)))
print("Test RMSE = %.4f" % np.sqrt(mean_squared_error(y_test, y_pred_cbm)))
```
#### Task 7.
- Scale the data (StandardScaler) and train a linear regression
- Compute the predictions and the RMSE on the train and test sets
```
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
lr = LinearRegression()
lr.fit(X_train_scaled, y_train)
y_pred_lr = lr.predict(X_test_scaled)
y_train_lr = lr.predict(X_train_scaled)
print("Train RMSE = %.4f" % np.sqrt(mean_squared_error(y_train, y_train_lr)))
print("Test RMSE = %.4f" % np.sqrt(mean_squared_error(y_test, y_pred_lr)))
```
#### Blending
We will assume that the new algorithm $a(x)$ can be written as
$$
a(x)
=
\sum_{n = 1}^{N}
w_n b_n(x),
$$
where $\sum\limits_{n=1}^N w_n =1$
We now need to train a linear regression on the predictions of the two algorithms trained above
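Before fitting the weights with a regression, the idea itself can be shown with a tiny sketch (the predictions and the weight below are made up): the blended prediction is a convex combination of the base predictions.

```python
def blend(pred_a, pred_b, w=0.7):
    # a(x) = w * b1(x) + (1 - w) * b2(x); the two weights sum to 1
    return [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]

boosting_pred = [24.0, 18.5, 31.0]  # hypothetical boosting predictions
linear_pred = [22.0, 20.0, 29.5]    # hypothetical linear-model predictions
print(blend(boosting_pred, linear_pred, w=0.7))
```

Training a linear regression on the base predictions, as below, simply learns those weights from data instead of fixing them by hand.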
#### Task 8.
```
predictions_train = pd.DataFrame([y_train_lr, y_train_pred_cbm]).T
predictions_test = pd.DataFrame([y_pred_lr, y_pred_cbm]).T
lr_blend = LinearRegression()
lr_blend.fit(predictions_train, y_train)
y_pred_blend = lr_blend.predict(predictions_test)
y_train_blend = lr_blend.predict(predictions_train)
print("Train RMSE = %.4f" % np.sqrt(mean_squared_error(y_train, y_train_blend)))
print("Test RMSE = %.4f" % np.sqrt(mean_squared_error(y_test, y_pred_blend)))
```
#### Stacking
Now let's train a more complex combining function
$$
a(x) = f(b_1(x), b_2(x))
$$
where $f$ is a trained gradient boosting model
#### Task 9.
```
from lightgbm import LGBMRegressor
lgb_stack = LGBMRegressor(n_estimators=100, max_depth=2)
lgb_stack.fit(predictions_train, y_train)
y_pred_stack = lgb_stack.predict(predictions_test)
print("Test RMSE = %.4f" % np.sqrt(mean_squared_error(y_test, y_pred_stack)))
```
As a result, the quality on the test set is better than that of either algorithm on its own.
Useful links:
* [Video on stacking](https://www.coursera.org/lecture/competitive-data-science/stacking-Qdtt6)
## XGBoost
```
# based on https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/
import pandas as pd
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
titanic = pd.read_csv('titanic.csv')
X = titanic[['Pclass', 'Age', 'SibSp', 'Fare']]
y = titanic.Survived.values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
from xgboost.sklearn import XGBClassifier
??XGBClassifier
def modelfit(alg, dtrain, y, X_test=None, y_test=None, test=True):
#Fit the algorithm on the data
alg.fit(dtrain, y, eval_metric='auc')
#Predict training set:
dtrain_predictions = alg.predict(dtrain)
dtrain_predprob = alg.predict_proba(dtrain)[:,1]
#Print model report:
print ("\nModel Report")
print ("Accuracy (Train): %.4g" % metrics.accuracy_score(y, dtrain_predictions))
print ("AUC Score (Train): %f" % metrics.roc_auc_score(y, dtrain_predprob))
if test:
dtest_predictions = alg.predict(X_test)
dtest_predprob = alg.predict_proba(X_test)[:,1]
print ("Accuracy (Test): %.4g" % metrics.accuracy_score(y_test, dtest_predictions))
print ("AUC Score (Test): %f" % metrics.roc_auc_score(y_test, dtest_predprob))
# plot feature importance
feat_imp = pd.Series(alg.get_booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
```
These parameters are used to define the optimization objective the metric to be calculated at each step.
<table><tr>
<td> <img src="https://github.com/AKuzina/ml_dpo/blob/main/practicals/xgb.png?raw=1" alt="Drawing" style="width: 700px;"/> </td>
</tr></table>
```
xgb1 = XGBClassifier(objective='binary:logistic',
eval_metric='auc',
learning_rate =0.1,
n_estimators=1000,
booster='gbtree',
seed=27)
modelfit(xgb1, X_train, y_train, X_test, y_test)
```
#### Task 10.
- Define a grid for the parameters listed below
`max_depth` - Maximum tree depth for base learners.
`gamma` - Minimum loss reduction required to make a further partition on a leaf node of the tree.
`subsample` - Subsample ratio of the training instance.
`colsample_bytree` - Subsample ratio of columns when constructing each tree.
`reg_alpha` - L1 regularization term on weights
- Run the search with `GridSearchCV` using 5 folds. Use an ensemble of 100 trees.
```
param_grid = {
    'max_depth': [3, 5, 7],
    'gamma': [0, 0.1, 0.2],
    'subsample': [0.6, 0.8, 1.0],
    'colsample_bytree': [0.6, 0.8, 1.0],
    'reg_alpha': [0, 0.01, 0.1],
}
gsearch1 = GridSearchCV(
    estimator=XGBClassifier(objective='binary:logistic', eval_metric='auc',
                            learning_rate=0.1, n_estimators=100, seed=27),
    param_grid=param_grid, scoring='roc_auc', cv=5)
gsearch1.fit(X_train, y_train)
gsearch1.best_params_, gsearch1.best_score_
```
Now we can use more trees but a smaller learning rate
```
xgb_best = XGBClassifier(objective='binary:logistic',
eval_metric='auc',
learning_rate =0.01,
n_estimators=1000,
booster='gbtree',
seed=27,
max_depth = gsearch1.best_params_['max_depth'],
gamma = gsearch1.best_params_['gamma'],
subsample = gsearch1.best_params_['subsample'],
colsample_bytree = gsearch1.best_params_['colsample_bytree'],
reg_alpha = gsearch1.best_params_['reg_alpha']
)
modelfit(xgb_best, X_train, y_train, X_test, y_test)
```
## Feature Importance
In this course we discuss in detail how to achieve good quality on a task: given a dataset $X, y$, build an algorithm with the smallest error. However, a client often also needs to understand how the algorithm works and why it makes the predictions it does. Let's discuss several motivations.
#### Trust in the algorithm
For example, in banks financial operations are executed based on decisions made by an algorithm, and the manager responsible for those operations will be willing to use the algorithm only if they understand that its decisions are justified. For this reason, banks very often use simple linear algorithms. Another example comes from medicine: since the cost of a mistake can be very high, doctors are willing to use only interpretable algorithms.
#### Absence of discrimination (fairness)
Again a bank example: a credit scoring algorithm must not take into account the borrower's race (racial bias) or gender (gender bias). Yet such dependencies can often be present in the (historical) dataset on which the algorithm was trained. One more example: neural word embeddings are known to contain gender bias. If these embeddings were used to build a résumé search system for a recruiter, then, say, for the query `technical skill` the recruiter might see women's résumés at the bottom of the ranked list.
#### Accounting for context
The data an algorithm is trained on does not capture the entire problem domain. Interpreting the algorithm helps assess how well the discovered dependencies relate to real life. If the predictions are interpretable, this also indicates good generalization ability of the algorithm.
Now let's discuss several ways to estimate feature importance.
### Linear model weights
The simplest approach, already covered in the seminar on linear models: after fitting the model, each feature gets its own weight; if the features are scaled, then the larger its absolute value, the more important the feature, and its sign indicates whether the feature's influence on the target variable is positive or negative.
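A tiny sketch of this idea (the coefficients below are made up): with standardized features, sorting by absolute weight ranks the features, and the sign gives the direction of the effect.

```python
# Hypothetical coefficients of a linear model fitted on scaled features
weights = {'ROOMS': 3.2, 'CRIME': -1.1, 'TAX': -0.4}

# Larger |w| -> more important; the sign gives the direction of influence
ranked = sorted(weights, key=lambda f: abs(weights[f]), reverse=True)
print(ranked)
```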
```
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error
data = load_boston()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=10)
```
### FSTR (feature strength)
[Fstr](https://catboost.ai/docs/concepts/fstr.html) defines the importance of a feature as how much, on average, the model's prediction changes when the value of that feature (the value of the split) changes.
It can be computed as:
$$feature\_importance_{F} = \sum_{tree, leaves_F} (v_1 - avr)^2\cdot c_1 +(v_2 - avr)^2\cdot c_2 = \left(v_1 - v_2\right)^2\frac{c_1c_2}{c_1 + c_2}\\
\qquad avr = \frac{v_1 \cdot c_1 + v_2 \cdot c_2}{c_1 + c_2}.$$
We compare the leaves that differ in the value of the split in the node on the path to them: if the split condition is satisfied, the object goes to the left subtree, otherwise to the right one.
$c_1, c_2$ are the numbers of objects of the training dataset that fall into the left and right subtrees respectively, or the total weights of those objects if weights are used; $v_1, v_2$ are the model values in the left and right subtrees (for example, the means)
The $feature\_importance$ values are then normalized so that they sum to 100.
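The two forms of the formula above agree, which is easy to verify numerically (the leaf values and object counts below are made up):

```python
c1, c2 = 30, 70    # objects falling into the left / right subtree
v1, v2 = 5.0, 8.0  # model values in the two leaves

avr = (v1 * c1 + v2 * c2) / (c1 + c2)
sum_form = (v1 - avr) ** 2 * c1 + (v2 - avr) ** 2 * c2
closed_form = (v1 - v2) ** 2 * c1 * c2 / (c1 + c2)
print(sum_form, closed_form)  # the two expressions coincide
```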
```
# use the CatBoost regressor `cbm` trained on the Boston data above
for val, name in sorted(zip(cbm.feature_importances_, data.feature_names))[::-1]:
print(name, val)
feature_importances = pd.DataFrame({'importance':cbm.feature_importances_}, index=data.feature_names)
feature_importances.sort_values('importance').plot.bar();
print(data.DESCR)
```
### Impurity-based feature importances
The importance of a feature is computed as the (normalized) total reduction of the impurity criterion brought by that feature.
Here is a minimal example of how to obtain such an estimate with the sklearn implementation of RandomForest
```
from sklearn.ensemble import RandomForestRegressor
clf = RandomForestRegressor(n_estimators=100, oob_score=True)
clf.fit(X_train, y_train)
clf.feature_importances_
feature_importances = pd.DataFrame({'importance':clf.feature_importances_}, index=X_train.columns)
feature_importances.sort_values('importance').plot.bar();
```
```
%cd ../
from torchsignal.datasets import OPENBMI
from torchsignal.datasets.multiplesubjects import MultipleSubjects
from torchsignal.trainer.multitask import Multitask_Trainer
from torchsignal.model import MultitaskSSVEP
import numpy as np
import torch
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
config = {
"exp_name": "multitask-run1",
"seed": 12,
"segment_config": {
"window_len": 1,
"shift_len": 1000,
"sample_rate": 1000,
"add_segment_axis": True
},
"bandpass_config": {
"sample_rate": 1000,
"lowcut": 1,
"highcut": 40,
"order": 6
},
"train_subject_ids": {
"low": 1,
"high": 54
},
"test_subject_ids": {
"low": 1,
"high": 54
},
"root": "../data/openbmi",
"selected_channels": ['P7', 'P3', 'Pz', 'P4', 'P8', 'PO9', 'O1', 'Oz', 'O2', 'PO10'],
"sessions": [1,2],
"tsdata": False,
"num_classes": 4,
"num_channel": 10,
"batchsize": 256,
"learning_rate": 0.001,
"epochs": 100,
"patience": 5,
"early_stopping": 10,
"model": {
"n1": 4,
"kernel_window_ssvep": 59,
"kernel_window": 19,
"conv_3_dilation": 4,
"conv_4_dilation": 4
},
"gpu": 0,
"multitask": True,
"runkfold": 4,
"check_model": True
}
device = torch.device("cuda:"+str(config['gpu']) if torch.cuda.is_available() else "cpu")
print('device', device)
```
# Load Data - OPENBMI
```
subject_ids = list(np.arange(config['train_subject_ids']['low'], config['train_subject_ids']['high']+1, dtype=int))
openbmi_data = MultipleSubjects(
dataset=OPENBMI,
root=config['root'],
subject_ids=subject_ids,
sessions=config['sessions'],
selected_channels=config['selected_channels'],
segment_config=config['segment_config'],
bandpass_config=config['bandpass_config'],
one_hot_labels=True,
)
```
# Train-Test model - leave one subject out
```
train_loader, val_loader, test_loader = openbmi_data.leave_one_subject_out(selected_subject_id=1)
dataloaders_dict = {
'train': train_loader,
'val': val_loader
}
check_model = config['check_model'] if 'check_model' in config else False
if check_model:
x = torch.ones((20, 10, 1000)).to(device)
if config['tsdata'] == True:
x = torch.ones((40, config['num_channel'], config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'])).to(device)
model = MultitaskSSVEP(num_channel=config['num_channel'],
num_classes=config['num_classes'],
signal_length=config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'],
filters_n1= config['model']['n1'],
kernel_window_ssvep= config['model']['kernel_window_ssvep'],
kernel_window= config['model']['kernel_window'],
conv_3_dilation= config['model']['conv_3_dilation'],
conv_4_dilation= config['model']['conv_4_dilation'],
).to(device)
out = model(x)
print('output',out.shape)
def count_params(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print('model size',count_params(model))
del model
del out
model = MultitaskSSVEP(num_channel=config['num_channel'],
num_classes=config['num_classes'],
signal_length=config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'],
filters_n1= config['model']['n1'],
kernel_window_ssvep= config['model']['kernel_window_ssvep'],
kernel_window= config['model']['kernel_window'],
conv_3_dilation= config['model']['conv_3_dilation'],
conv_4_dilation= config['model']['conv_4_dilation'],
).to(device)
epochs=config['epochs'] if 'epochs' in config else 50
patience=config['patience'] if 'patience' in config else 20
early_stopping=config['early_stopping'] if 'early_stopping' in config else 40
trainer = Multitask_Trainer(model, model_name="multitask", device=device, num_classes=config['num_classes'], multitask_learning=True, patience=patience, verbose=True)
trainer.fit(dataloaders_dict, num_epochs=epochs, early_stopping=early_stopping, topk_accuracy=1, save_model=False)
test_loss, test_acc, test_metric = trainer.validate(test_loader, 1)
print('test: {:.5f}, {:.5f}, {:.5f}'.format(test_loss, test_acc, test_metric))
```
# Train-Test model - k-fold and leave one subject out
```
subject_kfold_acc = {}
subject_kfold_f1 = {}
test_subject_ids = list(np.arange(config['test_subject_ids']['low'], config['test_subject_ids']['high']+1, dtype=int))
for subject_id in test_subject_ids:
print('Subject', subject_id)
kfold_acc = []
kfold_f1 = []
for k in range(config['runkfold']):
openbmi_data.split_by_kfold(kfold_k=k, kfold_split=config['runkfold'])
train_loader, val_loader, test_loader = openbmi_data.leave_one_subject_out(selected_subject_id=subject_id, dataloader_batchsize=config['batchsize'])
dataloaders_dict = {
'train': train_loader,
'val': val_loader
}
model = MultitaskSSVEP(num_channel=config['num_channel'],
num_classes=config['num_classes'],
signal_length=config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'],
filters_n1= config['model']['n1'],
kernel_window_ssvep= config['model']['kernel_window_ssvep'],
kernel_window= config['model']['kernel_window'],
conv_3_dilation= config['model']['conv_3_dilation'],
conv_4_dilation= config['model']['conv_4_dilation'],
).to(device)
epochs = config.get('epochs', 50)
patience = config.get('patience', 20)
early_stopping = config.get('early_stopping', 40)
trainer = Multitask_Trainer(model, model_name="Network064b_1-8sub", device=device, num_classes=config['num_classes'], multitask_learning=True, patience=patience, verbose=False)
trainer.fit(dataloaders_dict, num_epochs=epochs, early_stopping=early_stopping, topk_accuracy=1, save_model=True)
test_loss, test_acc, test_metric = trainer.validate(test_loader, 1)
# print('test: {:.5f}, {:.5f}, {:.5f}'.format(test_loss, test_acc, test_metric))
kfold_acc.append(test_acc)
kfold_f1.append(test_metric)
subject_kfold_acc[subject_id] = kfold_acc
subject_kfold_f1[subject_id] = kfold_f1
print('results')
print('subject_kfold_acc', subject_kfold_acc)
print('subject_kfold_f1', subject_kfold_f1)
# acc
subjects = []
acc = []
acc_min = 1.0
acc_max = 0.0
for subject_id in subject_kfold_acc:
subjects.append(subject_id)
avg_acc = np.mean(subject_kfold_acc[subject_id])
if avg_acc < acc_min:
acc_min = avg_acc
if avg_acc > acc_max:
acc_max = avg_acc
acc.append(avg_acc)
x_pos = [i for i, _ in enumerate(subjects)]
figure(num=None, figsize=(15, 3), dpi=80, facecolor='w', edgecolor='k')
plt.bar(x_pos, acc, color='skyblue')
plt.xlabel("Subject")
plt.ylabel("Accuracies")
plt.title("Average k-fold Accuracies by subjects")
plt.xticks(x_pos, subjects)
plt.ylim([acc_min-0.02, acc_max+0.02])
plt.show()
# f1
subjects = []
f1 = []
f1_min = 1.0
f1_max = 0.0
for subject_id in subject_kfold_f1:
subjects.append(subject_id)
avg_f1 = np.mean(subject_kfold_f1[subject_id])
if avg_f1 < f1_min:
f1_min = avg_f1
if avg_f1 > f1_max:
f1_max = avg_f1
f1.append(avg_f1)
x_pos = [i for i, _ in enumerate(subjects)]
figure(num=None, figsize=(15, 3), dpi=80, facecolor='w', edgecolor='k')
plt.bar(x_pos, f1, color='skyblue')
plt.xlabel("Subject")
plt.ylabel("F1 scores")
plt.title("Average k-fold F1 by subjects")
plt.xticks(x_pos, subjects)
plt.ylim([f1_min-0.02, f1_max+0.02])
plt.show()
print('Average acc:', np.mean(acc))
print('Average f1:', np.mean(f1))
```
**Exploratory Data Analysis for HMDA**
Ideas:
- Outcome Variable
- Quantity of Filers
- Property Type
- Loan Type
```
%matplotlib inline
import os
import requests
import matplotlib
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
```
**Load data into a dataframe:**
```
filepath = os.path.abspath(os.path.join( "..", "fixtures", "hmda2017sample.csv"))
DATA = pd.read_csv(filepath, low_memory=False)
DATA.head()
DATA = DATA.drop(DATA.columns[0], axis=1)
```
**Summary statistics:**
```
DATA.describe()
```
**Create a binary outcome variable, 'action_taken'**
```
DATA['action_taken'] = DATA.action_taken_name.apply(lambda x: 1 if x in ['Loan purchased by the institution', 'Loan originated'] else 0)
pd.crosstab(DATA['action_taken_name'],DATA['action_taken'], margins=True)
```
#### Making a box plot:
```
matplotlib.style.use('ggplot')
DATA[[ 'population',
'number_of_owner_occupied_units',
'number_of_1_to_4_family_units',
'loan_amount_000s',
'applicant_income_000s'
]].plot(kind='box',figsize=(20,10))
DATA[[ 'population',
'number_of_owner_occupied_units',
'number_of_1_to_4_family_units',
'loan_amount_000s',
'applicant_income_000s'
]].hist(figsize=(20,10)) # Histogram for all features
```
#### Visualizing the distribution with a kernel density estimate:
```
DATA['number_of_1_to_4_family_units'].plot(kind='kde')
```
#### Making a scatter plot matrix:
```
DATA_targ_numeric = DATA[['action_taken',
'tract_to_msamd_income',
'population',
'minority_population',
'number_of_owner_occupied_units',
'number_of_1_to_4_family_units',
'loan_amount_000s',
'hud_median_family_income',
'applicant_income_000s'
]]
# Extract our X and y data
X = DATA_targ_numeric.drop('action_taken', axis=1)  # drop the target column ([:-1] would drop the last row, not a column)
y = DATA_targ_numeric['action_taken']
# Create a scatter matrix of the dataframe features
scatter_matrix = scatter_matrix(X, alpha=0.2, figsize=(12, 12), diagonal='kde')
for ax in scatter_matrix.ravel():
ax.set_xlabel(ax.get_xlabel(), fontsize = 6, rotation = 90)
ax.set_ylabel(ax.get_ylabel(), fontsize = 6, rotation = 0)
plt.show()
```
### Don't forget about Matplotlib...
Sometimes you'll want to do something a bit more custom (or you'll want to figure out how to tweak the labels, change the colors, make small multiples, etc.), so you'll want to go straight to the Matplotlib documentation.
You will learn more about matplotlib.pyplot in the next lab.
#### Tweak the labels
For example, say we want to tweak the labels on one of our graphs:
```
x = [1, 2, 3, 4]
y = [1, 4, 9, 6]
labels = ['Frogs', 'Hogs', 'Bogs', 'Slogs']
plt.plot(x, y, 'ro')
# You can specify a rotation for the tick labels in degrees or with keywords.
plt.xticks(x, labels, rotation=30)
# Pad margins so that markers don't get clipped by the axes
plt.margins(0.2)
# Tweak spacing to prevent clipping of tick-labels
plt.subplots_adjust(bottom=0.15)
plt.show()
```
# Seaborn
## Obtaining the Data For the Census Dataset
### Exploratory Data Analysis (EDA)
[Seaborn](https://seaborn.pydata.org/) is another great Python visualization library to have up your sleeve.
Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. For a brief introduction to the ideas behind the package, you can read the introductory notes. More practical information is on the installation page. You may also want to browse the example gallery to get a sense for what you can do with seaborn and then check out the tutorial and API reference to find out how.
Seaborn has a lot of the same methods as Pandas, like [boxplots](http://seaborn.pydata.org/generated/seaborn.boxplot.html?highlight=box%2520plot#seaborn.boxplot) and [histograms](http://seaborn.pydata.org/generated/seaborn.distplot.html) (albeit with slightly different syntax!), but it also comes with some novel tools.
We will now use the census dataset to explore the use of visualizations in feature analysis and selection using this library.
#### Making a Countplot:
In this dataset, our target variable is data['income'] which is categorical. It would be interesting to see the frequencies of each class, relative to the target of our classifier. To do this, we can use the countplot function from the Python visualization package Seaborn to count the occurrences of each data point. Let's take a look at the counts of different categories in data['occupation'] and in data['education'] — two likely predictors of income in the Census data:
The [Countplot](https://seaborn.pydata.org/generated/seaborn.countplot.html) function accepts either an x or a y argument to specify if this is a bar plot or a column plot. We chose to use the y argument so that the labels would be readable. The hue argument specifies a column for comparison; in this case we're concerned with the relationship of our categorical variables to the target income. Go ahead and explore other variables in the dataset, for example data.race and data.sex to see if those values are predictive of the level of income or not!
```
DATA.columns
ax = sns.countplot(y='loan_type_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='state_abbr', hue='action_taken', data=DATA,)
ax = sns.countplot(y='purchaser_type_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='property_type_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='loan_purpose_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='hoepa_status_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='agency_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='applicant_sex_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='applicant_ethnicity_name', hue='action_taken', data=DATA,)
ax = sns.countplot(y='applicant_race_name_1', hue='action_taken', data=DATA,)
```
```
# TODO
# 1. # of words
# 2. # of sensor types
# 3. how bag of words clustering works
# 4. how data feature classification works on sensor types
# 5. how data feature classification works on tag classification
# 6. # of unique sentence structure
import json
from functools import reduce
import os.path
import os
import random
import numpy as np
import pandas as pd
from collections import defaultdict
from scipy.stats import entropy
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.gaussian_process.kernels import RBF
from sklearn.gaussian_process import GaussianProcessClassifier
nae_dict = {
'bonner': ['607', '608', '609', '557', '610'],
'ap_m': ['514', '513','604'],
'bsb': ['519', '568', '567', '566', '564', '565'],
'ebu3b': ['505', '506']
}
def counterize_feature(feat):
indexList = [not np.isnan(val) for val in feat]
maxVal = max(feat.loc[indexList])
minVal = min(feat.loc[indexList])
gran = 100
interval = (maxVal - minVal) / float(gran)
keys = np.arange(minVal,maxVal,interval)
resultDict = defaultdict(int)
for key, val in feat.items():
try:
if np.isnan(val):
resultDict[None] += 1
continue
diffList = [abs(key-val) for key in keys]
minVal = min(diffList)
minIdx = diffList.index(minVal)
minKey = keys[minIdx]
resultDict[minKey] += 1
except Exception:
print(key, val)
return resultDict
true_df.loc[true_df['Unique Identifier']=='505_0_3000003']['Schema Label'].ravel()[0]
#sklearn.ensemble.RandomForestClassifier
# calc_accuracy is called below but was never defined; a minimal definition:
def calc_accuracy(srcid_list, pred_y):
    # Fraction of predictions matching the labeled schema labels in true_df.
    correct_cnt = 0
    for srcid, pred in zip(srcid_list, pred_y):
        true_label = true_df.loc[true_df['Unique Identifier']==srcid]['Schema Label'].ravel()[0]
        if true_label == pred:
            correct_cnt += 1
    return correct_cnt / len(srcid_list)
building_list = ['ebu3b']
for building_name in building_list:
print("============ %s ==========="%building_name)
with open('metadata/%s_sentence_dict.json'%building_name, 'r') as fp:
sentence_dict = json.load(fp)
srcid_list = list(sentence_dict.keys())
# 1. Number of unique words
adder = lambda x,y:x+y
num_remover = lambda xlist: ["number" if x.isdigit() else x for x in xlist]
total_word_set = set(reduce(adder, map(num_remover,sentence_dict.values()), []))
print("# of unique words: %d"%(len(total_word_set)))
# 2. of sensor types
labeled_metadata_filename = 'metadata/%s_sensor_types_location.csv'%building_name
if os.path.isfile(labeled_metadata_filename):
true_df = pd.read_csv(labeled_metadata_filename)
else:
true_df = None
if isinstance(true_df, pd.DataFrame):
sensor_type_set = set(true_df['Schema Label'].ravel())
print("# of unique sensor types: %d"%(len(sensor_type_set)))
else:
sensor_type_set = None
# 3. how bag of words clustering works
with open('model/%s_word_clustering.json'%building_name, 'r') as fp:
cluster_dict = json.load(fp)
print("# of word clusterings: %d"%(len(cluster_dict)))
small_cluster_num = 0
large_cluster_num = 0
for cluster_id, srcids in cluster_dict.items():
if len(srcids)<5:
small_cluster_num +=1
else:
large_cluster_num +=1
print("# of word small (<5)clusterings: %d"%small_cluster_num)
print("# of word large (>=5)clusterings: %d"%large_cluster_num)
# 4. how data feature classification works on sensor types
with open('model/fe_%s.json'%building_name, 'r') as fp:
#data_feature_dict = json.load(fp)
pass
with open('model/fe_%s_normalized.json'%building_name, 'r') as fp:
data_feature_dict = json.load(fp)
pass
feature_num = len(list(data_feature_dict.values())[0])
data_available_srcid_list = list(data_feature_dict.keys())
if isinstance(true_df, pd.DataFrame):
sample_num = 500
sample_idx_list = random.sample(range(0,len(data_feature_dict)), sample_num)
learning_srcid_list = [data_available_srcid_list[sample_idx]
for sample_idx in sample_idx_list]
learning_x = [data_feature_dict[srcid] for srcid in learning_srcid_list]
learning_y = [true_df.loc[true_df['Unique Identifier']==srcid]
['Schema Label'].ravel()[0]
for srcid in learning_srcid_list]
test_srcid_list = [srcid for srcid in data_available_srcid_list
if srcid not in learning_srcid_list]
test_x = [data_feature_dict[srcid] for srcid in test_srcid_list]
classifier_list = [RandomForestClassifier(),
AdaBoostClassifier(),
MLPClassifier(),
KNeighborsClassifier(),
SVC(),
GaussianNB(),
DecisionTreeClassifier()
]
for classifier in classifier_list:
classifier.fit(learning_x, learning_y)
test_y = classifier.predict(test_x)
precision = calc_accuracy(test_srcid_list, test_y)
print(type(classifier).__name__, precision)
# 5. How entropy varies in clusters
entropy_dict = dict()
for cluster_id, cluster in cluster_dict.items():
entropy_list = list()
for feature_idx in range(0,feature_num):
entropy_list.append(\
entropy([data_feature_dict[srcid][feature_idx] + 0.01
for srcid in cluster #random_sample_srcid_list \
if srcid in data_available_srcid_list]))
entropy_dict[cluster_id] = entropy_list
# 5. how data feature classification works on tag classification
#if isinstance()
# 6. # of unique sentence structure
def feature_check(data_feature_dict):
for srcid, features in data_feature_dict.items():
for feat in features:
#if np.isnan(feat):
if feat < -100:
print(srcid, features)
correct_cnt = 0
for i, srcid in enumerate(test_srcid_list):
schema_label = true_df.loc[true_df['Unique Identifier']==srcid]['Schema Label'].ravel()[0]
if schema_label==test_y[i]:
correct_cnt += 1
print(correct_cnt)
print(correct_cnt/len(test_srcid_list))
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed training with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />TensorFlow.org에서 보기</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ko/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this is an exact and up-to-date reflection of the [official English documentation](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/keras.ipynb). To suggest improvements to this translation, send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
## Overview
The `tf.distribute.Strategy` API provides an abstraction for distributing your training across multiple processing units. Its goal is to let you enable distributed training with minimal changes to existing models and training code.
This tutorial uses `tf.distribute.MirroredStrategy`, which performs in-graph replication with synchronous training on many GPUs on one machine. Essentially, it copies all of the model's variables to each processor. Then it combines the gradients from all processors using [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) and applies the combined value to each copy of the model.
`MirroredStrategy` is one of several distribution strategies available in TensorFlow core. You can read about other strategies in the [distribution strategy guide](../../guide/distribute_strategy.ipynb).
### Keras API
This example uses the `tf.keras` API to build the model and training loop. For writing your own training code, see the [Distributed training with custom training loops](training_loops.ipynb) tutorial.
## Import dependencies
```
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow and TensorFlow Datasets
!pip install tensorflow-gpu==2.0.0-beta1
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
import os
```
## Download the dataset
Download the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets) and load it. This returns a dataset in `tf.data` format.
Setting `with_info` to `True` also loads the metadata for the entire dataset, which is saved here to `info`. Among other things, this metadata object includes the number of train and test examples.
```
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
```
## Define the distribution strategy
Create a `MirroredStrategy` object to handle distribution. It also provides a context manager (`tf.distribute.MirroredStrategy.scope`) inside which you should build your model.
```
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```
## Set up the input pipeline
When training a model with multiple GPUs, increase the batch size to make effective use of the computing resources. In general, use the largest batch size that fits in GPU memory, and tune the learning rate accordingly.
```
# You can also get the number of examples in the dataset
# from info.splits.total_num_examples.
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
```
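The learning-rate tuning mentioned above is not shown in this tutorial. One common heuristic (an assumption on our part, not part of the original code) is to scale a base single-replica rate linearly with the number of replicas, since the global batch size grows the same way:

```python
# Linear scaling heuristic for the learning rate (illustrative values only).
# BASE_LEARNING_RATE and num_replicas are assumptions, not tutorial constants;
# in this tutorial, num_replicas would come from strategy.num_replicas_in_sync.
BASE_LEARNING_RATE = 1e-3
num_replicas = 4

scaled_learning_rate = BASE_LEARNING_RATE * num_replicas
print(scaled_learning_rate)
```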
Pixel values are in the 0-255 range, so they [have to be normalized to the 0-1 range](https://en.wikipedia.org/wiki/Feature_scaling). Define a normalization function.
```
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
```
Apply this function to the training and test data, shuffle the training data, and [batch it for training](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch).
```
train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
```
## Create the model
Create and compile the Keras model in the context of `strategy.scope`.
```
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
```
## Define the callbacks
The callbacks used here are:
* *TensorBoard*: This callback writes a log for TensorBoard, which allows you to visualize the graphs.
* *Model Checkpoint*: This callback saves the model after every epoch.
* *Learning Rate Scheduler*: Using this callback, you can change the learning rate after every epoch or batch.
To illustrate how to add callbacks, we also add a callback that displays the *learning rate* in the notebook.
```
# Define the checkpoint directory to store the checkpoints.
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
if epoch < 3:
return 1e-3
elif epoch >= 3 and epoch < 7:
return 1e-4
else:
return 1e-5
# Callback for printing the learning rate at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print('\nLearning rate for epoch {} is {}'.format(epoch + 1,
model.optimizer.lr.numpy()))
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
```
## Train and evaluate
Now train the model as usual: call the model's `fit` function and pass in the dataset created at the beginning of the tutorial. This step is the same whether or not you are distributing the training.
```
model.fit(train_dataset, epochs=12, callbacks=callbacks)
```
As you can see below, the checkpoints are being saved.
```
# Check the checkpoint directory
!ls {checkpoint_dir}
```
To see how the model performs, load the latest checkpoint and call `evaluate` on the test data.
As usual, call `evaluate` with the appropriate dataset.
```
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss, eval_acc = model.evaluate(eval_dataset)
print('Eval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
```
After downloading the TensorBoard logs, you can run TensorBoard in a terminal as follows to inspect the training results:
```
$ tensorboard --logdir=path/to/log-directory
```
```
!ls -sh ./logs
```
## Export to SavedModel
Export the graph and the variables to the platform-agnostic SavedModel format. After the model is exported, you can load it with or without the strategy scope.
```
path = 'saved_model/'
tf.keras.experimental.export_saved_model(model, path)
```
Load the model without `strategy.scope`.
```
unreplicated_model = tf.keras.experimental.load_from_saved_model(path)
unreplicated_model.compile(
loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)
print('Eval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
```
Load the model with `strategy.scope`.
```
with strategy.scope():
replicated_model = tf.keras.experimental.load_from_saved_model(path)
replicated_model.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
print('Eval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
```
### Examples and tutorials
Here are more examples that use distribution strategies with Keras fit/compile:
1. A [Transformer](https://github.com/tensorflow/models/blob/master/official/transformer/v2/transformer_main.py) example trained using `tf.distribute.MirroredStrategy`.
2. An [NCF](https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py) example trained using `tf.distribute.MirroredStrategy`.
More examples are listed in the [distribution strategy guide](../../guide/distribute_strategy.ipynb#examples_and_tutorials).
## Next steps
* Read the [distribution strategy guide](../../guide/distribute_strategy.ipynb).
* Read the [Distributed training with custom training loops](training_loops.ipynb) tutorial.
Note: `tf.distribute.Strategy` is under active development, and more examples and tutorials will be added in the near future. Please try it out. We welcome your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
Lambda School Data Science, Unit 2: Predictive Modeling
# Applied Modeling, Module 1
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your decisions.
- [ ] Choose your target. Which column in your tabular dataset will you predict?
- [ ] Choose which observations you will use to train, validate, and test your model. And which observations, if any, to exclude.
- [ ] Determine whether your problem is regression or classification.
- [ ] Choose your evaluation metric.
- [ ] Begin with baselines: majority class baseline for classification, or mean baseline for regression, with your metric of choice.
- [ ] Begin to clean and explore your data.
- [ ] Choose which features, if any, to exclude. Would some features "leak" information from the future?
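As a minimal, hedged sketch of the baseline step above (the `label` column and its values are made up, not from the project dataset), a majority-class baseline can be computed with pandas:

```python
import pandas as pd

# Toy stand-in for a classification target column
df = pd.DataFrame({'label': ['yes', 'yes', 'no', 'yes']})

# Majority-class baseline: always predict the most frequent class
majority_class = df['label'].mode()[0]
baseline_accuracy = (df['label'] == majority_class).mean()

print(majority_class, baseline_accuracy)  # yes 0.75
```

Any model worth keeping should beat this accuracy on the validation set.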
## Reading
- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_
- [How Shopify Capital Uses Quantile Regression To Help Merchants Succeed](https://engineering.shopify.com/blogs/engineering/how-shopify-uses-machine-learning-to-help-our-merchants-grow-their-business)
- [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), **by Lambda DS3 student** Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.
- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)
- [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) by Kevin Markham, with video
- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)
## Overview of Intent
## Targets
**Original Feature-Targets**
* BMI
* HealthScore
**Generated Feature-Targets**
* BMI -> Underweight, Average Weight, Overweight, Obese
* health_trajectory (matched polynomial regression of BMI & health scores - still in development)
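A minimal sketch of the trajectory idea above, using `np.polyfit` on made-up BMI-over-time points (the data and polynomial degree are illustrative assumptions, not the project's actual fit):

```python
import numpy as np

# Made-up BMI measurements across survey waves (illustrative only)
years = np.array([0.0, 1.0, 2.0, 3.0])
bmi = np.array([24.0, 24.5, 25.4, 26.1])

# A degree-1 fit; the slope summarizes the BMI trajectory over time.
slope, intercept = np.polyfit(years, bmi, deg=1)
print(round(slope, 2), round(intercept, 2))
```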
## Features
**Original Features**
From Activity Response:
* Case ID only (for join)
From Child Response:
* Case ID only (for join)
From General Response (In Development):
* (In Development)
**Transformed Features**
From Activity Response:
* Total Secondary Eating (drinking not associated with primary meals)
* Total Secondary Drinking (drinking not associated with primary meals)
From Child Response:
* Total Assisted Meals
* Number Children Under 19 in Household
From General Response (In Development):
* (In Development)
## Evaluation Metrics
**Bifurcated design**
One phase will use regression forms to estimate future BMI & BMI Trajectory Over Time (coefficients of line)
Another will use classification in an attempt to model reported health status (1 thru 5; already encoded in data)
**Useful Metrics**
Accuracy scores and confusion matrices for health status. Balanced accuracy may be considered if target distribution is skewed.
r^2 and t-statistics for BMI prediction. Will also look at explained variance to see how much the model is capturing.
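As a small illustration of these metrics on toy arrays (assuming scikit-learn is available; all numbers here are invented):

```python
from sklearn.metrics import balanced_accuracy_score, confusion_matrix, r2_score

# Toy health-status labels (a skewed three-class example)
y_true_cls = [1, 1, 1, 2, 3]
y_pred_cls = [1, 1, 2, 2, 3]
print(balanced_accuracy_score(y_true_cls, y_pred_cls))  # mean per-class recall
print(confusion_matrix(y_true_cls, y_pred_cls))

# Toy BMI regression predictions
y_true_reg = [20.0, 25.0, 30.0]
y_pred_reg = [21.0, 24.0, 31.0]
print(r2_score(y_true_reg, y_pred_reg))
```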
## Data Cleaning and Exploration
**For work on combining datasets and first pass feature engineering, see stitch.ipynb in this folder**
# Example Usage of plotDist
This notebook presents the usage of the `plotDist` and `utilities` module.
The general starting point is a 3-dimensional array `samples` of size `samples.shape = (nObservables, nXrange, nSamples)`, where
* `nObservables` is the number of observables,
* `nXrange` is the number of points of the independent variable of the observables, and
* `nSamples` is the number of statistical samples for each observable at each `xRange` point.
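As a minimal sketch of this shape convention (the sizes are assumptions for illustration), such an array can be created directly with NumPy:

```python
import numpy as np

# Illustrative sizes following the (nObservables, nXrange, nSamples) convention
nObservables, nXrange, nSamples = 4, 32, 100

rng = np.random.default_rng(0)
samples = rng.normal(size=(nObservables, nXrange, nSamples))

# Axis 0: observable, axis 1: x-point, axis 2: statistical sample
print(samples.shape)        # (4, 32, 100)
print(samples[0, 5].shape)  # (100,)
```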
## Init
### Import modules
```
# Numeric and data handling modules
import numpy as np
import pandas as pd
# Plotting modules
import matplotlib.pyplot as plt
# Local modules
import distalysis.plotDist as pD # Plotting
import distalysis.utilities as ut # Data manipulation
```
### Define statistic sample array
Compute a random array for later use. This array follows an exponential shape which is 'smeared' by Gaussian noise.
```
help(ut.generatePseudoSamples)
# Parameters
nC, nT, nSamples = 4, 32, 100
# x-range
nt = np.arange(nT)
```
Define the exponential parameters and generate the pseudo samples.
```
aAA1 = 1./nT; aAA2 = 4./nT
bAA1 = 0.5; bAA2 = 1.0
aBB1 = -aAA1; aBB2 = -aAA2;
bBB1 = bAA1 + nT*aAA1; bBB2 = bAA2 + nT*aAA2
expPars = np.array([
[(aAA1, bAA1), (aAA2, bAA2)], # C_{AA}
[(0., 20.), (0., 20.)], # C_{AB} set to zero
[(0., 20.), (0., 20.)], # C_{BA} set to zero
[(aBB1, bBB1), (aBB2, bBB2)], # C_{BA}
])
samples = ut.generatePseudoSamples(nt, nSamples, expPars)
```
# Single sample plots
In this section you find routines for data of one `samples` array.
## Visualize sample data
```
help(pD.plotSamples)
fig, ax = plt.subplots(dpi=400, figsize=(3, 2))
pD.plotSamples(nt, samples, ax=ax, marker=".", linestyle="None", lw=1, ms=4)
ax.legend(loc="best", fontsize="xx-small")
plt.show(fig)
```
## Plot distribution of individual samples
```
help(pD.plotSampleDistributions)
nTstart = 0; nTstep = 5
obsTitles = [r"$C_{%s}$" % ij for ij in ["AA", "AB", "BA", "BB"]]
fig = pD.plotSampleDistributions(samples, nXStart=nTstart, nXStep=nTstep, obsTitles=obsTitles)
plt.show(fig)
```
## Plot individual Kernel Density Estimates (KDEs) for the distributions
```
help(pD.plotDistribution)
AA = 0; nt = 1
dist = samples[AA, nt]
fig = pD.plotDistribution(dist)
plt.show(fig)
```
# Collective sample plots
In this section you find routines for visualizing dependencies on more than one independent variable.
## Get sample statistics
```
help(ut.getStatisticsFrame)
nTstart = 0; nTstep = 5
obsTitles = [r"$C_{%s}$" % ij for ij in ["AA", "AB", "BA", "BB"]]
df = ut.getStatisticsFrame(samples, nXStart=nTstart, nXStep=nTstep, obsTitles=obsTitles)
print(df.describe())
df.head()
```
## Prepare collective pseudo sample data frame
Assume you have more than one independent variable and want to analyze the collective dependence of the dependent variable.
You can mimic this dependence by adding new columns to the statistic frames.
In this case, the columns are named `nBinSize` and `nSamples`.
```
# Define independt variable ranges
binSizeRange = [1,2,5]
sampleSizeRange = [400, 500, 700, 1000]
nt = np.arange(nT)
# Create storage frame
df = pd.DataFrame()
# Generate pseudo samples for each parameter configuration
for nBinSize in binSizeRange:
for nSamples in sampleSizeRange:
## Generate individual pseudo sample set
samples = ut.generatePseudoSamples(nt, nSamples, expPars)
## Get temporary statistics data frame
tmp = ut.getStatisticsFrame(samples, nXStart=nTstart, nXStep=nTstep, obsTitles=obsTitles)
## Store independent variable parameter
tmp["nBinSize"] = nBinSize
tmp["nSamples"] = nSamples
## Collect in data frame
df = df.append(tmp)
df.head()
```
## Plot parameter dependent errorbars for mean values
```
help(pD.errBarPlot)
g = pD.errBarPlot(df)
plt.show(g)
```
## Summary convergence plot for distributions
Suppose you want to summarize the previous frame for several ensembles at once.
This is done by the `plotFluctuations` method.
In this example, this method computes the average and std of `mean` and `sDev` over `nX` and `observables`.
This information is computed separately for different values of `nBinSize` and `nSamples`.
```
help(ut.getFluctuationFrame)
fluctFrame = ut.getFluctuationFrame(
df,
valueKeys=["mean", "sDev"], # present collective mean and sDev statistics
collectByKeys=["nSamples", "nBinSize"] # group by nSamples and nBinSize
)
fluctFrame.head()
help(pD.plotFluctuations)
fig = pD.plotFluctuations(fluctFrame, valueKey="sDev", axisKey="nSamples")
plt.show(fig)
```
# **Custom Training: Walkthrough `tf-1.x`**
---
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/kyle-w-brown/tensorflow-1.x.git/HEAD)
This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/guide/eager) to:
1. Build a model,
2. Train this model on example data, and
3. Use the model to make predictions about unknown data.
## TensorFlow programming
This guide uses these high-level TensorFlow concepts:
* Enable an [eager execution](https://www.tensorflow.org/guide/eager) development environment,
* Import data with the [Datasets API](https://www.tensorflow.org/guide/datasets),
* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).
This tutorial is structured like many TensorFlow programs:
1. Import and parse the data sets.
2. Select the type of model.
3. Train the model.
4. Evaluate the model's effectiveness.
5. Use the trained model to make predictions.
## Setup program
### Configure imports and eager execution
Import the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).
Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/guide/eager) for more details.
```
%tensorflow_version 1.x
from __future__ import absolute_import, division, print_function
import os
import matplotlib.pyplot as plt
import tensorflow as tf
tf.enable_eager_execution()
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution: {}".format(tf.executing_eagerly()))
```
## The Iris classification problem
Imagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).
The Iris genus comprises about 300 species, but our program will only classify the following three:
* Iris setosa
* Iris virginica
* Iris versicolor
<table>
<tr><td>
<img src="https://www.tensorflow.org/images/iris_three_species.jpg"
alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://commons.wikimedia.org/w/index.php?curid=170298">Iris setosa</a> (by <a href="https://commons.wikimedia.org/wiki/User:Radomil">Radomil</a>, CC BY-SA 3.0), <a href="https://commons.wikimedia.org/w/index.php?curid=248095">Iris versicolor</a>, (by <a href="https://commons.wikimedia.org/wiki/User:Dlanglois">Dlanglois</a>, CC BY-SA 3.0), and <a href="https://www.flickr.com/photos/33397993@N05/3352169862">Iris virginica</a> (by <a href="https://www.flickr.com/photos/33397993@N05">Frank Mayfield</a>, CC BY-SA 2.0).<br/>
</td></tr>
</table>
Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems.
## Import and parse the training dataset
Download the dataset file and convert it into a structure that can be used by this Python program.
### Download the dataset
Download the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file.
```
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
                                           origin=train_dataset_url)
print("Local copy of the dataset file: {}".format(train_dataset_fp))
```
### Inspect the data
This dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peek at the first five entries:
```
!head -n5 {train_dataset_fp}
```
From this view of the dataset, notice the following:
1. The first line is a header containing information about the dataset:
* There are 120 total examples. Each example has four features and one of three possible label names.
2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/#example)* per line, where:
* The first four fields are *[features](https://developers.google.com/machine-learning/glossary/#feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements.
* The last column is the *[label](https://developers.google.com/machine-learning/glossary/#label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.
Let's write that out in code:
```
# column order in CSV file
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("Features: {}".format(feature_names))
print("Label: {}".format(label_name))
```
Each label is associated with a string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:
* `0`: Iris setosa
* `1`: Iris versicolor
* `2`: Iris virginica
For more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology).
```
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
```
### Create a `tf.data.Dataset`
TensorFlow's [Dataset API](https://www.tensorflow.org/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.
Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter.
```
batch_size = 32

train_dataset = tf.contrib.data.make_csv_dataset(
    train_dataset_fp,
    batch_size,
    column_names=column_names,
    label_name=label_name,
    num_epochs=1)
```
The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`
With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features:
```
features, labels = next(iter(train_dataset))
features
```
Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.
You can start to see some clusters by plotting a few features from the batch:
```
plt.scatter(features['petal_length'].numpy(),
            features['sepal_length'].numpy(),
            c=labels.numpy(),
            cmap='viridis')

plt.xlabel("Petal length")
plt.ylabel("Sepal length");
```
To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.
This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension.
```
def pack_features_vector(features, labels):
    """Pack the features into a single array."""
    features = tf.stack(list(features.values()), axis=1)
    return features, labels
```
Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset:
```
train_dataset = train_dataset.map(pack_features_vector)
```
The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples:
```
features, labels = next(iter(train_dataset))
print(features[:5])
```
## Select the type of model
### Why model?
A *[model](https://developers.google.com/machine-learning/crash-course/glossary#model)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.
Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you.
### Select the model
We need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/#neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/#hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/#neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/#fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer:
<table>
<tr><td>
<img src="https://www.tensorflow.org/images/custom_estimators/full_network.png"
alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs">
</td></tr>
<tr><td align="center">
<b>Figure 2.</b> A neural network with features, hidden layers, and predictions.<br/>
</td></tr>
</table>
When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossary#inference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*.
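The breakdown above can be checked with a couple of lines of plain Python. The numbers are the illustrative probabilities from Figure 2, not real model output:

```
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
probs = [0.02, 0.95, 0.03]  # illustrative prediction from Figure 2

# A valid probability distribution sums to 1.0 ...
assert abs(sum(probs) - 1.0) < 1e-9

# ... and the predicted species is the class with the highest probability.
predicted = class_names[probs.index(max(probs))]
print(predicted)  # Iris versicolor
```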
### Create a model using Keras
The TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.
The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required.
```
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),  # input shape required
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)
])
```
The *[activation function](https://developers.google.com/machine-learning/crash-course/glossary#activation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossary#ReLU) is common for hidden layers.
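The claim that a stack of layers without non-linearities collapses to a single layer can be verified with a small NumPy sketch (random weights, no TensorFlow required):

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))                             # a batch of 5 examples, 4 features
W1, W2 = rng.normal(size=(4, 10)), rng.normal(size=(10, 3))

# Two stacked *linear* layers (no activation in between) ...
two_layers = (x @ W1) @ W2

# ... equal one linear layer with the combined weight matrix W1 @ W2.
one_layer = x @ (W1 @ W2)

print(np.allclose(two_layers, one_layer))  # True
```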
The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively.
### Using the model
Let's have a quick look at what this model does to a batch of features:
```
predictions = model(features)
predictions[:5]
```
Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossary#logits) for each class.
To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossary#softmax) function:
```
tf.nn.softmax(predictions[:5])
```
Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions.
```
print("Prediction: {}".format(tf.argmax(predictions, axis=1)))
print(" Labels: {}".format(labels))
```
## Train the model
*[Training](https://developers.google.com/machine-learning/crash-course/glossary#training)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossary#overfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.
The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/#supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/#unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features.
### Define the loss and gradient function
Both training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossary#loss)*. This measures how far off a model's predictions are from the desired label; in other words, how poorly the model is performing. We want to minimize, or optimize, this value.
Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's logits and the desired labels, and returns the average loss across the examples.
```
def loss(model, x, y):
    y_ = model(x)
    return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)


l = loss(model, features, labels)
print("Loss test: {}".format(l))
```
Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/guide/eager).
```
def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
```
### Create an optimizer
An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.
<table>
<tr><td>
<img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%"
alt="Optimization algorithms visualized over time in 3D space.">
</td></tr>
<tr><td align="center">
<b>Figure 3.</b> Optimization algorithms visualized over time in 3D space.<br/>(Source: <a href="http://cs231n.github.io/neural-networks-3/">Stanford class CS231n</a>, MIT License, Image credit: <a href="https://twitter.com/alecrad">Alec Radford</a>)
</td></tr>
</table>
TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results.
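Stripped of TensorFlow, a single SGD step is just `w = w - learning_rate * gradient`. A minimal sketch on the one-dimensional loss L(w) = (w - 3)^2, whose gradient is 2(w - 3):

```
learning_rate = 0.1
w = 0.0  # start far from the minimum at w = 3

for step in range(100):
    grad = 2 * (w - 3)          # dL/dw for L(w) = (w - 3)**2
    w -= learning_rate * grad   # take a step downhill

print(round(w, 4))  # 3.0
```

With a learning rate that is too large the steps overshoot the minimum; too small and convergence is slow, which is why the learning rate is a hyperparameter worth tuning.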
Let's setup the optimizer and the `global_step` counter:
```
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
global_step = tf.Variable(0)
```
We'll use this to calculate a single optimization step:
```
loss_value, grads = grad(model, features, labels)

print("Step: {}, Initial Loss: {}".format(global_step.numpy(),
                                          loss_value.numpy()))

optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step)

print("Step: {}, Loss: {}".format(global_step.numpy(),
                                  loss(model, features, labels).numpy()))
```
### Training loop
With all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:
1. Iterate each *epoch*. An epoch is one pass through the dataset.
2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).
3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.
4. Use an `optimizer` to update the model's variables.
5. Keep track of some stats for visualization.
6. Repeat for each epoch.
The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/#hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation.
```
## Note: Rerunning this cell uses the same model variables
from tensorflow import contrib
tfe = contrib.eager

# keep results for plotting
train_loss_results = []
train_accuracy_results = []

num_epochs = 201

for epoch in range(num_epochs):
    epoch_loss_avg = tfe.metrics.Mean()
    epoch_accuracy = tfe.metrics.Accuracy()

    # Training loop - using batches of 32
    for x, y in train_dataset:
        # Optimize the model
        loss_value, grads = grad(model, x, y)
        optimizer.apply_gradients(zip(grads, model.trainable_variables),
                                  global_step)

        # Track progress
        epoch_loss_avg(loss_value)  # add current batch loss
        # compare predicted label to actual label
        epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y)

    # end epoch
    train_loss_results.append(epoch_loss_avg.result())
    train_accuracy_results.append(epoch_accuracy.result())

    if epoch % 50 == 0:
        print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
                                                                    epoch_loss_avg.result(),
                                                                    epoch_accuracy.result()))
```
### Visualize the loss function over time
While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.
Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up.
```
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results);
```
## Evaluate the model's effectiveness
Now that the model is trained, we can get some statistics on its performance.
*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/#accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct for an accuracy of 80%:
<table cellpadding="8" border="0">
<colgroup>
<col span="4" >
<col span="1" bgcolor="lightblue">
<col span="1" bgcolor="lightgreen">
</colgroup>
<tr bgcolor="lightgray">
<th colspan="4">Example features</th>
<th colspan="1">Label</th>
<th colspan="1" >Model prediction</th>
</tr>
<tr>
<td>5.9</td><td>3.0</td><td>4.3</td><td>1.5</td><td align="center">1</td><td align="center">1</td>
</tr>
<tr>
<td>6.9</td><td>3.1</td><td>5.4</td><td>2.1</td><td align="center">2</td><td align="center">2</td>
</tr>
<tr>
<td>5.1</td><td>3.3</td><td>1.7</td><td>0.5</td><td align="center">0</td><td align="center">0</td>
</tr>
<tr>
<td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align="center">1</td><td align="center" bgcolor="red">2</td>
</tr>
<tr>
<td>5.5</td><td>2.5</td><td>4.0</td><td>1.3</td><td align="center">1</td><td align="center">1</td>
</tr>
<tr><td align="center" colspan="6">
<b>Figure 4.</b> An Iris classifier that is 80% accurate.<br/>
</td></tr>
</table>
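The accuracy in Figure 4 can be reproduced directly: compare the label column with the prediction column and take the fraction of matches.

```
labels      = [1, 2, 0, 1, 1]  # "Label" column of Figure 4
predictions = [1, 2, 0, 2, 1]  # "Model prediction" column of Figure 4

accuracy = sum(l == p for l, p in zip(labels, predictions)) / len(labels)
print(accuracy)  # 0.8
```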
### Setup the test dataset
Evaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossary#test_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.
The setup for the test `Dataset` is similar to the setup for the training `Dataset`: download the CSV text file and parse the values. Unlike the training set, the test data is not shuffled (`shuffle=False`), since the order of examples does not matter for evaluation:
```
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
                                  origin=test_url)

test_dataset = tf.contrib.data.make_csv_dataset(
    test_fp,
    batch_size,
    column_names=column_names,
    label_name='species',
    num_epochs=1,
    shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
```
### Evaluate the model on the test dataset
Unlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/#epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set.
```
test_accuracy = tfe.metrics.Accuracy()

for (x, y) in test_dataset:
    logits = model(x)
    prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
    test_accuracy(prediction, y)

print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
```
We can see on the last batch, for example, the model is usually correct:
```
tf.stack([y,prediction],axis=1)
```
## Use the trained model to make predictions
We've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/#unlabeled_example); that is, on examples that contain features but not a label.
In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:
* `0`: Iris setosa
* `1`: Iris versicolor
* `2`: Iris virginica
```
predict_dataset = tf.convert_to_tensor([
    [5.1, 3.3, 1.7, 0.5,],
    [5.9, 3.0, 4.2, 1.5,],
    [6.9, 3.1, 5.4, 2.1]
])

predictions = model(predict_dataset)

for i, logits in enumerate(predictions):
    class_idx = tf.argmax(logits).numpy()
    p = tf.nn.softmax(logits)[class_idx]
    name = class_names[class_idx]
    print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p))
```
# Assignment 4: Transfer learning
The goal of this assignment is to demonstrate a technique called transfer learning. Transfer learning is a good way to quickly get good performance on the Patch-CAMELYON benchmark.
### Preliminaries
Transfer learning is a technique where, instead of random initialization of the parameters of a model, we use a model that was pre-trained for a different task as the starting point. The two ways by which the pre-trained model can be transferred to the new task are fine-tuning the complete model, or using it as a fixed feature extractor on top of which a new (usually linear) model is trained. For example, we can take a neural network model that was trained on the popular [ImageNet](http://www.image-net.org/) dataset that consists of images of objects (including categories such as "parachute" and "toaster") and apply it to cancer metastases detection.
This technique is explained in more detail in the following [video](https://www.youtube.com/watch?v=yofjFQddwHE) by Andrew Ng:
```
from IPython.display import YouTubeVideo
YouTubeVideo('yofjFQddwHE')
```
TL: remove the last layer and retrain it with randomly initialized weights. More layers can be added, or more of the existing layers can be retrained.
Application of TL: a lot of data is available for the source of the transfer, while little data is available for the target of the transfer (e.g. many annotated photos of dogs, few annotated X-rays).
If you are curious about different pre-training that you can use, you might want to have a look at [this paper]( https://arxiv.org/abs/1810.05444).
### Fine-tuning a pre-trained model
*Note that the code blocks below are only illustrative snippets from* `transfer.py` *and cannot be executed on their own within the notebook.*
An example of fine-tuning a model is given in the `transfer.py` file. This example is very similar to the convolutional neural network example from the third assignment, so we will just highlight the differences.
The Keras library includes quite a few pre-trained models that can be used for transfer learning. The example uses the MobileNetV2 model that is described in detail [here](https://arxiv.org/abs/1801.04381). This architecture is targeted for use on mobile devices. We chose it for this example since it is "lightweight" and it can be relatively efficiently trained even on the CPU.
```
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
```
In addition to the model, we also import the associated preprocessing function that is then used in the generator function instead of the rescale-only preprocessing used in the CNN example:
```
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
```
The code snippet below shows how to initialize the MobileNetV2 model for fine-tuning on the Patch-CAMELYON dataset. Compared to the previous examples that used the Keras Sequential API, this example uses the Keras Functional API.
```
input = Input(input_shape)
# get the pretrained model, cut out the top layer
pretrained = MobileNetV2(input_shape=input_shape, include_top=False, weights='imagenet')
# if the pretrained model is to be used as a feature extractor, and not for
# fine-tuning, the weights of the model can be frozen in the following way:
# for layer in pretrained.layers:
#     layer.trainable = False
output = pretrained(input)
output = GlobalAveragePooling2D()(output)
output = Dropout(0.5)(output)
output = Dense(1, activation='sigmoid')(output)
model = Model(input, output)
# note the lower lr compared to the cnn example
model.compile(SGD(lr=0.001, momentum=0.95), loss = 'binary_crossentropy', metrics=['accuracy'])
```
The architecture of the model is given below. The MobileNetV2 model takes the 96x96x3 images from the Patch-CAMELYON dataset and produces 1280 feature maps of size 3x3. The feature maps are then pooled and connected to the output layer of the model (with a dropout layer in between; see Exercise 3).
```
# _________________________________________________________________
# Layer (type) Output Shape Param #
# =================================================================
# input_1 (InputLayer) (None, 96, 96, 3) 0
# _________________________________________________________________
# mobilenetv2_1.00_96 (Model) (None, 3, 3, 1280) 2257984
# _________________________________________________________________
# global_average_pooling2d_1 ( (None, 1280) 0
# _________________________________________________________________
# dropout_1 (Dropout) (None, 1280) 0
# _________________________________________________________________
# dense_1 (Dense) (None, 1) 1281
# =================================================================
# Total params: 2,259,265
# Trainable params: 2,225,153
# Non-trainable params: 34,112
```
The remainder of the code in `transfer.py` performs training (i.e. fine-tuning) of the model in much the same way as in the CNN example. One difference is that instead of training for a number of full epochs, we define "mini-epochs" that contain around 5% of the training and validation samples. Since the fine-tuning of the model converges fast (you can expect convergence in less than one epoch), this will provide more fine-grained feedback about the performance on the validation set.
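As a back-of-the-envelope check (the numbers here are illustrative; use the size of your own training split), a 5% mini-epoch with a batch size of 32 corresponds to a `steps_per_epoch` of:

```
train_size = 144_000       # illustrative training-set size; check your own split
batch_size = 32
mini_epoch_fraction = 0.05

steps_per_mini_epoch = int(mini_epoch_fraction * train_size / batch_size)
print(steps_per_mini_epoch)  # 225
```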
## Exercise 1
When does transfer learning make sense? Hint: watch the video. Does it make sense to do transfer learning from ImageNet to the Patch-CAMELYON dataset?
<i><b> ANSWER:</b> When only a limited number of images is available in the dataset that you want to model, but a model pre-trained on a much larger dataset is available. Since ImageNet consists of >14 million images and Patch-CAMELYON of 144,000, it would make sense to do transfer learning from ImageNet to Patch-CAMELYON. </i>
## Exercise 2
Run the example in `transfer.py`. Then, modify the code so that the MobileNetV2 model is not initialized from the ImageNet weights, but randomly (you can do that by setting the `weights` parameter to `None`). Analyze the results from both runs and compare them to the CNN example in assignment 3.
<i><b> ANSWER: </b></i>
| Metric (weights = `ImageNet`) | Score |
|---------------------------|-------|
| Loss (model.evaluate) | 0.883 |
| Accuracy (model.evaluate) | 0.608 |
| AUC (model.predict) | 0.826 |
| F1 (model.predict) | 0.376 |
| Accuracy (model.predict) | 0.608 |

| Metric (weights = `None`) | Score |
|---------------------------|-------|
| Loss (model.evaluate) | 0.693 |
| Accuracy (model.evaluate) | 0.5 |
| AUC (model.predict) | 0.554 |
| F1 (model.predict) | 0.0 |
| Accuracy (model.predict) | 0.5 |
## Exercise 3
The model in `transfer.py` uses a dropout layer. How does dropout work and what is the effect of adding dropout layers to the network architecture? What is the observed effect when removing the dropout layer from this model? Hint: check out the Keras documentation for this layer.
<i><b> ANSWER: </b> </i>
| Metric (weights = `ImageNet`, no dropout) | Score |
|---------------------------|-------|
| Loss (model.evaluate) | 2.542 |
| Accuracy (model.evaluate) | 0.500 |
| AUC (model.predict) | 0.704 |
| F1 (model.predict) | 0.001 |
| Accuracy (model.predict) | 0.500 |
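For intuition on the mechanism behind these numbers, dropout can be sketched in a few lines of NumPy. This is the "inverted dropout" formulation that Keras uses internally: during training each unit is zeroed with probability `rate` and the survivors are scaled up by `1/(1-rate)`, while at inference the layer is the identity. The function below is an illustrative sketch, not the Keras implementation.

```python
import numpy as np

def dropout(x, rate, training=True, rng=None):
    # Zero each unit with probability `rate` during training and scale the
    # survivors by 1/(1-rate) so the expected activation stays the same.
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 5))
print(dropout(x, rate=0.5))                  # entries are either 0.0 or 2.0
print(dropout(x, rate=0.5, training=False))  # identity at inference
```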
## Submission checklist
- Exercise 1: Answers to the questions
- Exercise 2: Answers to the questions and code
- Exercise 3: Answers to the questions
### Before you start working on the main project...
As mentioned before, transfer learning is a good way to quickly get good performance on the Patch-CAMELYON benchmark. Note, however, that this is not the objective of the course. One of the main objectives is for the students to get "insight in setting up a research question that can be quantitatively investigated". While it would certainly be nice to score high on the challenge leaderboard, it is much more important to ask a good research question and properly investigate it. You are free to choose what you want to investigate and the course instructors can give you feedback.
# Python Basics
## Variables
Python variables are dynamically typed, i.e. no datatype declaration is required to define a variable
```
x=10 # static allocation
print(x) # to print a variable
```
Sometimes variables are allocated dynamically at runtime from user input. Python not only creates a new variable on demand, it also assigns the corresponding type.
```
y=input('Enter something : ')
print(y)
```
To check the type of a variable, use the `type()` function.
```
type(x)
type(y)
```
Any input given in Python is by default of string type. You may use different __typecasting__ constructors to change it.
```
# without typecasting
y=input('Enter a number... ')
print(f'type of y is {type(y)}') # string formatting
# with typecasting into integer
y=int(input('Enter a number... '))
print(f'type of y is {type(y)}') # string formatting
```
How to check if an existing variable is of a given type?
```
x=10
print(isinstance(x,int))
print(isinstance(x,float))
```
## Control Flow Structure
### if-else-elif
```
name = input ('Enter name...')
age = int(input('Enter age... '))
if age in range(0,150):
if age < 18 :
print(f'{name} is a minor')
elif age >= 18 and age < 60:
print(f'{name} is a young person')
else:
print(f'{name} is an elderly person')
else:
print('Invalid age')
```
### For-loop
```
print('Table Generator\n************************')
num = int(input('Enter a number... '))
for i in range(1,11):
print(f'{num} x {i} \t = {num*i}')
```
### While-loop
```
print('Table Generator\n************************')
num = int(input('Enter a number...'))
i = 1
while i<11:
print(f'{num} x {i} \t = {num * i}')
i += 1
```
## Primitive Data-structures
### List
A list is a mutable, ordered collection of heterogeneous data in Python
```
lst = [1,2,'a','b'] #creating a list
lst
type(lst)
loc=2
print(f'item at location {loc} is {lst[loc]}') # reading item by location
lst[2]='abc' # updating an item in a list
lst
lst.insert(2,'a') # inserting into a specific location
lst
lst.pop(2) # deleting from a specific location
lst
len(lst) # length of a list
lst.reverse() # reversing a list
lst
test_list=[1,5,7,8,10] # sorting a list
test_list.sort(reverse=False)
test_list
```
### Set
```
P = {2,3,5,7} # Set of single digit prime numbers
O = {1,3,5,7} # Set of single digit odd numbers
E = {0,2,4,6,8} # Set of single digit even numbers
type(P)
P.union(O) # odd or prime
P.intersection(E) # even and prime
P-E # Prime but not even
```
Finding distinct numbers from a list of numbers by typecasting into a set
```
lst = [1,2,4,5,6,2,1,4,5,6,1]
print(lst)
lst = list(set(lst)) # List --> Set --> List
print(lst)
```
### Dictionary
A collection of key-value pairs, i.e. values are indexed by alphanumeric indices called keys rather than by position.
```
import random as rnd
test_d = {
'name' : 'Something', # key : value
'age' : rnd.randint(18,60),
'marks' : {
'Physics' : rnd.randint(0,100),
'Chemistry' : rnd.randint(0,100),
'Mathematics' : rnd.randint(0,100),
'Biology' : rnd.randint(0,100),
}
}
test_d
```
A list of dictionaries forms a tabular structure: each key becomes a column, and the corresponding value fills that specific row at that column.
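That mapping can be demonstrated in a few lines of plain Python (with pandas, `pd.DataFrame(list_of_dicts)` would do the same in one call; the record values below are made up for illustration):

```python
records = [
    {'name': 'abc', 'age': 21, 'marks': 88},
    {'name': 'xyz', 'age': 23, 'marks': 74},
]

columns = list(records[0].keys())                  # keys become column headers
rows = [[r[c] for c in columns] for r in records]  # each dict becomes one row

print(columns)  # ['name', 'age', 'marks']
for row in rows:
    print(row)
```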
```
test_d['marks'] # reading a value by key
test_d['name'] = 'anything' # updating a value by its key
test_d
for k in test_d.keys(): # reading values iteratively by its key
print(f'value at key {k} is {test_d[k]} of type {type(test_d[k])}')
```
### Tuples
Immutable, ordered collection of heterogeneous data.
```
tup1 = ('a',1,2)
tup1
type(tup1)
tup1[1] # reading from index
# tup1[1] = 3 # immutable collection: item assignment raises TypeError
lst1 = list(tup1) #typecast into list
lst1
```
## Serialization
### Theory
Computer networks are defined as a collection of interconnected autonomous systems. The connections (edges) between network devices (nodes) are described by the topology, which is modeled with graph-theoretic principles, and the computing models, i.e. algorithms, are designed based on distributed-systems theory. The connections are inherently FIFO (sequential) in nature, so they cannot carry non-linear data structures directly. However, during RPC communication, limiting procedures to linear structures is not realistic, especially when using objects, as objects are stored in the memory heap. Therefore, data stored in a non-linear data structure must be converted into a linear format (a byte stream) before transmission, in such a way that the receiver can reconstruct the source data structure and retrieve the original data. This transformation is called serialization. All modern programming languages, such as Java and Python, support serialization.
### Serializing primitive ADTs
```
test_d = {
'name' : 'Something', # key : value
'age' : rnd.randint(18,60),
'marks' : {
'Physics' : rnd.randint(0,100),
'Chemistry' : rnd.randint(0,100),
'Mathematics' : rnd.randint(0,100),
'Biology' : rnd.randint(0,100),
},
'optionals' : ['music', 'Mechanics']
}
test_d
import json # default serialization library commonly used in RESTFul APIs
# Step 1
ser_dat = json.dumps(test_d) # Serialization
print(ser_dat)
print(type(ser_dat))
# Step 2
bs_data = ser_dat.encode() # Encoding into ByteStream
print(bs_data)
print(type(bs_data))
# Step 3
ser_data2 = bytes.decode(bs_data) # Decoding strings from ByteStream
print(ser_data2)
print(type(ser_data2))
# Step 4
json.loads(ser_data2) # Deserializing
```
### Serializing Objects
```
class MyClass: # defining a class
    # member variables (name, age) are created in the constructor
    def __init__(self, name, age): # __init__() = constructor
        self.name = name # 'self' is like 'this' in Java
        self.age = age
    def get_info(self): # returns a dictionary
        return {'name': self.name, 'age': self.age}
obj1 = MyClass('abc',20) # creates an object
obj1.get_info() #invoke functions from object
# json.dumps(obj1) # raises TypeError: custom objects are not JSON serializable
import pickle as pkl # pickle library is used to serialize objects
bs_data = pkl.dumps(obj1) # serialization + encoding
print(bs_data)
print(type(bs_data))
obj2 = pkl.loads(bs_data) # Decoding + Deserialization
obj2.get_info()
```
# Interfacing with the Operating System
In this section we will discuss various methods a Python script may use to interface with an operating system. We'll first understand local interfacing, i.e. the script runs on top of the OS. Later, we'll see how it communicates with a remote computer using networking protocols such as Telnet and SSH.
## Local interfacing
```
import os
cmd = 'dir *.exe' # command to be executed
for i in os.popen(cmd).readlines():
print(i)
```
To run a command without any outputs
```
import os
# write a batch of commands
cmds = ['md test_dir' ,
'cd test_dir' ,
'fsutil file createnew test1.txt 0',
'fsutil file createnew test2.txt 0',
'fsutil file createnew test3.txt 0',
'cd..'
]
# call commands from the batch
for c in cmds:
os.system(c)
# verify
for i in os.popen('dir test*.txt').readlines():
print(i)
```
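As a side note, the modern standard-library alternative to `os.system`/`os.popen` is `subprocess.run`, which exposes the return code, stdout and stderr explicitly. The sketch below runs the Python interpreter itself so that it works on both Windows and Linux; the Windows commands above could be passed the same way with `shell=True`.

```python
import subprocess
import sys

# Run a command and capture its output (here: the current interpreter,
# so the example is portable across operating systems).
result = subprocess.run(
    [sys.executable, "-c", "print('hello from a subprocess')"],
    capture_output=True, text=True, check=True,
)
print(result.returncode)      # 0 on success
print(result.stdout.strip())  # hello from a subprocess
```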
## Remote Interfacing
* Install Telnet daemon on the Linux host : `sudo apt -y install telnetd`
* Verify installation using : `nmap localhost`
```
import telnetlib as tn
import getpass
host = '192.168.1.84'
user = input("Enter your remote account: ")
password = getpass.getpass()
tn_session = tn.Telnet(host)
tn_session.read_until(b"login: ")
tn_session.write(user.encode('ascii') + b"\n")
if password:
tn_session.read_until(b"Password: ")
tn_session.write(password.encode('ascii') + b"\n")
tn_session.write(b"ls\n")
print(tn_session.read_all().decode('ascii'))
```

Remote config with SSH (Secure Communication)
```
import paramiko
import getpass
host = input('Enter host IP')
port = 22
username = input("Enter your remote account: ")
password = getpass.getpass()
command = "ls"
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin, stdout, stderr = ssh.exec_command(command)
for l in stdout.readlines():
print(l)
```

# Home Tasks
1. Write a python API that runs shell scripts on demand. The shell scripts must be present on the system. The API must take the name of the script as input and display output from the script. Create at least 3 shell scripts of your choice to demonstrate.
2. Write a python API that automatically calls DHCP request for dynamic IP allocation on a given interface, if it doesn't have any IP address.
3. Write a python API that organises files.
* The API first takes a directory as input on which it will run the organization
* Thereafter, it asks for a list of pairs (filetype, destination_folder).
* For example, [('mp3','music'),('png','images'),('jpg','images'),('mov','videos')] means all '.mp3' files will be moved to the 'music' directory, and likewise for images and videos. In case the directories do not exist, the API must create them.
4. Write a python API that remotely monitors number of processes running on a system over a given period.
# Course Suggestion
https://www.linkedin.com/learning/python-essential-training-2/
<a href="https://colab.research.google.com/github/furkanonat/DS-Unit-2-Applied-Modeling/blob/master/module3-permutation-boosting/Furkan_Onat_LS_DS_233_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import os
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('freMTPL2freq.csv')
df.head()
df.isnull().sum()
import sys
!{sys.executable} -m pip install pandas-profiling
from pandas_profiling import ProfileReport
profile = ProfileReport(df, minimal=True)
profile.to_notebook_iframe()
# Adding a feature for annualized claim frequency
df['Frequency'] = df['ClaimNb'] /df['Exposure']
df.head()
df['Frequency'].value_counts(normalize=True)
df['Frequency'].nunique()
df.describe()
df['Exposure'].value_counts()
df['ClaimNb'].value_counts()
df.dtypes
df.nunique()
```
#### Model 1
Target = ClaimNb
Model = DecisionTree Classifier
Evaluation Metric = Validation Accuracy
Description = Make ClaimNb a 3-class feature; added Frequency feature
```
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
df_model_1 = df.copy()
df_model_1['ClaimNb'].value_counts(normalize=True)
df_model_1['ClaimNb'].value_counts()
# I will create a new column for number of claims per policy.
df_model_1['ClaimNb_Adj'] = df_model_1['ClaimNb']
df_model_1.head()
# I modify the new 'ClaimNb' column to have just 3 classes : 'no claim', 'once', 'more than once'.
df_model_1['ClaimNb_Adj'] = df_model_1['ClaimNb_Adj'].replace({0: 'no claim', 1: 'once', 2: 'more than once', 3: 'more than once', 4: 'more than once', 11: 'more than once', 5: 'more than once', 16: 'more than once', 9: 'more than once', 8: 'more than once', 6: 'more than once'})
df_model_1.head()
# I will use "ClaimNb_Adj" feature as the target for the model
y = df_model_1['ClaimNb_Adj']
# Baseline for the majority class
df_model_1['ClaimNb_Adj'].value_counts(normalize=True)
df_model_1.dtypes
# Split for test and train
train, test = train_test_split(df_model_1, train_size=0.80, test_size=0.20, stratify=df_model_1['ClaimNb_Adj'], random_state=42)
train.shape, test.shape
# Split for train and val
train, val = train_test_split(train, train_size = 0.80, test_size=0.20, stratify=train['ClaimNb_Adj'], random_state=42)
train.shape, val.shape
def wrangle(X):
# Drop IDpol since it doesn't have any explanatory power
# Drop ClaimNb and Frequency as they are a function of our target.
column_drop = ['IDpol','ClaimNb', 'Frequency']
X = X.drop(columns=column_drop)
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
train.head()
# Arranging features matrix and y target vector
target = 'ClaimNb_Adj'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test.drop(columns=target)
y_test = test[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
DecisionTreeClassifier(max_depth = 3)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
import graphviz
from sklearn.tree import export_graphviz
tree = pipeline.named_steps['decisiontreeclassifier']
dot_data = export_graphviz(tree,
out_file=None,
feature_names=X_train.columns,
class_names=y_train.unique().astype(str),
filled=True,
impurity=False,
proportion=True
)
graphviz.Source(dot_data)
y.value_counts(normalize=True)
# Getting feature importances
rf = pipeline.named_steps['decisiontreeclassifier']
importances = pd.Series(rf.feature_importances_,X_train.columns)
# plot feature importances
%matplotlib inline
n=11
plt.figure(figsize=(5,n))
plt.title("Feature Importances")
importances.sort_values()[-n:].plot.barh(color='black');
importances.sort_values(ascending=False)
# Predict on Test
y_pred = pipeline.predict(X_test)
y_pred.shape, y_test.shape
print('Train Accuracy', pipeline.score(X_train, y_train))
print('Validation Accuracy', pipeline.score(X_val, y_val))
```
Answer to assignment question: the validation accuracy of the Decision Tree Classifier narrowly beats the baseline, as the majority class had a frequency of 94.9765%.
```
from sklearn.metrics import accuracy_score
# print the accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy : %.4f%%" % (accuracy * 100.0))
from sklearn.metrics import confusion_matrix
cnf_matrix = confusion_matrix(y_test, y_pred)
print('Confusion matrix:\n', cnf_matrix)
# Explanatory graph: Confusion Matrix
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
# import the metric
from sklearn.metrics import classification_report
# print classification report
print("Classification Report:\n\n", classification_report(y_test, y_pred))
```
### Getting my model's permutation importances
```
transformers = make_pipeline(ce.OrdinalEncoder(),
SimpleImputer(strategy='median'))
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
model=DecisionTreeClassifier()
model.fit(X_train_transformed, y_train)
import eli5
from eli5.sklearn import PermutationImportance
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=5,
random_state=42)
permuter.fit(X_val_transformed, y_val)
feature_names= X_val.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values(ascending=False)
eli5.show_weights(permuter, top=None, feature_names=feature_names)
```
### Model 2: Xgboost
```
from xgboost import XGBClassifier
pipeline = make_pipeline(ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1))
pipeline.fit(X_train, y_train)
# Validation accuracy
y_pred = pipeline.predict(X_val)
print('Validation Accuracy is ', accuracy_score(y_val,y_pred))
# Validation accuracy of Decision Tree Classifier is 0.9497704688335392
# which is same with Xgboost model's validation accuracy.
```
Unsupervised learning means a lack of labels: we are looking for structure in the data, without having an *a priori* intuition what that structure might be. A great example is clustering, where the goal is to identify instances that clump together in some high-dimensional space. Unsupervised learning is in general a harder problem. Deep learning revolutionized supervised learning and it has made significant advances in unsupervised learning, but there remains plenty of room for improvement. In this notebook, we look at how we can map an unsupervised learning problem to graph optimization, which in turn we can solve on a quantum computer.
# Mapping clustering to discrete optimization
Assume that we have some points $\{x_i\}_{i=1}^N$ lying in some high-dimensional space $\mathbb{R}^d$. How do we tell which ones are close to one another and which ones are distant? To get some intuition, let's generate a simple dataset with two distinct classes: the first half of the instances will belong to class 1, and the second half to class 2:
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
n_instances = 4
class_1 = np.random.rand(n_instances//2, 3)/5
class_2 = (0.6, 0.1, 0.05) + np.random.rand(n_instances//2, 3)/5
data = np.concatenate((class_1, class_2))
colors = ["red"] * (n_instances//2) + ["green"] * (n_instances//2)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d', xticks=[], yticks=[], zticks=[])
ax.scatter(data[:, 0], data[:, 1], data[:, 2], c=colors)
```
The high-dimensional space is endowed with some measure of distance, the Euclidean distance being the simplest case. We can calculate all pairwise distances between the data points:
```
import itertools
w = np.zeros((n_instances, n_instances))
for i, j in itertools.product(*[range(n_instances)]*2):
w[i, j] = np.linalg.norm(data[i]-data[j])
```
This matrix is sometimes called the Gram or the kernel matrix. The Gram matrix contains a fair bit of information about the topology of the points in the high-dimensional space, but it is not easy to see. We can think of the Gram matrix as the weighted adjacency matrix of a graph: two nodes represent two data instances. Their distance as contained in the Gram matrix is the weight on the edge that connects them. If the distance is zero, they are not connected by an edge. In general, this is a dense graph with many edges -- sparsity can be improved by a distance function that gets exponentially smaller.
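As an illustration of that last remark, a Gaussian (RBF) kernel $\exp(-w_{ij}^2/2\sigma^2)$ shrinks large distances toward zero exponentially fast, so distant pairs effectively lose their edges. The points and the bandwidth $\sigma$ below are illustrative choices:

```python
import numpy as np

# Pairwise Euclidean distances for a few points (as in the Gram matrix above)
data = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
diff = data[:, None, :] - data[None, :, :]
w = np.linalg.norm(diff, axis=-1)

sigma = 0.2  # free bandwidth parameter (assumed value)
k = np.exp(-w**2 / (2 * sigma**2))  # distant pairs are driven toward 0
print(np.round(k, 3))
```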
What can we do with this graph to find the clusters? We could look for the max-cut, that is, the collection of edges that would split the graph in exactly two if removed, while maximizing the total weight of these edges [[1](#1)]. This is a well-known NP-hard problem, but it also maps very naturally to an Ising model.
The spin variables $\sigma_i \in \{-1, +1\}$ take on value $\sigma_i = +1$ if a data instance is in cluster 1 (nodes $V_1$ in the graph), and $\sigma_i = -1$ if the data instance is in cluster 2 (nodes $V_2$ in the graph). The cost of a cut is
$$
\sum_{i\in V_1, j\in V_2} w_{ij}
$$
Let us assume a fully connected graph. Then, accounting for the symmetry of the adjacency matrix, we can expand this as
$$
\frac{1}{4}\sum_{i, j} w_{ij} - \frac{1}{4} \sum_{i, j} w_{ij} \sigma_i \sigma_j
$$
$$
= \frac{1}{4}\sum_{i, j\in V} w_{ij} (1- \sigma_i \sigma_j).
$$
By taking the negative of this, we can directly solve the problem by a quantum optimizer.
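The equivalence above is easy to verify numerically: for any spin assignment, $\frac{1}{4}\sum_{i,j} w_{ij}(1-\sigma_i \sigma_j)$ equals the total weight of the edges crossing the cut. A small NumPy check with an arbitrary symmetric weight matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
w = rng.random((n, n))
w = (w + w.T) / 2            # symmetric weights
np.fill_diagonal(w, 0.0)

sigma = np.array([1, 1, -1, -1, 1])  # an arbitrary cut assignment

# Ising form of the cut cost
ising_cost = 0.25 * np.sum(w * (1 - np.outer(sigma, sigma)))

# Direct cut weight: sum w_ij over pairs on opposite sides (each pair once)
direct = sum(w[i, j] for i in range(n) for j in range(i + 1, n)
             if sigma[i] != sigma[j])

print(np.isclose(ising_cost, direct))  # True
```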
# Solving the max-cut problem by QAOA
Most quantum computing frameworks have convenience functions defined for common graph optimization algorithms, and max-cut is a staple. This reduces our task to importing the relevant functions:
```
from qiskit import Aer
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import COBYLA
from qiskit.optimization.applications.ising import max_cut
from qiskit.optimization.applications.ising.common import sample_most_likely
```
Setting $p=1$ in the QAOA algorithm, we can initialize it with the max-cut problem.
```
qubit_operators, offset = max_cut.get_operator(w)
p = 1
optimizer = COBYLA()
qaoa = QAOA(qubit_operators, optimizer, p)
```
Here the choice of the classical optimizer `COBYLA` was arbitrary. Let us run this and analyze the solution. This can take a while on a classical simulator.
```
backend = Aer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, shots=1)
result = qaoa.run(quantum_instance)
x = sample_most_likely(result['eigenstate'])
graph_solution = max_cut.get_graph_solution(x)
print('energy:', result['eigenvalue'])
print('maxcut objective:', result['eigenvalue'] + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
```
Looking at the solution, the cut matches the clustering structure.
# Solving the max-cut problem by annealing
Naturally, the same problem can be solved on an annealer. Our only task is to translate the couplings and the on-site fields to match the programming interface:
```
import dimod
J, h = {}, {}
for i in range(n_instances):
h[i] = 0
for j in range(i+1, n_instances):
J[(i, j)] = w[i, j]
model = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)
sampler = dimod.SimulatedAnnealingSampler()
response = sampler.sample(model, num_reads=10)
print("Energy of samples:")
for solution in response.data():
print("Energy:", solution.energy, "Sample:", solution.sample)
```
If you look at the first sample, you will see that the first half of the data instances belong to the same graph partition, matching the actual cluster.
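Extracting the two clusters from a sample is a one-liner per cluster; the helper below assumes a sample in the dict form that dimod returns (e.g. via `response.first.sample`), with illustrative spin values:

```python
def partition_from_sample(sample):
    """Split node indices into two clusters based on spin values (+1 / -1)."""
    cluster_1 = [i for i, s in sample.items() if s == 1]
    cluster_2 = [i for i, s in sample.items() if s == -1]
    return cluster_1, cluster_2

# Example: a sample as dimod would return it (illustrative values)
best = {0: 1, 1: 1, 2: -1, 3: -1}
print(partition_from_sample(best))  # ([0, 1], [2, 3])
```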
# References
[1] Otterbach, J. S., Manenti, R., Alidoust, N., Bestwick, A., Block, M., Bloom, B., Caldwell, S., Didier, N., Fried, E. Schuyler, Hong, S., Karalekas, P., Osborn, C. B., Papageorge, A., Peterson, E. C., Prawiroatmodjo, G., Rubin, N., Ryan, Colm A., Scarabelli, D., Scheer, M., Sete, E. A., Sivarajah, P., Smith, Robert S., Staley, A., Tezak, N., Zeng, W. J., Hudson, A., Johnson, Blake R., Reagor, M., Silva, M. P. da, Rigetti, C. (2017). [Unsupervised Machine Learning on a Hybrid Quantum Computer](https://arxiv.org/abs/1712.05771). *arXiv:1712.05771*. <a id='1'></a>
```
%tensorflow_version 1.x
!pip install -q h5py==2.10.0
from scipy.io import loadmat
from scipy import stats
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
from scipy import stats
import tensorflow as tf
#import tensorflow.compat.v1 as tf1
#tf1.disable_v2_behavior()
import seaborn as sns
from pylab import rcParams
from sklearn import metrics
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
from numpy import dstack
from pandas import read_csv
from sklearn.svm import SVC
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import LSTM
from keras.layers import TimeDistributed
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers import ConvLSTM2D
# load a single file as a numpy array
def load_file(filepath):
df = read_csv(filepath)
# convert activity names to number
df.activity = pd.factorize(df.activity)[0]
# select mean and var features
x1 = df.iloc[:,3:9].values
x2 = df.iloc[:,21:27].values
x = np.append(x1,x2,axis=1)
y = df.activity.values
print(x.shape, y.shape)
return x, y
# load the dataset, returns train and test X and y elements
def load_dataset(prefix=''):
# load all train
x, y = load_file(prefix + '/sensoringData_feature_prepared_20_19.0_2'+'.csv')
# transform for LSTM
N_TIME_STEPS = 20
step = 15 # faster with bigger step but accuracy degrades fast
X = []
Y = []
num = y.shape[0]
for i in range(0, num - N_TIME_STEPS, step):
part = x[i: i + N_TIME_STEPS]
label = stats.mode(y[i: i + N_TIME_STEPS])[0][0]
X.append(part)
Y.append(label)
trainX, testX, trainy, testy = train_test_split(np.array(X), np.array(Y), test_size=0.2, random_state=42)
print(trainX.shape, trainy.shape, testX.shape, testy.shape)
return trainX, trainy, testX, testy
# run an experiment
# load data
trainX1, trainy1, testX1, testy1 = load_dataset('drive/MyDrive/Thesis/Test Data/HAR Spain')
# load a single file as a numpy array
def load_file1(filepath):
dataframe = read_csv(filepath, header=None, delim_whitespace=True)
return dataframe.values
# load a list of files and return as a 3d numpy array
def load_group1(filenames, prefix=''):
loaded = list()
for name in filenames:
data = load_file1(prefix + name)
loaded.append(data)
# stack group so that features are the 3rd dimension
loaded = dstack(loaded)
return loaded
# load a dataset group, such as train or test
def load_dataset_group1(group, prefix=''):
filepath = prefix + group + '/Inertial Signals/'
# load all 9 files as a single array
filenames = list()
# body acceleration
filenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']
# body gyroscope
filenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']
# load input data
X = load_group1(filenames, filepath)
# load class output
y = load_file1(prefix + group + '/y_'+group+'.txt')
return X, y
def extract_features(x, y, window, step):
num = x.shape[0]
X = []
Y = []
for i in range(0, num - window, step):
part = np.append(np.mean(x[i:i + window], axis=0),np.var(x[i: i + window], axis=0), axis=1)
part = np.mean(part,axis=0)
label = stats.mode(y[i: i + window])[0][0]
X.append(part)
Y.append(label)
return np.array(X), np.array(Y)
# load the dataset, returns train and test X and y elements
def load_dataset1(prefix=''):
# load all train
trainX, trainy = load_dataset_group1('train', prefix + 'HARDataset/')
print(trainX.shape, trainy.shape)
# load all test
testX, testy = load_dataset_group1('test', prefix + 'HARDataset/')
print(testX.shape, testy.shape)
# zero-offset class values
trainy = trainy - 1
testy = testy - 1
window = 10
# calculate mean and var
window = 10
step = 1
x, y = extract_features(np.append(trainX,testX,axis=0), np.append(trainy,testy,axis=0), window, step)
ind = np.argsort( y[:,0] )
x = x[ind]
y = y[ind]
x = x[1832:4692]
y = y[1832:4692]
x[:, [2, 1,5,4]] = x[:, [1, 2,4,5]]
# transform for LSTM
N_TIME_STEPS = 20
step = 1 # faster with bigger step but accuracy degrades fast
X = []
Y = []
num = y.shape[0]
for i in range(0, num - N_TIME_STEPS, step):
part = x[i: i + N_TIME_STEPS]
label = stats.mode(y[i: i + N_TIME_STEPS])[0][0]
X.append(part)
Y.append(label)
trainX, testX, trainy, testy = train_test_split(np.array(X), np.array(Y), test_size=0.2, random_state=42)
testy = testy.reshape(-1)
trainy = trainy.reshape(-1)
print(trainX.shape, trainy.shape, testX.shape, testy.shape)
# offset labels: upstairs becomes 4 and downstairs 5
trainy = trainy + 3
testy = testy + 3
return trainX, trainy, testX, testy
# load data
trainX2, trainy2, testX2, testy2 = load_dataset1('drive/MyDrive/Thesis/Test Data/')
# append train data
trainX = np.append(trainX1,trainX2,axis=0)
testX = np.append(testX1,testX2,axis=0)
trainy = np.append(trainy1,trainy2,axis=0)
testy = np.append(testy1,testy2,axis=0)
print(trainX.shape, trainy.shape, testX.shape, testy.shape)
#def classify_svm(x_train, y_train, x_test):
# # train label SVM
# clf = SVC(kernel='rbf', class_weight='balanced', C=1e3, gamma=0.1)
# clf = clf.fit(x_train, y_train)
# predict using svm
# y_pred = clf.predict(x_test)
# return y_pred
#y_pred = classify_svm(testX, testy, testX)
#from sklearn.metrics import accuracy_score
#accuracy_score(testy, y_pred)
# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
# define model
verbose, epochs, batch_size = 1, 10, 64
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
# reshape into subsequences (samples, time steps, rows, cols, channels)
n_steps, n_length = 2, 10
trainX = trainX.reshape((trainX.shape[0], n_steps, 1, n_length, n_features))
testX = testX.reshape((testX.shape[0], n_steps, 1, n_length, n_features))
# define model
model = Sequential()
model.add(ConvLSTM2D(filters=64, kernel_size=(1,3), activation='relu', input_shape=(n_steps, 1, n_length, n_features)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
history = model.fit(trainX, trainy, validation_split=0.33, epochs=epochs, batch_size=batch_size, verbose=verbose)
# evaluate model
_, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
return model,history,accuracy
# repeat experiment
# one hot encode y
trainy = to_categorical(trainy)
testy = to_categorical(testy)
model,model_history,score = evaluate_model(trainX, trainy, testX, testy)
score = score * 100.0
print('>#%d: %.3f' % (1, score))
# reshape data into time steps of sub-sequences
n_features, n_steps, n_length = trainX.shape[2], 2, 10
trainX = trainX.reshape((trainX.shape[0], n_steps, 1, n_length, n_features))
testX = testX.reshape((testX.shape[0], n_steps, 1, n_length, n_features))
X = np.append(trainX, testX, axis=0)
Y = np.append(trainy, testy, axis=0)
Y_cat = np.argmax(Y, axis=1)
ind = np.argsort( Y_cat[:] )
X = X[ind]
Y = Y[ind]
Y_pred = model.predict(np.array(X))
np.shape(Y_pred)
print(model_history.history.keys())
plt.figure(figsize=(12, 8))
plt.plot(np.array(model_history.history['loss']), "r-", label="Train loss")
plt.plot(np.array(model_history.history['accuracy']), "g-", label="Train accuracy")
plt.plot(np.array(model_history.history['val_loss']), "r--", label="Valid loss")
plt.plot(np.array(model_history.history['val_accuracy']), "g--", label="Valid accuracy")
plt.title("Training session's progress over iterations")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training Epoch')
plt.ylim(0)
plt.show()
plt.figure(figsize=(12, 8))
plt.plot(np.array(Y[:,3]), linewidth=5.0, label="True")
plt.plot(np.array(Y_pred[:,3]), '--', linewidth=0.20, label="Predict")
plt.title("Confidence of Driving Detection")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Confidence (%)')
plt.xlabel('Training Epoch')
plt.ylim(0)
plt.show()
N = 50
y_f = np.convolve(np.array(Y_pred[:,3]), np.ones(N)/N, mode='valid')
plt.figure(figsize=(12, 8))
plt.plot(np.array(Y[:,3]), linewidth=5.0, label="True")
plt.plot(y_f, '--', linewidth=2.0, label="Predict")
plt.title("Confidence of Driving Detection")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Confidence (%)')
plt.xlabel('Training Epoch')
plt.ylim(0)
plt.show()
LABELS = ['Walking','Inactive','Active','Driving']
Y_cat = np.argmax(Y, axis=1)
Y_pred_cat = np.argmax(Y_pred, axis=1)
confusion_matrix = metrics.confusion_matrix(Y_cat, Y_pred_cat)
plt.figure(figsize=(10, 8))
sns.heatmap(confusion_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt="d",cmap="Blues");
plt.title("Confusion matrix")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show();
from sklearn.metrics import accuracy_score
accuracy_score(Y_cat, Y_pred_cat)
def visualize_activity_recognition(t, label_true, label_pred_mode, label_classes,name):
plt.figure(figsize=(10, 8))
plt.title("Activity recognition {}".format(name))
plt.plot(t, label_true, linewidth=5.0)
plt.plot(t.reshape(-1), label_pred_mode.reshape(-1), '--', linewidth=0.10)
plt.yticks(np.arange(len(label_classes)), label_classes)
plt.xlabel("time (s)")
plt.ylabel("Activity")
plt.legend(["True", "Predict"])
plt.show()
num = len(Y_cat)
t = np.arange(0,num*1,1)
visualize_activity_recognition(t, Y_cat, Y_pred_cat, LABELS,"LSTM")
model.save('drive/MyDrive/Thesis/Test Data/HAR Spain/Driving_model')
pickle.dump(model_history.history, open("drive/MyDrive/Thesis/Test Data/HAR Spain/Driving_model_history", "wb"))
print("Saved model to disk")
```
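The cell above smooths the per-sample driving confidence `Y_pred[:,3]` with an N-point moving average via `np.convolve`. A minimal standalone sketch of that smoothing step, on toy data rather than the model outputs:

```python
import numpy as np

# N-point moving average: convolve with a uniform kernel of weight 1/N.
# mode='valid' keeps only fully-overlapping windows, so the output has
# len(x) - N + 1 samples (which is why the smoothed curve is shorter
# than the raw prediction series).
N = 3
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
smoothed = np.convolve(x, np.ones(N) / N, mode='valid')
print(smoothed)  # [1. 2. 3.]
```

Larger N gives a smoother curve at the cost of more samples lost at the edges.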
| github_jupyter |
```
import scipy.io as io
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
#Set up parameters for figure display
params = {'legend.fontsize': 'x-large',
'figure.figsize': (8, 10),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'axes.labelweight': 'bold',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large'}
pylab.rcParams.update(params)
pylab.rcParams["font.family"] = "serif"
pylab.rcParams["font.weight"] = "heavy"
#Load the hori data from some samples..
mat_hori = io.loadmat('/work/imagingQ/SpatialAttention_Drowsiness/Jagannathan_Neuroimage2018/'
'Scripts/mat_files/horigraphics.mat')
data_hori = mat_hori['Hori_graphics']
#take the data for different scales..
y_hori1 = data_hori[0,]
y_hori2 = data_hori[3,]
y_hori3 = data_hori[6,]
y_hori4 = data_hori[9,]
y_hori5 = data_hori[12,]
y_hori6 = data_hori[13,]
y_hori7 = data_hori[15,]
y_hori8 = data_hori[18,]
y_hori9 = data_hori[21,]
y_hori10 = data_hori[23,]
#Set the bolding range..
x = list(range(0, 1001))
bold_hori1a = slice(0, 500)
bold_hori1b = slice(500, 1000)
bold_hori2a = slice(50, 460)
bold_hori2b = slice(625, 835)
bold_hori3a = slice(825, 1000)
bold_hori4a = slice(0, 1000)
bold_hori6a = slice(800, 875)
bold_hori7a = slice(200, 250)
bold_hori7b = slice(280, 350)
bold_hori7c = slice(450, 525)
bold_hori7d = slice(550, 620)
bold_hori7e = slice(750, 800)
bold_hori8a = slice(650, 750)
bold_hori8b = slice(750, 795)
bold_hori9a = slice(200, 325)
bold_hori10a = slice(720, 855)
#Set the main figure of the Hori scale..
plt.style.use('ggplot')
ax1 = plt.subplot2grid((60, 1), (0, 0), rowspan=6)
ax2 = plt.subplot2grid((60, 1), (6, 0), rowspan=6)
ax3 = plt.subplot2grid((60, 1), (12, 0), rowspan=6)
ax4 = plt.subplot2grid((60, 1), (18, 0), rowspan=6)
ax5 = plt.subplot2grid((60, 1), (24, 0), rowspan=6)
ax6 = plt.subplot2grid((60, 1), (30, 0), rowspan=6)
ax7 = plt.subplot2grid((60, 1), (36, 0), rowspan=6)
ax8 = plt.subplot2grid((60, 1), (42, 0), rowspan=6)
ax9 = plt.subplot2grid((60, 1), (48, 0), rowspan=6)
ax10 = plt.subplot2grid((60, 1), (54, 0), rowspan=6)
plt.setp(ax1, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax2, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax3, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax4, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax5, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax6, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax7, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax8, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax9, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.setp(ax10, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])
plt.subplots_adjust(wspace=0, hspace=0)
ax1.plot(x, y_hori1, 'k-', alpha=0.5, linewidth=2.0)
ax1.plot(x[bold_hori1a], y_hori1[bold_hori1a], 'b-', alpha=0.75)
ax1.plot(x[bold_hori1b], y_hori1[bold_hori1b], 'b-', alpha=0.75)
ax1.set_ylim([-150, 150])
ax1.axes.xaxis.set_ticklabels([])
ax1.set_ylabel('1: Alpha wave \ntrain', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)
ax2.plot(x, y_hori2, 'k-', alpha=0.5, linewidth=2.0)
ax2.plot(x[bold_hori2a], y_hori2[bold_hori2a], 'b-', alpha=0.75)
ax2.plot(x[bold_hori2b], y_hori2[bold_hori2b], 'b-', alpha=0.75)
ax2.set_ylim([-150, 150])
ax2.axes.xaxis.set_ticklabels([])
ax2.set_ylabel('2: Alpha wave \nintermittent(>50%)', rotation=0,ha='right',va='center',
fontsize=20, labelpad=10)
ax3.plot(x, y_hori3, 'k-', alpha=0.5, linewidth=2.0)
ax3.plot(x[bold_hori3a], y_hori3[bold_hori3a], 'b-', alpha=0.75)
ax3.set_ylim([-150, 150])
ax3.axes.xaxis.set_ticklabels([])
ax3.set_ylabel('3: Alpha wave \nintermittent(<50%)', rotation=0,ha='right',va='center',
fontsize=20, labelpad=10)
ax4.plot(x, y_hori4, 'g-', alpha=0.5, linewidth=2.0)
ax4.plot(x[bold_hori4a], y_hori4[bold_hori4a], 'g-', alpha=0.75)
ax4.set_ylim([-150, 150])
ax4.axes.xaxis.set_ticklabels([])
ax4.set_ylabel('4: EEG flattening', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)
ax5.plot(x, y_hori5, 'g-', alpha=0.5, linewidth=2.0)
ax5.plot(x[bold_hori4a], y_hori5[bold_hori4a], 'g-', alpha=0.75)
ax5.set_ylim([-150, 150])
ax5.axes.xaxis.set_ticklabels([])
ax5.set_ylabel('5: Ripples', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)
ax6.plot(x, y_hori6, 'k-', alpha=0.5, linewidth=2.0)
ax6.plot(x[bold_hori6a], y_hori6[bold_hori6a], 'r-', alpha=0.75)
ax6.set_ylim([-150, 150])
ax6.axes.xaxis.set_ticklabels([])
ax6.set_ylabel('6: Vertex sharp wave \nsolitary', rotation=0,ha='right',va='center',
fontsize=20, labelpad=10)
ax7.plot(x, y_hori7, 'k-', alpha=0.5, linewidth=2.0)
ax7.plot(x[bold_hori7a], y_hori7[bold_hori7a], 'r-', alpha=0.75)
ax7.plot(x[bold_hori7b], y_hori7[bold_hori7b], 'r-', alpha=0.75)
ax7.plot(x[bold_hori7c], y_hori7[bold_hori7c], 'r-', alpha=0.75)
ax7.plot(x[bold_hori7d], y_hori7[bold_hori7d], 'r-', alpha=0.75)
ax7.plot(x[bold_hori7e], y_hori7[bold_hori7e], 'r-', alpha=0.75)
ax7.set_ylim([-150, 150])
ax7.set_ylabel('7: Vertex sharp wave \nbursts', rotation=0,ha='right',va='center',
fontsize=20, labelpad=10)
ax7.axes.xaxis.set_ticklabels([])
ax8.plot(x, y_hori8, 'k-', alpha=0.5, linewidth=2.0)
ax8.plot(x[bold_hori8a], y_hori8[bold_hori8a], 'r-', alpha=0.75)
ax8.plot(x[bold_hori8b], y_hori8[bold_hori8b], 'm-', alpha=0.75)
ax8.set_ylim([-150, 150])
ax8.set_ylabel('8: Vertex sharp wave \nand incomplete spindles', rotation=0,ha='right',va='center',
fontsize=20, labelpad=10)
ax8.axes.xaxis.set_ticklabels([])
ax9.plot(x, y_hori9, 'k-', alpha=0.5, linewidth=2.0)
ax9.plot(x[bold_hori9a], y_hori9[bold_hori9a], 'm-', alpha=0.75)
ax9.set_ylim([-40, 40])
ax9.set_ylabel('9: Spindles', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)
ax9.axes.xaxis.set_ticklabels([])
ax10.plot(x, y_hori10, 'k-', alpha=0.5, linewidth=2.0)
ax10.plot(x[bold_hori10a], y_hori10[bold_hori10a], 'c-', alpha=0.75)
ax10.set_ylim([-175, 175])
ax10.set_ylabel('10: K-complexes', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)
ax10.set_xlabel('Time(seconds)', rotation=0,ha='center',va='center', fontsize=20, labelpad=10)
ax1.axes.yaxis.set_ticklabels([' ',' ',''])
ax2.axes.yaxis.set_ticklabels([' ',' ',''])
ax3.axes.yaxis.set_ticklabels([' ',' ',''])
ax4.axes.yaxis.set_ticklabels([' ',' ',''])
ax5.axes.yaxis.set_ticklabels([' ',' ',''])
ax6.axes.yaxis.set_ticklabels([' ',' ',''])
ax7.axes.yaxis.set_ticklabels([' ',' ',''])
ax8.axes.yaxis.set_ticklabels([' ',' ',''])
ax9.axes.yaxis.set_ticklabels([' ',' ',''])
ax10.axes.yaxis.set_ticklabels(['-100(uV)','','100(uV)'])
ax10.axes.yaxis.tick_right()
ax1.axes.yaxis.set_ticks([-100, 0, 100])
ax2.axes.yaxis.set_ticks([-100, 0, 100])
ax3.axes.yaxis.set_ticks([-100, 0, 100])
ax4.axes.yaxis.set_ticks([-100, 0, 100])
ax5.axes.yaxis.set_ticks([-100, 0, 100])
ax6.axes.yaxis.set_ticks([-100, 0, 100])
ax7.axes.yaxis.set_ticks([-100, 0, 100])
ax8.axes.yaxis.set_ticks([-100, 0, 100])
ax9.axes.yaxis.set_ticks([-100, 0, 100])
ax10.axes.yaxis.set_ticks([-100, 0, 100])
# Here is the label of interest
ax2.annotate('Wake', xy=(-0.85, 0.90), xytext=(-0.85, 1.00), xycoords='axes fraction',rotation='vertical',
fontsize=20, ha='center', va='center')
ax6.annotate('N1', xy=(-0.85, 1), xytext=(-0.85, 1), xycoords='axes fraction', rotation='vertical',
fontsize=20, ha='center', va='center')
ax10.annotate('N2', xy=(-0.85, 0.90), xytext=(-0.85, 1.00), xycoords='axes fraction', rotation='vertical',
fontsize=20, ha='center', va='center')
#Set up the vertex element now..
params = {'figure.figsize': (3, 6)}
pylab.rcParams.update(params)
y_hori6 = data_hori[13,]
y_hori7 = data_hori[15,]
x = list(range(0, 101))
x_spin = list(range(0, 301))
x_kcomp = list(range(0, 301))
y_hori6 = y_hori6[800:901]
y_hori7 = y_hori7[281:382]
#Vertex
bold_biphasic = slice(8, 75)
bold_monophasic = slice(8, 65)
plt.style.use('ggplot')
f, axarr = plt.subplots(2, sharey=True) # makes the 2 subplots share an axis.
f.suptitle('Vertex element', size=12, fontweight='bold')
plt.setp(axarr, xticks=[0, 50,100], xticklabels=['0', '0.5', '1'],
yticks=[-150,0, 150])
axarr[0].plot(x, y_hori6, 'k-', alpha=0.5, linewidth=2.0)
axarr[0].plot(x[bold_biphasic], y_hori6[bold_biphasic], 'r-', alpha=0.75)
axarr[0].set_title('Biphasic', fontsize=10, fontweight='bold')
axarr[0].set_ylim([-150, 150])
axarr[1].plot(x, y_hori7, 'k-', alpha=0.5, linewidth=2.0)
axarr[1].plot(x[bold_monophasic], y_hori7[bold_monophasic], 'r-', alpha=0.75)
axarr[1].set_title('Monophasic', fontsize=10, fontweight='bold')
axarr[1].set_xlabel('Time(s)')
f.text(-0.2, 0.5, 'Amp(uV)', va='center', rotation='vertical', fontsize=20)
f.subplots_adjust(hspace=0.3)
#Set up the Spindle element now..
params = {'figure.figsize': (3, 1.5)}
pylab.rcParams.update(params)
bold_spindle = slice(95, 205)
y_hori8 = data_hori[21,]
y_hori8 = y_hori8[101:402]
fspin, axarrspin = plt.subplots(1, sharey=False)  # single subplot for the spindle element
plt.setp(axarrspin, xticks=[0, 150,300], xticklabels=['0', '1.5', '3'],
yticks=[-100,0, 100])
axarrspin.plot(x_spin, y_hori8, 'k-', alpha=0.5, linewidth=2.0)
axarrspin.plot(x_spin[bold_spindle], y_hori8[bold_spindle], 'r-', alpha=0.75)
axarrspin.set_title('', fontsize=10, fontweight='bold')
axarrspin.set_ylim([-100, 100])
axarrspin.set_xlabel('Time(s)')
fspin.text(0.3, 1.5, 'Spindle element', va='center', rotation='horizontal', fontsize=12)
fspin.subplots_adjust(hspace=0.3)
#Set up the K-complex element now..
bold_kcomp = slice(20, 150)
y_hori10 = data_hori[23,]
y_hori10 = y_hori10[700:1007]
fkcomp, axarrkcomp = plt.subplots(1, sharey=False)  # single subplot for the K-complex element
plt.setp(axarrkcomp, xticks=[0, 150,300], xticklabels=['0', '1.5', '3'],
yticks=[-200,0, 200])
axarrkcomp.plot(x_kcomp, y_hori10, 'k-', alpha=0.5, linewidth=2.0)
axarrkcomp.plot(x_kcomp[bold_kcomp], y_hori10[bold_kcomp], 'r-', alpha=0.75)
axarrkcomp.set_title('', fontsize=10, fontweight='bold')
axarrkcomp.set_ylim([-200, 200])
axarrkcomp.set_xlabel('Time(s)')
fkcomp.text(0.3, 1.5, 'K-complex element', va='center', rotation='horizontal', fontsize=12)
fkcomp.subplots_adjust(hspace=0.3)
```
| github_jupyter |
```
from ceres_infer.session import workflow
from ceres_infer.models import model_infer_ens_custom
import logging
logging.basicConfig(level=logging.INFO)
params = {
# directories
'outdir_run': '../out/20.0909 Lx/L200only_reg_rf_boruta/', # output dir for the run
'outdir_modtmp': '../out/20.0909 Lx/L200only_reg_rf_boruta/model_perf/', # intermediate files for each model
'indir_dmdata_Q3': '../out/20.0817 proc_data/gene_effect/dm_data.pkl', # pickled preprocessed DepMap Q3 data
'indir_dmdata_external': '../out/20.0817 proc_data/gene_effect/dm_data_Q4.pkl', # pickled preprocessed DepMap Q4 external validation data
'indir_genesets': '../data/gene_sets/',
'indir_landmarks': '../out/19.1013 tight cluster/landmarks_n200_k200.csv', # csv file of landmarks [default: None]
# notes
'session_notes': 'L200 landmarks only; regression with random forest-boruta lite iteration',
# data
'external_data_name': 'p19q4', # name of external validation dataset
'opt_scale_data': False, # scale input data True/False
'opt_scale_data_types': '\[(?:RNA-seq|CN)\]', # data source types to scale; in regexp
'model_data_source': ['CERES_Lx'],
'anlyz_set_topN': 10, # for analysis set how many of the top features to look at
'perm_null': 1000, # number of samples used to build the null distribution, for corr
'useGene_dependency': False, # whether to use CERES gene dependency (true) or gene effect (false)
'scope': 'differential', # scope for which target genes to run on; list of gene names, or 'all', 'differential'
# model
'model_name': 'rf',
'model_params': {'n_estimators':1000,'max_depth':15,'min_samples_leaf':5,'max_features':'log2'},
'model_paramsgrid': {},
'model_pipeline': model_infer_ens_custom,
'pipeline_params': {'sf_iterThresholds': [], 'sf_topK': None},
# pipeline
'parallelize': False, # parallelize workflow
'processes': 1, # number of cpu processes to use
# analysis
'metric_eval': 'score_test', # metric in model_results to evaluate, e.g. score_test, score_oob
'thresholds': {'score_rd10': 0.1, # score of reduced model - threshold for filtering
'recall_rd10': 0.95}, # recall of reduced model - threshold for filtering
'min_gs_size': 4 # minimum gene set size, to be derived
}
wf = workflow(params)
pipeline = ['load_processed_data', 'infer']
wf.create_pipe(pipeline)
wf.run_pipe()
wf = workflow(params)
pipeline = ['load_processed_data', 'load_model_results', 'analyze', 'analyze_filtered', 'derive_genesets']
wf.create_pipe(pipeline)
wf.run_pipe()
```
| github_jupyter |
```
#Source for DBSCAN code: https://www.youtube.com/watch?v=5cOhL4B5waU&t=918s
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors
from scipy.spatial.distance import euclidean
import numpy as np
import numpy.matlib
plt.style.use('ggplot')
%matplotlib inline
X, label = make_moons(n_samples=200, noise=0.1,random_state=19)
#print(X[:5,])
fig, ax = plt.subplots(figsize=(10,8))
sctr1 = ax.scatter(X[:,0],X[:,1],s=140,alpha=0.9)
len(X)
#Using a kNN distance plot to find eps
#Using k=12
#Source - https://scikit-learn.org/stable/modules/neighbors.html
nbrs = NearestNeighbors(n_neighbors=12).fit(X)
distances, indices = nbrs.kneighbors(X)
#print(distances)
sortedDistancesInc = sorted(distances[:,11],reverse=False)
plt.plot(list(range(1,len(X)+1)), sortedDistancesInc)
#plt.show()
#Figuring out how to automatically get epsilon from the graph
#The elbow point is the point on the curve with the maximum absolute second derivative
#Source: https://dataplatform.cloud.ibm.com/analytics/notebooks/54d79c2a-f155-40ec-93ec-ed05b58afa39/view?access_token=6d8ec910cf2a1b3901c721fcb94638563cd646fe14400fecbb76cea6aaae2fb1
x = list(range(1,len(X)+1))
y = sortedDistancesInc
kNNdata = np.vstack((x,y)).T
nPoints = len(x)
#print(kNNdata)
#Drawing a line from the first point to the last point on the curve
firstPoint = kNNdata[0]
lastPoint = kNNdata[-1]
plt.scatter(firstPoint[0],firstPoint[1], c='blue',s=10)
plt.scatter(lastPoint[0],lastPoint[1], c='blue',s=10)
lv = lastPoint - firstPoint #Finding a vector between the first and last point
lvn = lv/np.linalg.norm(lv)#Normalizing the vector
plt.plot([firstPoint[0],lastPoint[0]],[firstPoint[1],lastPoint[1]])
#plt.show()
#Finding the distance to the line
vecFromFirst = kNNdata - firstPoint
scalarProduct = np.sum(vecFromFirst * np.matlib.repmat(lvn, nPoints, 1), axis=1)
vecFromFirstParallel = np.outer(scalarProduct, lvn)
vecToLine = vecFromFirst - vecFromFirstParallel
# distance to line is the norm of vecToLine
distToLine = np.sqrt(np.sum(vecToLine ** 2, axis=1))
# knee/elbow is the point with max distance value
idxOfBestPoint = np.argmax(distToLine)
print ("Knee of the curve is at index =",idxOfBestPoint)
print ("Knee value =", kNNdata[idxOfBestPoint])
plt.scatter(kNNdata[idxOfBestPoint][0],kNNdata[idxOfBestPoint][1])
plt.show()
#DBSCAN
model = DBSCAN(eps=kNNdata[idxOfBestPoint][1],min_samples=12).fit(X)
print(model)
model.labels_
#To know which points are the core points
model.core_sample_indices_
#Visualizing clusters
#fig, ax = plt.subplots(figsize=(10,8))
#sctr2 = ax.scatter(X[:,0],X[:,1],c=model.labels_,s=140,alpha=0.9,cmap=plt.cm.Set1)
#fig.show()
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2)
kmeans_labels = kmeans.fit_predict(X)
plt.scatter(X[:,0], X[:,1], c=kmeans_labels)
plt.show()
kmeans_score = DBCV(X, kmeans_labels, dist_function=euclidean)
print(kmeans_score)
print(kmeans_labels)
import hdbscan
print(X)
hdbscanner = hdbscan.HDBSCAN()
hdbscan_labels = hdbscanner.fit_predict(X)
plt.scatter(X[:,0], X[:,1], c=hdbscan_labels)
hdbscan_score = DBCV(X, hdbscan_labels, dist_function=euclidean)
print(hdbscan_score)
'''
Cluster validation
DBCV evaluates the within- and between-cluster density connectedness of clustering results
by measuring the least dense region inside a cluster and the most dense region between clusters.
A relative measure for evaluating density-based clustering should be defined by means of densities
rather than by distances.
Step 1: Define the all-points core distance (APCD) for each object in a cluster
Step 2: Define mutual reachability distance (MRD) for every pair of points in a cluster
Step 3: Build a fully connected graph G, for each cluster, based on mutual reachability distance
Step 4: Find the minimum spanning tree of G
Step 5: Using the MST define the density sparseness and density separation of each cluster
Density sparseness - maximum edge of the MST - can be interpreted as the area with the lowest density inside the cluster
Density separation - minimum MRD between the objects of two clusters - can be interpreted as the maximum density area between the clusters
Step 6: Compute the validity index of a cluster (VC)
Step 7: Compute the validity index of the clustering solution
Note: Distances are Euclidean
'''
dbscan_score = DBCV(X,model.labels_,dist_function=euclidean)
print(model.labels_)
print(dbscan_score)
"""
Implementation of Density-Based Clustering Validation "DBCV"
Citation:
Moulavi, Davoud, et al. "Density-based clustering validation."
Proceedings of the 2014 SIAM International Conference on Data Mining.
Society for Industrial and Applied Mathematics, 2014.
"""
import numpy as np
from scipy.spatial.distance import euclidean, cdist
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.sparse import csgraph
def DBCV(X, labels, dist_function=euclidean):
"""
Density Based clustering validation
Args:
X (np.ndarray): ndarray with dimensions [n_samples, n_features]
data to check validity of clustering
labels (np.array): clustering assignments for data X
        dist_function (func): function to determine distance between objects
            func args must be [np.array, np.array] where each array is a point
Returns: cluster_validity (float)
score in range[-1, 1] indicating validity of clustering assignments
"""
graph = _mutual_reach_dist_graph(X, labels, dist_function)
mst = _mutual_reach_dist_MST(graph)
cluster_validity = _clustering_validity_index(mst, labels)
return cluster_validity
def _core_dist(point, neighbors, dist_function):
"""
Computes the core distance of a point.
Core distance is the inverse density of an object.
Args:
point (np.array): array of dimensions (n_features,)
point to compute core distance of
neighbors (np.ndarray): array of dimensions (n_neighbors, n_features):
array of all other points in object class
        dist_function (func): function to determine distance between objects
            func args must be [np.array, np.array] where each array is a point
Returns: core_dist (float)
inverse density of point
"""
    n_features = np.shape(point)[0]
    # neighbors has shape (n_neighbors, n_features): count rows, not columns
    n_neighbors = np.shape(neighbors)[0]
distance_vector = cdist(point.reshape(1, -1), neighbors)
distance_vector = distance_vector[distance_vector != 0]
numerator = ((1/distance_vector)**n_features).sum()
core_dist = (numerator / (n_neighbors)) ** (-1/n_features)
return core_dist
def _mutual_reachability_dist(point_i, point_j, neighbors_i,
neighbors_j, dist_function):
""".
Computes the mutual reachability distance between points
Args:
point_i (np.array): array of dimensions (n_features,)
point i to compare to point j
point_j (np.array): array of dimensions (n_features,)
point i to compare to point i
neighbors_i (np.ndarray): array of dims (n_neighbors, n_features):
array of all other points in object class of point i
neighbors_j (np.ndarray): array of dims (n_neighbors, n_features):
array of all other points in object class of point j
        dist_function (func): function to determine distance between objects
            func args must be [np.array, np.array] where each array is a point
Returns: mutual_reachability (float)
mutual reachability between points i and j
"""
core_dist_i = _core_dist(point_i, neighbors_i, dist_function)
core_dist_j = _core_dist(point_j, neighbors_j, dist_function)
dist = dist_function(point_i, point_j)
mutual_reachability = np.max([core_dist_i, core_dist_j, dist])
return mutual_reachability
def _mutual_reach_dist_graph(X, labels, dist_function):
"""
Computes the mutual reach distance complete graph.
Graph of all pair-wise mutual reachability distances between points
Args:
X (np.ndarray): ndarray with dimensions [n_samples, n_features]
data to check validity of clustering
labels (np.array): clustering assignments for data X
        dist_function (func): function to determine distance between objects
            func args must be [np.array, np.array] where each array is a point
Returns: graph (np.ndarray)
array of dimensions (n_samples, n_samples)
Graph of all pair-wise mutual reachability distances between points.
"""
n_samples = np.shape(X)[0]
graph = []
counter = 0
for row in range(n_samples):
graph_row = []
for col in range(n_samples):
point_i = X[row]
point_j = X[col]
class_i = labels[row]
class_j = labels[col]
members_i = _get_label_members(X, labels, class_i)
members_j = _get_label_members(X, labels, class_j)
dist = _mutual_reachability_dist(point_i, point_j,
members_i, members_j,
dist_function)
graph_row.append(dist)
counter += 1
graph.append(graph_row)
graph = np.array(graph)
return graph
def _mutual_reach_dist_MST(dist_tree):
"""
Computes minimum spanning tree of the mutual reach distance complete graph
Args:
dist_tree (np.ndarray): array of dimensions (n_samples, n_samples)
Graph of all pair-wise mutual reachability distances
between points.
Returns: minimum_spanning_tree (np.ndarray)
array of dimensions (n_samples, n_samples)
minimum spanning tree of all pair-wise mutual reachability
distances between points.
"""
mst = minimum_spanning_tree(dist_tree).toarray()
return mst + np.transpose(mst)
def _cluster_density_sparseness(MST, labels, cluster):
"""
Computes the cluster density sparseness, the minimum density
within a cluster
Args:
MST (np.ndarray): minimum spanning tree of all pair-wise
mutual reachability distances between points.
labels (np.array): clustering assignments for data X
cluster (int): cluster of interest
Returns: cluster_density_sparseness (float)
value corresponding to the minimum density within a cluster
"""
indices = np.where(labels == cluster)[0]
cluster_MST = MST[indices][:, indices]
cluster_density_sparseness = np.max(cluster_MST)
return cluster_density_sparseness
def _cluster_density_separation(MST, labels, cluster_i, cluster_j):
"""
Computes the density separation between two clusters, the maximum
density between clusters.
Args:
MST (np.ndarray): minimum spanning tree of all pair-wise
mutual reachability distances between points.
labels (np.array): clustering assignments for data X
cluster_i (int): cluster i of interest
cluster_j (int): cluster j of interest
Returns: density_separation (float):
value corresponding to the maximum density between clusters
"""
indices_i = np.where(labels == cluster_i)[0]
indices_j = np.where(labels == cluster_j)[0]
shortest_paths = csgraph.dijkstra(MST, indices=indices_i)
relevant_paths = shortest_paths[:, indices_j]
density_separation = np.min(relevant_paths)
return density_separation
def _cluster_validity_index(MST, labels, cluster):
"""
    Computes the validity of a cluster (validity of assignments)
Args:
MST (np.ndarray): minimum spanning tree of all pair-wise
mutual reachability distances between points.
labels (np.array): clustering assignments for data X
cluster (int): cluster of interest
Returns: cluster_validity (float)
value corresponding to the validity of cluster assignments
"""
min_density_separation = np.inf
for cluster_j in np.unique(labels):
if cluster_j != cluster:
cluster_density_separation = _cluster_density_separation(MST,
labels,
cluster,
cluster_j)
if cluster_density_separation < min_density_separation:
min_density_separation = cluster_density_separation
cluster_density_sparseness = _cluster_density_sparseness(MST,
labels,
cluster)
numerator = min_density_separation - cluster_density_sparseness
denominator = np.max([min_density_separation, cluster_density_sparseness])
cluster_validity = numerator / denominator
return cluster_validity
def _clustering_validity_index(MST, labels):
"""
Computes the validity of all clustering assignments for a
clustering algorithm
Args:
MST (np.ndarray): minimum spanning tree of all pair-wise
mutual reachability distances between points.
labels (np.array): clustering assignments for data X
Returns: validity_index (float):
score in range[-1, 1] indicating validity of clustering assignments
"""
n_samples = len(labels)
validity_index = 0
for label in np.unique(labels):
fraction = np.sum(labels == label) / float(n_samples)
cluster_validity = _cluster_validity_index(MST, labels, label)
validity_index += fraction * cluster_validity
return validity_index
def _get_label_members(X, labels, cluster):
"""
Helper function to get samples of a specified cluster.
Args:
X (np.ndarray): ndarray with dimensions [n_samples, n_features]
data to check validity of clustering
labels (np.array): clustering assignments for data X
cluster (int): cluster of interest
Returns: members (np.ndarray)
array of dimensions (n_samples, n_features) of samples of the
specified cluster.
"""
indices = np.where(labels == cluster)[0]
members = X[indices]
return members
```
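The elbow-detection logic in the DBSCAN cell above (the point of maximum distance from the sorted kNN-distance curve to the chord between its endpoints) can be packaged as a small helper. This is an illustrative refactor of that same computation, with hypothetical names, not part of the notebook's original API:

```python
import numpy as np

def knee_index(y):
    # Elbow heuristic: the knee is the point of maximum perpendicular
    # distance from the curve to the chord joining its endpoints.
    x = np.arange(len(y), dtype=float)
    pts = np.column_stack([x, np.asarray(y, dtype=float)])
    first, last = pts[0], pts[-1]
    line = (last - first) / np.linalg.norm(last - first)  # unit chord direction
    vec = pts - first
    proj = np.outer(vec @ line, line)          # component along the chord
    dist = np.linalg.norm(vec - proj, axis=1)  # distance to the chord
    return int(np.argmax(dist))

# Flat-then-steep toy curve: the knee sits at the bend.
print(knee_index([0, 0.1, 0.2, 0.3, 3.0, 6.0]))  # 3
```

The y-value at the returned index is then used as `eps` for DBSCAN, as in the cell above.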
| github_jupyter |
# sift down
```
# python3
class HeapBuilder:
def __init__(self):
self._swaps = [] #array of tuples or arrays
self._data = []
def ReadData(self):
n = int(input())
self._data = [int(s) for s in input().split()]
assert n == len(self._data)
def WriteResponse(self):
print(len(self._swaps))
for swap in self._swaps:
print(swap[0], swap[1])
def swapdown(self,i):
n = len(self._data)
min_index = i
l = 2*i+1 if (2*i+1<n) else -1
r = 2*i+2 if (2*i+2<n) else -1
if l != -1 and self._data[l] < self._data[min_index]:
min_index = l
if r != - 1 and self._data[r] < self._data[min_index]:
min_index = r
if i != min_index:
self._swaps.append((i, min_index))
self._data[i], self._data[min_index] = \
self._data[min_index], self._data[i]
self.swapdown(min_index)
def GenerateSwaps(self):
for i in range(len(self._data)//2 ,-1,-1):
self.swapdown(i)
def Solve(self):
self.ReadData()
self.GenerateSwaps()
self.WriteResponse()
if __name__ == '__main__':
heap_builder = HeapBuilder()
heap_builder.Solve()
```
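A compact, self-contained restatement of the sift-down heapify above, stripped of the stdin/stdout wrapper (illustrative sketch only):

```python
def sift_down(a, i, swaps):
    # Swap a[i] with its smaller child until the min-heap property holds.
    n = len(a)
    min_index = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and a[l] < a[min_index]:
        min_index = l
    if r < n and a[r] < a[min_index]:
        min_index = r
    if min_index != i:
        swaps.append((i, min_index))
        a[i], a[min_index] = a[min_index], a[i]
        sift_down(a, min_index, swaps)

def heapify(a):
    # Sift down every non-leaf node, from the middle of the array to the root.
    swaps = []
    for i in range(len(a) // 2, -1, -1):
        sift_down(a, i, swaps)
    return swaps

data = [5, 4, 3, 2, 1]
swaps = heapify(data)
# Min-heap property: every parent <= its children.
assert all(data[(i - 1) // 2] <= data[i] for i in range(1, len(data)))
print(swaps, data)  # [(1, 4), (0, 1), (1, 3)] [1, 2, 3, 5, 4]
```

Because each sift-down costs at most the height of its subtree, this heapify runs in O(n) total.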
# sift up initialized swap array
```
%%time
# python3
class HeapBuilder:
    def __init__(self, n):
        self._swaps = [[None, None]] * 4 * n  # preallocated swap array (at most 4n swaps)
        self._data = []
        self.n = n
        self.index = 0  # number of swaps recorded so far
    def ReadData(self):
        self._data = [int(s) for s in input().split()]
        assert self.n == len(self._data)
    def WriteResponse(self):
        print(self.index)
        for k in range(self.index):
            print(self._swaps[k][0], self._swaps[k][1])
    def swapup(self, i):
        # Bubble element i up while it is smaller than its parent,
        # writing each swap into the preallocated array.
        if i != 0:
            parent = (i - 1) // 2
            if self._data[parent] > self._data[i]:
                self._swaps[self.index] = (parent, i)
                self.index += 1
                self._data[parent], self._data[i] = self._data[i], self._data[parent]
                self.swapup(parent)
    def GenerateSwaps(self):
        # Sift each element up, iterating from the last index toward the root.
        # (Replaces the naive selection-sort approach, which gave a quadratic
        # number of swaps.)
        for i in range(self.n - 1, 0, -1):
            self.swapup(i)
def Solve(self):
self.ReadData()
self.GenerateSwaps()
self.WriteResponse()
if __name__ == '__main__':
n = int(input())
heap_builder = HeapBuilder(n)
heap_builder.Solve()
assert(len(heap_builder._swaps)<=4*len(heap_builder._data))
a = [None]*4
for i in range(4):
a[i] = (i,i+1000000)
print(a)
k = 100
for i in range(k,0,-1):
print(i,end = ' ')
%%time
# python3
class HeapBuilder:
def __init__(self):
self._swaps = [] #array of tuples or arrays
self._data = []
def ReadData(self):
n = int(input())
self._data = [int(s) for s in input().split()]
assert n == len(self._data)
def WriteResponse(self):
print(len(self._swaps))
for swap in self._swaps:
print(swap[0], swap[1])
def swapup(self,i):
if i !=0:
if self._data[int((i-1)/2)]> self._data[i]:
self._swaps.append(((int((i-1)/2)),i))
self._data[int((i-1)/2)], self._data[i] = self._data[i],self._data[int((i-1)/2)]
self.swapup(int((i-1)/2))
def GenerateSwaps(self):
for i in range(len(self._data)-1,0,-1):
self.swapup(i)
def Solve(self):
self.ReadData()
self.GenerateSwaps()
self.WriteResponse()
if __name__ == '__main__':
heap_builder = HeapBuilder()
heap_builder.Solve()
26148864/536870912
0.30/3.00
# python3


class HeapBuilder:
    """Converts an array of integers into a min-heap.

    A binary heap is a complete binary tree which satisfies the heap ordering
    property: the value of each node is greater than or equal to the value of
    its parent, with the minimum-value element at the root.

    Samples:
    >>> heap = HeapBuilder()
    >>> heap.array = [5, 4, 3, 2, 1]
    >>> heap.generate_swaps()
    >>> heap.swaps
    [(1, 4), (0, 1), (1, 3)]
    >>> # Explanation: After swapping elements 4 in position 1 and 1 in position
    >>> # 4 the array becomes 5 1 3 2 4. After swapping elements 5 in position 0
    >>> # and 1 in position 1 the array becomes 1 5 3 2 4. After swapping
    >>> # elements 5 in position 1 and 2 in position 3 the array becomes
    >>> # 1 2 3 5 4, which is already a heap, because a[0] = 1 < 2 = a[1],
    >>> # a[0] = 1 < 3 = a[2], a[1] = 2 < 5 = a[3], a[1] = 2 < 4 = a[4].

    >>> heap = HeapBuilder()
    >>> heap.array = [1, 2, 3, 4, 5]
    >>> heap.generate_swaps()
    >>> heap.swaps
    []
    >>> # Explanation: The input array is already a heap, because it is sorted
    >>> # in increasing order.
    """

    def __init__(self):
        self.swaps = []
        self.array = []

    @property
    def size(self):
        return len(self.array)

    def read_data(self):
        """Reads data from standard input."""
        n = int(input())
        self.array = [int(s) for s in input().split()]
        assert n == self.size

    def write_response(self):
        """Writes the response to standard output."""
        print(len(self.swaps))
        for swap in self.swaps:
            print(swap[0], swap[1])

    def l_child_index(self, index):
        """Returns the index of the left child.

        If there's no left child, returns -1.
        """
        l_child_index = 2 * index + 1
        if l_child_index >= self.size:
            return -1
        return l_child_index

    def r_child_index(self, index):
        """Returns the index of the right child.

        If there's no right child, returns -1.
        """
        r_child_index = 2 * index + 2
        if r_child_index >= self.size:
            return -1
        return r_child_index

    def sift_down(self, i):
        """Sifts the i-th node down until both of its children have bigger values.

        At each swap, the indices of the swapped nodes are appended
        to the HeapBuilder.swaps attribute.
        """
        min_index = i
        l = self.l_child_index(i)
        r = self.r_child_index(i)
        if l != -1 and self.array[l] < self.array[min_index]:
            min_index = l
        if r != -1 and self.array[r] < self.array[min_index]:
            min_index = r
        if i != min_index:
            self.swaps.append((i, min_index))
            self.array[i], self.array[min_index] = \
                self.array[min_index], self.array[i]
            self.sift_down(min_index)

    def generate_swaps(self):
        """Heapify procedure.

        It sifts down every node from index size // 2 down to 0, which is
        enough to establish the heap property for the whole array.
        """
        for i in range(self.size // 2, -1, -1):
            self.sift_down(i)

    def solve(self):
        self.read_data()
        self.generate_swaps()
        self.write_response()


if __name__ == "__main__":
    heap_builder = HeapBuilder()
    heap_builder.solve()
```
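As a cross-check, the standard library's `heapq` module performs the same min-heapify. It does not record the swaps, but the resulting ordering property is identical:

```python
import heapq

a = [5, 4, 3, 2, 1]
heapq.heapify(a)  # in-place min-heapify, same ordering property as above

# Every node is greater than or equal to its parent (parent of i is (i - 1) // 2).
assert all(a[(i - 1) // 2] <= a[i] for i in range(1, len(a)))
print(a)
```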
If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets as well as other dependencies. Uncomment the following cell and run it.
```
#! pip install datasets transformers rouge-score nltk
```
If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.
You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq).
# Fine-tuning a model on a summarization task
In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models for a summarization task. We will use the [XSum dataset](https://arxiv.org/pdf/1808.08745.pdf) (for extreme summarization) which contains BBC articles accompanied with single-sentence summaries.

We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using the `Trainer` API.
```
model_checkpoint = "t5-small"
```
This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`t5-small`](https://huggingface.co/t5-small) checkpoint.
## Loading the dataset
We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`.
```
from datasets import load_dataset, load_metric
raw_datasets = load_dataset("xsum")
metric = load_metric("rouge")
```
The `dataset` object itself is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key each for the training, validation, and test sets:
```
raw_datasets
```
To access an actual element, you need to select a split first, then give an index:
```
raw_datasets["train"][0]
```
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.
```
import datasets
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=5):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset) - 1)
        while pick in picks:
            pick = random.randint(0, len(dataset) - 1)
        picks.append(pick)

    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, datasets.ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
    display(HTML(df.to_html()))

show_random_elements(raw_datasets["train"])
```
The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):
```
metric
```
You can call its `compute` method with your predictions and labels, which need to be lists of decoded strings:
```
fake_preds = ["hello there", "general kenobi"]
fake_labels = ["hello there", "general kenobi"]
metric.compute(predictions=fake_preds, references=fake_labels)
```
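To build some intuition for what ROUGE measures, here is a toy, self-contained sketch of ROUGE-1 (unigram-overlap F1). It is an illustration only, not the `rouge_score` implementation the metric actually uses:

```python
from collections import Counter

def rouge1_f1(prediction, reference):
    """Toy ROUGE-1: F1 score over overlapping unigram counts."""
    pred_counts = Counter(prediction.split())
    ref_counts = Counter(reference.split())
    overlap = sum((pred_counts & ref_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("hello there", "hello there"))      # identical strings score 1.0
print(rouge1_f1("general kenobi", "hello there"))   # no overlap scores 0.0
```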
## Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that the model requires.
To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:
- we get a tokenizer that corresponds to the model architecture we want to use,
- we download the vocabulary used when pretraining this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```
By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.
You can directly call this tokenizer on one sentence or a pair of sentences:
```
tokenizer("Hello, this one sentence!")
```
Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.
Instead of one sentence, we can pass along a list of sentences:
```
tokenizer(["Hello, this one sentence!", "This is another sentence."])
```
To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:
```
with tokenizer.as_target_tokenizer():
    print(tokenizer(["Hello, this one sentence!", "This is another sentence."]))
```
If you are using one of the five T5 checkpoints, we have to prefix the inputs with "summarize:" (the model can also translate and it needs the prefix to know which task it has to perform).
```
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
    prefix = "summarize: "
else:
    prefix = ""
```
We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model. The padding will be dealt with later on (in a data collator), so we pad examples to the longest length in the batch and not the whole dataset.
```
max_input_length = 1024
max_target_length = 128
def preprocess_function(examples):
    inputs = [prefix + doc for doc in examples["document"]]
    model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)

    # Setup the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(examples["summary"], max_length=max_target_length, truncation=True)

    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:
```
preprocess_function(raw_datasets['train'][:2])
```
To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.
```
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
```
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to `map` has changed (and thus requires not using the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.
Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently.
## Fine-tuning the model
Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is of the sequence-to-sequence kind, we use the `AutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us.
```
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```
Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case.
To instantiate a `Seq2SeqTrainer`, we will need to define three more things. The most important is the [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments), which is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:
```
batch_size = 16
args = Seq2SeqTrainingArguments(
"test-summarization",
evaluation_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=1,
predict_with_generate=True,
fp16=True,
)
```
Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the `batch_size` defined at the top of the cell, and customize the weight decay. Since the `Seq2SeqTrainer` will save the model regularly and our dataset is quite large, we tell it to keep at most three saved checkpoints. Lastly, we use the `predict_with_generate` option (to properly generate summaries) and activate mixed precision training (to go a bit faster).
Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels:
```
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
```
The last thing to define for our `Seq2SeqTrainer` is how to compute the metrics from the predictions. We need to define a function for this, which will just use the `metric` we loaded earlier, and we have to do a bit of pre-processing to decode the predictions into texts:
```
import nltk
import numpy as np
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    # Rouge expects a newline after each sentence
    decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]

    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    # Extract a few results
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}

    # Add mean generated length
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)

    return {k: round(v, 4) for k, v in result.items()}
```
Then we just need to pass all of this along with our datasets to the `Seq2SeqTrainer`:
```
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
```
We can now fine-tune our model by just calling the `train` method:
```
trainer.train()
```
Don't forget to [upload your model](https://huggingface.co/transformers/model_sharing.html) on the [🤗 Model Hub](https://huggingface.co/models). You can then use it to generate results like the one shown in the first picture of this notebook!
# Welcome to Python 101
<a href="http://pyladies.org"><img align="right" src="http://www.pyladies.com/assets/images/pylady_geek.png" alt="Pyladies" style="position:relative;top:-80px;right:30px;height:50px;" /></a>
Welcome! This notebook is appropriate for people who have never programmed before. A few tips:
- To execute a cell, click in it and then type `[shift]` + `[enter]`
- This notebook's kernel will restart if the page becomes idle for 10 minutes, meaning you'll have to rerun steps again
- Try.jupyter.org is awesome, and <a href="http://rackspace.com">Rackspace</a> is awesome for hosting this, but you will want your own Python on your computer too. Hopefully you are in a class and someone helped you install. If not:
+ [Anaconda][anaconda-download] is great if you use Windows
or will only use Python for data analysis.
+ If you want to contribute to open source code, you want the standard
Python release. (Follow
the [Hitchhiker's Guide to Python][python-guide].)
## Outline
- Operators and functions
- Data and container types
- Control structures
- I/O, including basic web APIs
- How to write and run a Python script
[anaconda-download]: http://continuum.io/downloads
[python-guide]: http://docs.python-guide.org/
### First, try Python as a calculator.
Python can be used as a shell interpreter. After you install Python, you can open a command line terminal (*e.g.* powershell or bash), type `python3` or `python`, and a Python shell will open.
For now, we are using the notebook.
Here is simple math. Go to town!
```
1 + 1
3 / 4 # caution: in Python 2 the result will be an integer
7 ** 3
```
## Challenge for you
The arithmetic operators in Python are:
```python
+ - * / ** % //
```
Use the Python interpreter to calculate:
- 16 times 26515
- 1835 [modulo][wiki-modulo] 163
<p style="font-size:smaller">(psst...)
If you're stuck, try</p>
```python
help()
```
<p style="font-size:smaller">and then in the interactive box, type <tt>symbols</tt>
</p>
[wiki-modulo]: https://en.wikipedia.org/wiki/Modulo_operation
## More math requires the math module
```
import math
print("The square root of 3 is:", math.sqrt(3))
print("pi is:", math.pi)
print("The sin of 90 degrees is:", math.sin(math.radians(90)))
```
- The `import` statement imports the module into the namespace
- Then access functions (or constants) by using:
```python
<module>.<function>
```
- And get help on what is in the module by using:
```python
help(<module>)
```
## Challenge for you
Hint: `help(math)` will show all the functions...
- What is the arc cosine of `0.743144` in degrees?
```
from math import acos, degrees # use 'from' sparingly
int(degrees(acos(0.743144))) # 'int' to make an integer
```
## Math takeaways
- Operators are what you think
- Be careful of unintended integer math
- the `math` module has the remaining functions
# Strings
(Easier in Python than in any other language ever. Even Perl.)
## Strings
Use `help(str)` to see available functions for string objects. For help on a particular function from the class, type the class name and the function name: `help(str.join)`
String operations are easy:
```
s = "foobar"
"bar" in s
s.find("bar")
index = s.find("bar")
s[:index]
s[index:] + " this is intuitive! Hooray!"
s[-1] # The last element in the list or string
```
Strings are **immutable**, meaning they cannot be modified, only copied or replaced. (This is related to memory use, and interesting for experienced programmers ... don't worry if you don't get what this means.)
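Here is a tiny sketch of what immutability means in practice: item assignment fails with a `TypeError`, so you build a new string instead.

```python
s = "foobar"
try:
    s[0] = "F"  # strings do not support item assignment
except TypeError as e:
    print("immutable:", e)

s = "F" + s[1:]  # build a modified copy instead
print(s)         # Foobar
```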
```
# Here's to start.
s = "foobar"
"bar" in s
# You try out the other ones!
s.find("bar")
```
## Challenge for you
Using only string addition (concatenation) and the function `str.join`, combine `declaration` and `sayings`:
```python
declaration = "We are the knights who say:\n"
sayings = ['"icky"'] * 3 + ['"p\'tang"']
# the (\') escapes the quote
```
to a variable, `sentence`, that when printed does this:
```python
>>> print(sentence)
We are the knights who say:
"icky", "icky", "icky", "p'tang"!
```
```
help(str.join)
declaration = "We are now the knights who say:\n"
sayings = ['"icky"'] * 3 + ['"p\'tang"']
# You do the rest -- fix the below :-)
print(sayings)
```
### Don't peek until you're done with your own code!
```
sentence = declaration + ", ".join(sayings) + "!"
print(sentence)
print() # empty 'print' makes a newline
# By the way, you use 'split' to split a string...
# (note what happens to the commas):
print(" - ".join(['ni'] * 12))
print("\n".join("icky, icky, icky, p'tang!".split(", ")))
```
## String formatting
There are a bunch of ways to do string formatting:
- C-style:
```python
"%s is: %.3f (or %d in Indiana)" % \
("Pi", math.pi, math.pi)
# %s = string
# %.3f = floating-point number, 3 digits after the decimal point
# %d = integer
#
# Style notes:
# Line continuation with '\' works but
# is frowned upon. Indent twice
# (8 spaces) so it doesn't look
# like a control statement
```
```
print("%s is: %.3f (well, %d in Indiana)" % ("Pi", math.pi, math.pi))
```
- New in Python 2.6, `str.format` doesn't require types:
```python
"{0} is: {1} ({1:3.2} truncated)".format(
"Pi", math.pi)
# More style notes:
# Line continuation in square or curly
# braces or parenthesis is better.
```
```
# Use a colon and then decimals to control the
# number of decimals that print out.
#
# Also note the number {1} appears twice, so that
# the argument `math.pi` is reused.
print("{0} is: {1} ({1:.3} truncated)".format("Pi", math.pi))
```
- And Python 2.7+ allows named specifications:
```python
"{pi} is {pie:05.3}".format(
pi="Pi", pie=math.pi)
# 05.3 = zero-padded number, with
# 5 total characters, and
# 3 significant digits.
```
```
# Go to town -- change the decimal places!
print("{pi} is: {pie:05.2}".format(pi="Pi", pie=math.pi))
```
## String takeaways
- `str.split` and `str.join`, plus the **re** module (pattern-matching tools for strings), make Python my language of choice for data manipulation
- There are many ways to format a string
- `help(str)` for more
# Quick look at other types
```
# Boolean
x = True
type(x)
```
## Python has containers built in...
Lists, dictionaries, sets. We will talk about them later.
There is also a library [`collections`][collections] with additional specialized container types.
[collections]: https://docs.python.org/3/library/collections.html
```
# Lists can contain multiple types
x = [True, 1, 1.2, 'hi', [1], (1,2,3), {}, None]
type(x)
# List access. Try other numbers!
x[1]
print("x[0] is:", x[0], "... and x[1] is:", x[1]) # Python is zero-indexed
x.append(set(["a", "b", "c"]))
for item in x:
    print(item, "... type =", type(item))
```
If you need to check an object's type, do this:
```python
isinstance(x, list)
isinstance(x[1], bool)
```
```
# You do it!
isinstance(x, tuple)
```
## Caveat
Lists, when copied, are copied by pointer. What that means is that every name that points to a list points to that same list.
Same with dictionaries and sets.
### Example:
```python
fifth_element = x[4]
fifth_element.append("Both!")
print(fifth_element)
print(x)
```
Why? The assignment (`=`) operator copies the pointer to the place on the computer where the list (or dictionary or set) is: it does not copy the actual contents of the whole object, just the address where the data is in the computer. This is efficient because the object could be megabytes big.
```
# You do it!
fifth_element = x[4]
print(fifth_element)
fifth_element.append("Both!")
print(fifth_element)
# and see, the original list is changed too!
print(x)
```
### To make a duplicate copy you must do it explicitly
[The copy module][copy]
Example:
```python
import copy
# -------------------- A shallow copy
x[4] = ["list"]
shallow_copy_of_x = copy.copy(x)
shallow_copy_of_x[0] = "Shallow copy"
fifth_element = x[4]
fifth_element.append("Both?")
def print_list(l):
    print("-" * 10)
    for elem in l:
        print(elem)
    print()
# look at them
print_list(shallow_copy_of_x)
print_list(x)
fifth_element
```
[copy]: https://docs.python.org/3/library/copy.html
```
import copy
# -------------------- A shallow copy
x[4] = ["list"]
shallow_copy_of_x = copy.copy(x)
shallow_copy_of_x[0] = "Shallow copy"
fifth_element = x[4]
fifth_element.append("Both?")
# look at them
def print_list(l):
    print("-" * 8, "the list, element-by-element", "-" * 8)
    for elem in l:
        print(elem)
    print()
print_list(shallow_copy_of_x)
print_list(x)
```
## And here is a deep copy
```python
# -------------------- A deep copy
x[4] = ["list"]
deep_copy_of_x = copy.deepcopy(x)
deep_copy_of_x[0] = "Deep copy"
fifth_element = deep_copy_of_x[4]
fifth_element.append("Both?")
# look at them
print_list(deep_copy_of_x)
print_list(x)
fifth_element
```
```
# -------------------- A deep copy
x[4] = ["list"]
deep_copy_of_x = copy.deepcopy(x)
deep_copy_of_x[0] = "Deep copy"
fifth_element = deep_copy_of_x[4]
fifth_element.append("Both? -- no, just this one got it!")
# look at them
print(fifth_element)
print("\nand...the fifth element in the original list:")
print(x[4])
```
## Common atomic types
<table style="border:3px solid white;"><tr>
<td> boolean</td>
<td> integer </td>
<td> float </td>
<td>string</td>
<td>None</td>
</tr><tr>
<td><tt>True</tt></td>
<td><tt>42</tt></td>
<td><tt>42.0</tt></td>
<td><tt>"hello"</tt></td>
<td><tt>None</tt></td>
</tr></table>
## Common container types
<table style="border:3px solid white;"><tr>
<td> list </td>
<td> tuple </td>
<td> set </td>
<td>dictionary</td>
</tr><tr style="font-size:smaller;">
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Mutable</li>
<li>No restriction on elements</li>
<li>Elements are ordered</li></ul></td>
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Immutable</li>
<li>Elements must be hashable</li>
<li>Elements are ordered</li></ul></td>
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Mutable</li>
<li>Elements are<br/>
unique and must<br/>
be hashable</li>
<li>Elements are not ordered</li></ul></td>
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Mutable</li>
<li>Key, value pairs.<br/>
Keys are unique and<br/>
must be hashable</li>
<li>Keys are not ordered</li></ul></td>
</tr></table>
### Iterable
You can loop over it
### Mutable
You can change it
### Hashable
A hash function converts an object to a number that will always be the same for the object. They help with identifying the object. A better explanation kind of has to go into the guts of the code...
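A small illustration: immutable built-ins are hashable, while mutable containers are not.

```python
print(hash("hello"))    # strings hash to a stable integer
print(hash((1, 2, 3)))  # tuples of hashable items are hashable

try:
    hash([1, 2, 3])     # lists are mutable, so they are not hashable
except TypeError as e:
    print("not hashable:", e)
```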
# Container examples
## List
- To make a list, use square braces.
```
l = ["a", 0, [1, 2] ]
l[1] = "second element"
type(l)
print(l)
```
- Items in a list can be anything: <br/>
sets, other lists, dictionaries, atoms
```
indices = range(len(l))
print(indices)
# Iterate over the indices using i=0, i=1, i=2
for i in indices:
    print(l[i])

# Or iterate over the items in `l` directly
for x in l:
    print(x)
```
## Tuple
To make a tuple, use parenthesis.
```
t = ("a", 0, "tuple")
type(t)
for x in t:
    print(x)
```
## Set
To make a set, wrap a list with the function `set()`.
- Items in a set are unique
- Lists, dictionaries, and sets cannot be in a set
```
s = set(['a', 0])
if 'b' in s:
    print("has b")

s.add("b")
s.remove("a")

if 'b' in s:
    print("has b")

l = [1, 2, 3]
try:
    s.add(l)
except TypeError:
    print("Could not add the list")
    # raise  # uncomment this to raise an error
```
## Dictionary
To make a dictionary, use curly braces.
- A dictionary is a set of key,value pairs where the keys
are unique.
- Lists, dictionaries, and sets cannot be dictionary keys
- To iterate over a dictionary use `items`
```
# two ways to do the same thing
d = {"mother":"hamster",
"father":"elderberries"}
d = dict(mother="hamster",
father="elderberries")
d['mother']
print("the dictionary keys:", d.keys())
print()
print("the dictionary values:", d.values())
# When iterating over a dictionary, use items() and two variables:
for k, v in d.items():
    print("key: ", k, end=" ... ")
    print("val: ", v)

# If you don't you will just get the keys:
for k in d:
    print(k)
```
## Type takeaways
- Lists, tuples, dictionaries, sets all are base Python objects
- Be careful of duck typing
- Remember about copy / deepcopy
```python
# For more information, use help(object)
help(tuple)
help(set)
help()
```
## Function definition and punctuation
The syntax for creating a function is:
```python
def function_name(arg1, arg2, kwarg1=default1):
    """Docstring goes here -- triple quoted."""
    pass  # the 'pass' keyword means 'do nothing'

# The next unindented statement is outside
# of the function. Leave a blank line between the
# end of the function and the next statement.
```
- The **def** keyword begins a function declaration.
- The colon (`:`) finishes the signature.
- The body must be indented. The indentation must be exactly the same.
- There are no curly braces for function bodies in Python — white space at the beginning of a line tells Python that this line is "inside" the body of whatever came before it.
Also, at the end of a function, leave at least one blank line to separate the thought from the next thing in the script.
```
def function_name(arg1, arg2, kwarg1="my_default_value"):
    """Docstring goes here -- triple quoted."""
    pass  # the 'pass' keyword means 'do nothing'
# See the docstring appear when using `help`
help(function_name)
```
## Whitespace matters
The 'tab' character **'\t'** counts as one single character even if it looks like multiple characters in your editor.
**But indentation is how you denote nesting!**
So, this can seriously mess up your coding. The [Python style guide][pep8] recommends configuring your editor to make the tab keypress type four spaces automatically.
To set the spacing for Python code in Sublime, go to **Sublime Text** → **Preferences** → **Settings - More** → **Syntax Specific - User**
It will open up the file **Python.sublime-settings**. Please put this inside, then save and close.
```
{
    "tab_size": 4,
    "translate_tabs_to_spaces": true
}
```
[pep8]: https://www.python.org/dev/peps/pep-0008/
## Your first function
Copy this and paste it in the cell below
```python
def greet_person(person):
    """Greet the named person.

    usage:
    >>> greet_person("world")
    hello world
    """
    print('hello', person)
```
```
# Paste the function definition below:
# Here's the help statement
help(greet_person)
# And here's the function in action!
greet_person("world")
```
## Duck typing
Python's philosophy for handling data types is called **duck typing** (If it walks like a duck, and quacks like a duck, it's a duck). Functions do no type checking — they happily process an argument until something breaks. This is great for fast coding but can sometimes make for odd errors. (If you care to specify types, there is a [standard way to do it][pep484], but don't worry about this if you're a beginner.)
[pep484]: https://www.python.org/dev/peps/pep-0484/
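For instance, this hypothetical function (not part of the lesson's exercises) never checks types; anything whose elements support `len()` works:

```python
def total_length(items):
    # No isinstance checks: if each element supports len(), it quacks enough.
    return sum(len(item) for item in items)

print(total_length(["ab", "cde"]))   # a list of strings -> 5
print(total_length([(1, 2), (3,)])) # a list of tuples -> 3
```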
## Challenge for you
Create another function named `greet_people` that takes a list of people and greets them all one by one. Hint: you can call the function `greet_person`.
```
# your function
def greet_people(list_of_people):
    """Documentation string goes here."""
    # You do it here!
    pass
```
### don't peek...
```
def greet_people(list_of_people):
    for person in list_of_people:
        greet_person(person)


greet_people(["world", "awesome python user!", "rockstar!!!"])
```
## Quack quack
Make a list of all of the people in your group and use your function to greet them:
```python
people = ["King Arthur",
"Sir Galahad",
"Sir Robin"]
greet_people(people)
# What do you think will happen if I do:
greet_people("pyladies")
```
```
# Try it!
```
## WTW?
Remember strings are iterable...
<div style="text-align:center;">quack!</div>
<div style="text-align:right;">quack!</div>
## Whitespace / duck typing takeways
- Indentation is how to denote nesting in Python
- Do not use tabs; expand them to spaces
- If it walks like a duck and quacks like a duck, it's a duck
# Control structures
### Common comparison operators
<table style="border:3px solid white;"><tr>
<td><tt>==</tt></td>
<td><tt>!=</tt></td>
<td><tt><=</tt> or <tt><</tt><br/>
<tt>>=</tt> or <tt>></tt></td>
<td><tt>x in (1, 2)</tt></td>
<td><tt>x is None<br/>
x is not None</tt></td>
</tr><tr style="font-size:smaller;">
<td>equals</td>
<td>not equals</td>
<td>less or<br/>equal, etc.</td>
<td>works for sets,<br/>
lists, tuples,<br/>
dictionary keys,<br/>
strings</td>
<td>just for <tt>None</tt></td>
</tr></table>
### If statement
The `if` statement checks whether the condition after `if` is true.
Note the placement of colons (`:`) and the indentation. These are not optional.
- If it is, it does the thing below it.
- Otherwise it goes to the next comparison.
- You do not need any `elif` or `else` statements if you only
want to do something if your test condition is true.
Advanced users, there is no switch statement in Python.
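One common substitute (a sketch with hypothetical names, not part of this lesson's exercises) is a dictionary mapping keys to functions:

```python
def say_hi():
    return "hi!"

def say_bye():
    return "bye!"

# The dictionary lookup plays the role of a switch statement.
dispatch = {"greet": say_hi, "leave": say_bye}
action = "greet"
print(dispatch.get(action, lambda: "unknown action")())  # prints hi!
```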
```
# Standard if / then / else statement.
#
# Go ahead and change `i`
i = 1
if i is None:
    print("None!")
elif i % 2 == 0:
    print("`i` is an even number!")
else:
    print("`i` is neither None nor even")
# This format is for very short one-line if / then / else.
# It is called a `ternary` statement.
#
"Y" if i==1 else "N"
```
### While loop
The `while` loop requires you to set up something first. Then it
tests whether the statement after the `while` is true.
Again note the colon (`:`) and the indentation.
- If the condition is true, then the body of
the `while` loop will execute
- Otherwise it will break out of the loop and go on
to the next code underneath the `while` block
```
i = 0
while i < 3:
    print("i is:", i)
    i += 1

print("We exited the loop, and now i is:", i)
```
### For loop
The `for` loop iterates over the items after the `for`,
executing the body of the loop once per item.
```
for i in range(3):
    print("in the for loop. `i` is:", i)

print()
print("outside the for loop. `i` is:", i)

# or loop directly over a list or tuple
for element in ("one", 2, "three"):
    print("in the for loop. `element` is:", element)

print()
print("outside the for loop. `element` is:", element)
```
## Challenge for you
Please look at this code and think of what will happen, then copy it and run it. We introduce `break` and `continue`...can you tell what they do?
- When will it stop?
- What will it print out?
- What will `i` be at the end?
```python
for i in range(20):
    if i == 15:
        break
    elif i % 2 == 0:
        continue
    for j in range(5):
        print(i + j, end="...")
    print()  # newline
```
```
# Paste it here, and run!
```
# You are done, welcome to Python!
## ... and you rock!
### Now join (or start!) a friendly PyLadies group near you ...
[PyLadies locations][locations]
[locations]: http://www.pyladies.com/locations/
<div style="font-size:80%;color:#333333;text-align:center;">
<h4>Psst...contribute to this repo!</h4>
<span style="font-size:70%;">
Here is the
<a href="https://github.com/jupyter/docker-demo-images">
link to the github repo that hosts these
</a>.
Make them better!
</span>
</div>
# Part 4: Create an approximate nearest neighbor index for the item embeddings
This notebook is the fourth of five notebooks that guide you through running the [Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN](https://github.com/GoogleCloudPlatform/analytics-componentized-patterns/tree/master/retail/recommendation-system/bqml-scann) solution.
Use this notebook to create an approximate nearest neighbor (ANN) index for the item embeddings by using the [ScaNN](https://github.com/google-research/google-research/tree/master/scann) framework. You create the index as a model, train the model on AI Platform Training, then export the index to Cloud Storage so that it can be used to serve approximate nearest neighbor lookups.
Before starting this notebook, you must run the [03_create_embedding_lookup_model](03_create_embedding_lookup_model.ipynb) notebook to process the item embeddings data and export it to Cloud Storage.
After completing this notebook, run the [05_deploy_lookup_and_scann_caip](05_deploy_lookup_and_scann_caip.ipynb) notebook to deploy the solution. Once deployed, you can submit song IDs to the solution and get similar song recommendations in return, based on the ANN index.
## Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
```
!pip install -q scann
```
### Import libraries
```
import tensorflow as tf
import numpy as np
from datetime import datetime
```
### Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
+ `PROJECT_ID`: The ID of the Google Cloud project you are using to implement this solution.
+ `BUCKET`: The name of the Cloud Storage bucket you created to use with this solution. The `BUCKET` value should be just the bucket name, so `myBucket` rather than `gs://myBucket`.
+ `REGION`: The region to use for the AI Platform Training job.
```
PROJECT_ID = 'yourProject' # Change to your project.
BUCKET = 'yourBucketName' # Change to the bucket you created.
REGION = 'yourTrainingRegion' # Change to your AI Platform Training region.
EMBEDDING_FILES_PREFIX = f'gs://{BUCKET}/bqml/item_embeddings/embeddings-*'
OUTPUT_INDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'
```
### Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
```
try:
    from google.colab import auth
    auth.authenticate_user()
    print("Colab user is authenticated.")
except:
    pass
```
## Build the ANN index
Use the `build` method implemented in the [indexer.py](index_builder/builder/indexer.py) module to load the embeddings from the CSV files, create the ANN index model and train it on the embedding data, and save the SavedModel file to Cloud Storage. You pass the following three parameters to this method:
+ `embedding_files_path`, which specifies the Cloud Storage location from which to load the embedding vectors.
+ `num_leaves`, which provides the value for a hyperparameter that tunes the model based on the trade-off between retrieval latency and recall. A higher `num_leaves` value will use more data and provide better recall, but will also increase latency. If `num_leaves` is set to `None` or `0`, the `num_leaves` value is the square root of the number of items.
+ `output_dir`, which specifies the Cloud Storage location to write the ANN index SavedModel file to.
Other configuration options for the model are set based on the [rules-of-thumb](https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md#rules-of-thumb) provided by ScaNN.
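As a rough sketch of the `num_leaves` fallback described above (the helper name is hypothetical, not part of the indexer module):

```python
import math

# Hypothetical helper illustrating the documented fallback:
# if num_leaves is None or 0, use the square root of the number of items.
def resolve_num_leaves(num_items, num_leaves=None):
    if not num_leaves:
        return max(1, int(math.sqrt(num_items)))
    return num_leaves

print(resolve_num_leaves(1_000_000))       # falls back to sqrt: 1000
print(resolve_num_leaves(1_000_000, 500))  # explicit value wins: 500
```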
### Build the index locally
```
from index_builder.builder import indexer
indexer.build(EMBEDDING_FILES_PREFIX, OUTPUT_INDEX_DIR)
```
### Build the index using AI Platform Training
Submit an AI Platform Training job to build the ScaNN index at scale. The [index_builder](index_builder) directory contains the expected [training application packaging structure](https://cloud.google.com/ai-platform/training/docs/packaging-trainer) for submitting the AI Platform Training job.
```
if tf.io.gfile.exists(OUTPUT_INDEX_DIR):
    print("Removing {} contents...".format(OUTPUT_INDEX_DIR))
    tf.io.gfile.rmtree(OUTPUT_INDEX_DIR)

print("Creating output: {}".format(OUTPUT_INDEX_DIR))
tf.io.gfile.makedirs(OUTPUT_INDEX_DIR)
timestamp = datetime.utcnow().strftime('%y%m%d%H%M%S')
job_name = f'ks_bqml_build_scann_index_{timestamp}'
!gcloud ai-platform jobs submit training {job_name} \
--project={PROJECT_ID} \
--region={REGION} \
--job-dir={OUTPUT_INDEX_DIR}/jobs/ \
--package-path=index_builder/builder \
--module-name=builder.task \
--config='index_builder/config.yaml' \
--runtime-version=2.2 \
--python-version=3.7 \
--\
--embedding-files-path={EMBEDDING_FILES_PREFIX} \
--output-dir={OUTPUT_INDEX_DIR} \
--num-leaves=500
```
After the AI Platform Training job finishes, check that the `scann_index` folder has been created in your Cloud Storage bucket:
```
!gsutil ls {OUTPUT_INDEX_DIR}
```
## Test the ANN index
Test the ANN index by using the `ScaNNMatcher` class implemented in the [index_server/matching.py](index_server/matching.py) module.
Run the following code snippets to create an item embedding from randomly generated values and pass it to `scann_matcher`, which returns the item IDs for the five items that are the approximate nearest neighbors of the embedding you submitted.
```
from index_server.matching import ScaNNMatcher
scann_matcher = ScaNNMatcher(OUTPUT_INDEX_DIR)
vector = np.random.rand(50)
scann_matcher.match(vector, 5)
```
## License
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
**This is not an official Google product but sample code provided for an educational purpose**
# Principal Component Analysis on Breast Cancer Dataset
<b>Transforming the data to find out which features explain the most variance in our data</b>
<b>Import required libraries</b>
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
<b>Load dataset</b>
```
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
type(cancer)
cancer.keys()
print(cancer['DESCR'])
```
<b>Load data into a Pandas DataFrame</b>
```
df = pd.DataFrame(cancer['data'], columns= cancer['feature_names'])
df.head()
cancer['target']
cancer['target_names']
```
<b>Scale Data</b>
```
# StandardScaler chosen over MinMaxScaler.
# In clustering analyses, standardization may be especially crucial in order to
# compare similarities between features based on certain distance measures. Another
# prominent example is Principal Component Analysis (PCA), where we usually prefer
# standardization over min-max scaling, since we are interested in the components
# that maximize the variance.
# However, this doesn't mean that min-max scaling is never useful! A popular
# application is image processing, where pixel intensities have to be normalized to
# fit within a certain range (i.e., 0 to 255 for the RGB colour range). Also, a
# typical neural network algorithm requires data on a 0-1 scale.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df)
scaled_data = scaler.transform(df)
```
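As a quick, self-contained illustration of the difference discussed in the comments above (the toy array is made up for demonstration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

toy = np.array([[1.0], [2.0], [3.0], [10.0]])  # one feature with an outlier

standardized = StandardScaler().fit_transform(toy)  # zero mean, unit variance
minmaxed = MinMaxScaler().fit_transform(toy)        # squeezed into [0, 1]

print(standardized.ravel())
print(minmaxed.ravel())
```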
<b>Perform PCA</b>
```
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(scaled_data)
trans_pca = pca.transform(scaled_data)
trans_pca.shape
scaled_data.shape
```
<b>Visualizing Data</b>
```
plt.figure(figsize=(9,5))
plt.scatter(trans_pca[:,0], trans_pca[:,1])
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
plt.figure(figsize=(9,5))
plt.scatter(trans_pca[:,0], trans_pca[:,1], c= cancer['target'])
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
```
<b>Understanding principal components</b>
```
pca.components_
pcomp_df = pd.DataFrame(pca.components_, columns=cancer['feature_names'])
pcomp_df.head()
plt.figure(figsize=(9,5))
sns.heatmap(pcomp_df)
plt.figure(figsize=(9,5))
sns.heatmap(pcomp_df, cmap='plasma')
```
<b>Each principal component is shown here as a row. The higher the value (the hotter the color, toward yellow), the more correlated the component is with the corresponding feature in the columns</b>
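One way to read the heatmap numerically is to list, for each principal component, the features with the largest absolute loadings. This sketch refits the same pipeline so it runs on its own:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

cancer = load_breast_cancer()
scaled = StandardScaler().fit_transform(cancer['data'])
pca = PCA(n_components=2).fit(scaled)

# For each component (row), show the five most strongly loaded features.
for i, comp in enumerate(pca.components_):
    top = np.argsort(np.abs(comp))[::-1][:5]
    print("PC{}:".format(i + 1), list(cancer['feature_names'][top]))
```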
```
# We could also train an SVM on the PCA-transformed features for a classification problem
```
# Working with Text data
```
%matplotlib inline
from preamble import *
```
# http://ai.stanford.edu/~amaas/data/sentiment/
## Example application: Sentiment analysis of movie reviews
```
!tree -L 2 data/aclImdb
from sklearn.datasets import load_files
reviews_train = load_files("data/aclImdb/train/")
# load_files returns a bunch, containing training texts and training labels
text_train, y_train = reviews_train.data, reviews_train.target
print("type of text_train: {}".format(type(text_train)))
print("length of text_train: {}".format(len(text_train)))
print("text_train[1]:\n{}".format(text_train[1]))
text_train = [doc.replace(b"<br />", b" ") for doc in text_train]
print("Samples per class (training): {}".format(np.bincount(y_train)))
reviews_test = load_files("data/aclImdb/test/")
text_test, y_test = reviews_test.data, reviews_test.target
print("Number of documents in test data: {}".format(len(text_test)))
print("Samples per class (test): {}".format(np.bincount(y_test)))
text_test = [doc.replace(b"<br />", b" ") for doc in text_test]
```
### Representing text data as Bag of Words

#### Applying bag-of-words to a toy dataset
```
bards_words = ["The fool doth think he is wise,",
"but the wise man knows himself to be a fool"]
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
vect.fit(bards_words)
print("Vocabulary size: {}".format(len(vect.vocabulary_)))
print("Vocabulary content:\n {}".format(vect.vocabulary_))
bag_of_words = vect.transform(bards_words)
print("bag_of_words: {}".format(repr(bag_of_words)))
print("Dense representation of bag_of_words:\n{}".format(
bag_of_words.toarray()))
vect.get_feature_names()
vect.inverse_transform(bag_of_words)
```
### Bag-of-word for movie reviews
```
vect = CountVectorizer().fit(text_train)
X_train = vect.transform(text_train)
print("X_train:\n{}".format(repr(X_train)))
feature_names = vect.get_feature_names()
print("Number of features: {}".format(len(feature_names)))
print("First 20 features:\n{}".format(feature_names[:20]))
print("Features 20010 to 20030:\n{}".format(feature_names[20010:20030]))
print("Every 2000th feature:\n{}".format(feature_names[::2000]))
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
scores = cross_val_score(LogisticRegression(), X_train, y_train, cv=5)
print("Mean cross-validation accuracy: {:.2f}".format(np.mean(scores)))
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
print("Best parameters: ", grid.best_params_)
X_test = vect.transform(text_test)
print("Test score: {:.2f}".format(grid.score(X_test, y_test)))
vect = CountVectorizer(min_df=5).fit(text_train)
X_train = vect.transform(text_train)
print("X_train with min_df: {}".format(repr(X_train)))
feature_names = vect.get_feature_names()
print("First 50 features:\n{}".format(feature_names[:50]))
print("Features 20010 to 20030:\n{}".format(feature_names[20010:20030]))
print("Every 700th feature:\n{}".format(feature_names[::700]))
grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
```
### Stop-words
```
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
print("Number of stop words: {}".format(len(ENGLISH_STOP_WORDS)))
print("Every 10th stopword:\n{}".format(list(ENGLISH_STOP_WORDS)[::10]))
# specifying stop_words="english" uses the built-in list.
# We could also augment it and pass our own.
vect = CountVectorizer(min_df=5, stop_words="english").fit(text_train)
X_train = vect.transform(text_train)
print("X_train with stop words:\n{}".format(repr(X_train)))
grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
```
### Rescaling the data with TFIDF
\begin{equation*}
\text{tfidf}(w, d) = \text{tf}(w, d) \cdot \left(\log\big(\frac{N + 1}{N_w + 1}\big) + 1\right)
\end{equation*}
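A tiny sanity check of this formula with `TfidfVectorizer` (`norm=None` disables the length normalization that is applied on top of it by default; the two-document corpus is made up):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat", "the dog"]
vect = TfidfVectorizer(norm=None)
X = vect.fit_transform(docs).toarray()

# N = 2 documents. "the" occurs in both:  idf = log(3 / 3) + 1 = 1
# "cat" occurs in one document:           idf = log(3 / 2) + 1 ≈ 1.405
# With tf = 1, the tfidf values equal the idf values.
print(vect.vocabulary_)
print(X[0])
```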
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(TfidfVectorizer(min_df=5),
LogisticRegression())
param_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10]}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(text_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
vectorizer = grid.best_estimator_.named_steps["tfidfvectorizer"]
# transform the training dataset:
X_train = vectorizer.transform(text_train)
# find maximum value for each of the features over dataset:
max_value = X_train.max(axis=0).toarray().ravel()
sorted_by_tfidf = max_value.argsort()
# get feature names
feature_names = np.array(vectorizer.get_feature_names())
print("Features with lowest tfidf:\n{}".format(
feature_names[sorted_by_tfidf[:20]]))
print("Features with highest tfidf: \n{}".format(
feature_names[sorted_by_tfidf[-20:]]))
sorted_by_idf = np.argsort(vectorizer.idf_)
print("Features with lowest idf:\n{}".format(
feature_names[sorted_by_idf[:100]]))
```
#### Investigating model coefficients
```
plt.figure(figsize=(20, 5), dpi=300)
mglearn.tools.visualize_coefficients(
grid.best_estimator_.named_steps["logisticregression"].coef_,
feature_names, n_top_features=40)
```
# Bag of words with more than one word (n-grams)
```
print("bards_words:\n{}".format(bards_words))
cv = CountVectorizer(ngram_range=(1, 1)).fit(bards_words)
print("Vocabulary size: {}".format(len(cv.vocabulary_)))
print("Vocabulary:\n{}".format(cv.get_feature_names()))
cv = CountVectorizer(ngram_range=(2, 2)).fit(bards_words)
print("Vocabulary size: {}".format(len(cv.vocabulary_)))
print("Vocabulary:\n{}".format(cv.get_feature_names()))
print("Transformed data (dense):\n{}".format(cv.transform(bards_words).toarray()))
cv = CountVectorizer(ngram_range=(1, 3)).fit(bards_words)
print("Vocabulary size: {}".format(len(cv.vocabulary_)))
print("Vocabulary:{}\n".format(cv.get_feature_names()))
pipe = make_pipeline(TfidfVectorizer(min_df=5), LogisticRegression())
# running the grid-search takes a long time because of the
# relatively large grid and the inclusion of trigrams
param_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (1, 3)]}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(text_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
print("Best parameters:\n{}".format(grid.best_params_))
len(CountVectorizer().fit(text_train).get_feature_names())
len(CountVectorizer(min_df=5).fit(text_train).get_feature_names())
len(CountVectorizer(ngram_range=(1, 2)).fit(text_train).get_feature_names())
len(CountVectorizer(ngram_range=(1, 2), min_df=5).fit(text_train).get_feature_names())
len(CountVectorizer(ngram_range=(1, 2), min_df=5, stop_words="english").fit(text_train).get_feature_names())
# extract scores from grid_search
scores = grid.cv_results_['mean_test_score'].reshape(-1, 3).T
# visualize heatmap
heatmap = mglearn.tools.heatmap(
scores, xlabel="C", ylabel="ngram_range", cmap="viridis", fmt="%.3f",
xticklabels=param_grid['logisticregression__C'],
yticklabels=param_grid['tfidfvectorizer__ngram_range'])
plt.colorbar(heatmap)
# extract feature names and coefficients
vect = grid.best_estimator_.named_steps['tfidfvectorizer']
feature_names = np.array(vect.get_feature_names())
coef = grid.best_estimator_.named_steps['logisticregression'].coef_
mglearn.tools.visualize_coefficients(coef, feature_names, n_top_features=40)
plt.ylim(-22, 22)
# find 3-gram features
mask = np.array([len(feature.split(" ")) for feature in feature_names]) == 3
# visualize only 3-gram features:
mglearn.tools.visualize_coefficients(coef.ravel()[mask],
feature_names[mask], n_top_features=40)
plt.ylim(-22, 22)
```
# Exercise
Compare unigram and bigram models on the 20 newsgroups dataset
```
from sklearn.datasets import fetch_20newsgroups
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
remove = ('headers', 'footers', 'quotes')
data_train = fetch_20newsgroups(subset='train', categories=categories,
shuffle=True, random_state=42,
remove=remove)
data_test = fetch_20newsgroups(subset='test', categories=categories,
shuffle=True, random_state=42,
remove=remove)
data_train.data[0]
```
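A minimal sketch of one way to run the comparison. The tiny stand-in corpus below is made up so the cell runs anywhere; for the actual exercise, substitute `data_train.data` and `data_train.target`:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Made-up stand-in corpus; replace with data_train.data / data_train.target.
texts = ["rocket launch into space", "the rocket reached the moon",
         "render the graphics card", "the graphics card renders frames"] * 5
labels = [0, 0, 1, 1] * 5

for ngrams in [(1, 1), (1, 2)]:
    pipe = make_pipeline(CountVectorizer(ngram_range=ngrams), LogisticRegression())
    score = cross_val_score(pipe, texts, labels, cv=3).mean()
    print("ngram_range={}: mean CV accuracy {:.2f}".format(ngrams, score))
```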
# Day 24 - Cellular automaton
We are back to [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton), in a finite 2D grid, just like [day 18 of 2018](../2018/Day%2018.ipynb). I'll use similar techniques, with [`scipy.signal.convolve2d()`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve2d.html) to turn neighbor counts into the next state. Our state is simpler, a simple on or off, so we can use simple boolean selections here.
```
from __future__ import annotations
from typing import Set, Sequence, Tuple
import numpy as np
from scipy.signal import convolve2d
def readmap(maplines: Sequence[str]) -> np.array:
    return np.array([
        c == "#" for line in maplines for c in line
    ]).reshape((5, -1))

def biodiversity_rating(matrix: np.array) -> int:
    # booleans -> single int by multiplying with powers of 2, then summing
    return (
        matrix.reshape((-1)) *
        np.logspace(0, matrix.size - 1, num=matrix.size, base=2, dtype=np.uint)
    ).sum()

def find_repeat(matrix: np.array) -> int:
    # the four adjacent tiles matter, not the diagonals
    kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    # previous states seen (matrix flattened to a tuple)
    seen: Set[Tuple] = set()
    while True:
        counts = convolve2d(matrix, kernel, mode='same')
        matrix = (
            # A bug dies (becoming an empty space) unless there is exactly one bug adjacent to it.
            (matrix & (counts == 1)) |
            # An empty space becomes infested with a bug if exactly one or two bugs are adjacent to it.
            (~matrix & ((counts == 1) | (counts == 2)))
        )
        key = tuple(matrix.flatten())
        if key in seen:
            return biodiversity_rating(matrix)
        seen.add(key)

test_matrix = readmap("""\
....#
#..#.
#..##
..#..
#....""".splitlines())

assert find_repeat(test_matrix) == 2129920
import aocd
data = aocd.get_data(day=24, year=2019)
erismap = readmap(data.splitlines())
print("Part 1:", find_repeat(erismap))
# how fast is this?
%timeit find_repeat(erismap)
```
## Part 2, adding a 3rd dimension
I'm not sure if we might be able to use [`scipy.signal.convolve()`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve.html#scipy.signal.convolve) (the N-dimensional variant of `convolve2d()`) to count neighbours across multiple layers in one go. It works for counting neighbours across a single layer however, and for 200 steps, the additional 8 computations are not exactly strenuous.
I'm creating all layers needed to fit all the steps. An empty layer is filled across 2 steps; first the inner ring, then the outer ring, at which point another layer is needed. So for 200 steps we need 100 layers below and a 100 layers above, ending up with 201 layers. These are added by using [np.pad()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html).
Then use `convolve()` to count neighbours on the same level, and a few sums for additional counts from the levels above and below.
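The layer arithmetic from the paragraph above can be sanity-checked in isolation (the helper name is mine, not part of the puzzle code):

```python
def total_layers(steps: int) -> int:
    # A new layer only starts filling every 2 steps (inner ring first,
    # then outer ring), so pad (steps + 1) // 2 layers on each side.
    pad = (steps + 1) // 2
    return 2 * pad + 1

print(total_layers(200))  # 100 layers below + 100 above + the original = 201
```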
```
from scipy.signal import convolve
def run_multidimensional(matrix: np.array, steps: int = 200) -> int:
    # 3d kernel; only those on the same level, not above or below
    kernel = np.array([
        [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
        [[0, 1, 0], [1, 0, 1], [0, 1, 0]],
        [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
    ])
    matrix = np.pad(matrix[None], [((steps + 1) // 2,), (0,), (0,)])
    for _ in range(steps):
        # count neighbours on the same layer, then clear the hole
        counts = convolve(matrix, kernel, mode='same')
        counts[:, 2, 2] = 0
        # layer below, counts[:-1, ...] are updated from kernel[1:, ...].sum()s
        counts[:-1, 1, 2] += matrix[1:, 0, :].sum(axis=1)   # cell above hole += top row next level
        counts[:-1, 3, 2] += matrix[1:, -1, :].sum(axis=1)  # cell below hole += bottom row next level
        counts[:-1, 2, 1] += matrix[1:, :, 0].sum(axis=1)   # cell left of hole += left column next level
        counts[:-1, 2, 3] += matrix[1:, :, -1].sum(axis=1)  # cell right of hole += right column next level
        # layer above, counts[1:, ...] slices are updated from kernel[:-1, ...] indices (true -> 1)
        counts[1:, 0, :] += matrix[:-1, 1, 2, None]   # top row += cell above hole next level
        counts[1:, -1, :] += matrix[:-1, 3, 2, None]  # bottom row += cell below hole next level
        counts[1:, :, 0] += matrix[:-1, 2, 1, None]   # left column += cell left of hole next level
        counts[1:, :, -1] += matrix[:-1, 2, 3, None]  # right column += cell right of hole next level
        # next step is the same as part 1:
        matrix = (
            # A bug dies (becoming an empty space) unless there is exactly one bug adjacent to it.
            (matrix & (counts == 1)) |
            # An empty space becomes infested with a bug if exactly one or two bugs are adjacent to it.
            (~matrix & ((counts == 1) | (counts == 2)))
        )
    return matrix.sum()
assert run_multidimensional(test_matrix, 10) == 99
print("Part 2:", run_multidimensional(erismap))
# how fast is this?
%timeit run_multidimensional(erismap)
```
```
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
np.random.seed(123)
```
---
# Lecture 11: Regression discontinuity
---
---
## Lee (2008)
---
The author studies the "incumbency advantage", i.e. the overall causal impact of being the current incumbent party in a district on the votes obtained in the district's election.
* Lee, David S. (2008). Randomized experiments from non-random selection in U.S. House elections. Journal of Econometrics.
```
df_base = pd.read_csv("../../datasets/processed/msc/house.csv")
df_base.head()
```
---
## What are the basic characteristics of the dataset?
---
```
df_base.plot.scatter(x=0, y=1)
```
What is the re-election rate?
```
pd.crosstab(
df_base.vote_last > 0.0,
df_base.vote_next > 0.5,
margins=True,
normalize="columns",
)
```
---
## Regression discontinuity design
---
What does the average vote in the next election look like as we move along last year's vote share?
```
df_base["bin"] = pd.cut(df_base.vote_last, 200, labels=False)
df_base.groupby("bin").vote_next.mean().plot()
```
Now we turn to an explicit model of the conditional mean.
```
def fit_regression(incumbent, level=4):
    assert incumbent in ["republican", "democratic"]

    if incumbent == "republican":
        df_incumbent = df_base.loc[df_base.vote_last < 0.0, :]
    else:
        df_incumbent = df_base.loc[df_base.vote_last > 0.0, :]

    for level in range(2, level + 1):
        label = "vote_last_{:}".format(level)
        df_incumbent.loc[:, label] = df_incumbent["vote_last"] ** level

    formula = "vote_next ~ vote_last + vote_last_2 + vote_last_3 + vote_last_4"
    rslt = smf.ols(formula=formula, data=df_incumbent).fit()

    return rslt

for incumbent in ["republican", "democratic"]:
    rslt = fit_regression(incumbent, level=4)
    title = "\n\n {:}\n".format(incumbent.capitalize())
    print(title, rslt.summary())
```
What do the predictions look like?
```
for incumbent in ["republican", "democratic"]:
    rslt = fit_regression(incumbent, level=4)

    # For our predictions, we need to set up a grid for the evaluation.
    if incumbent == "republican":
        grid = np.linspace(-0.5, 0.0, 100)
    else:
        grid = np.linspace(+0.0, 0.5, 100)

    df_grid = pd.DataFrame(grid, columns=["vote_last"])
    for level in range(2, 5):
        label = "vote_last_{:}".format(level)
        df_grid.loc[:, label] = df_grid["vote_last"] ** level

    ax = rslt.predict(df_grid).plot(title=incumbent.capitalize())
    plt.show()
```
We can now compute the difference at the cutoffs to get an estimate for the treatment effect.
```
before_cutoff = df_base.groupby("bin").vote_next.mean()[99]
after_cutoff = df_base.groupby("bin").vote_next.mean()[100]
effect = after_cutoff - before_cutoff
print("Treatment Effect: {:5.3f}%".format(effect * 100))
```
---
## How does the estimated treatment effect depend on the choice of the bin width?
---
```
for num_bins in [100, 200]:
    df = df_base.copy(deep=True)
    df["bin"] = pd.cut(df_base.vote_last, num_bins, labels=False)
    info = df.groupby("bin").vote_next.mean()

    lower = num_bins // 2 - 1  # integer division, so the bin label stays an int
    effect = info[lower + 1] - info[lower]
    print(
        " Number of bins: {:}, Width {:>5}, Effect {:5.2f}%".format(
            num_bins, 1.0 / num_bins, effect * 100
        )
    )
```
---
## Regression
---
There are several alternatives to estimate the conditional mean functions.
* pooled regressions
* local linear regressions
```
# It will be useful to split the sample by the cutoff value
# for easier access going forward.
df_base["D"] = df_base.vote_last > 0
```
### Pooled regression
We estimate the conditional mean using a single regression on the whole sample.
\begin{align*}
Y = \alpha_r + \tau D + \beta X + \epsilon
\end{align*}
This allows for a difference in levels but not slope.
```
smf.ols(formula="vote_next ~ vote_last + D", data=df_base).fit().summary()
```
### Local linear regression
We now turn to local regressions by restricting the estimation to observations close to the cutoff.
\begin{align*}
Y = \alpha_r + \tau D + \beta X + \gamma X D + \epsilon,
\end{align*}
where $-h \geq X \geq h$. This allows for a difference in levels and slope.
```
for h in [0.3, 0.2, 0.1, 0.05, 0.01]:
    # We restrict the sample to observations close
    # to the cutoff.
    df = df_base[df_base.vote_last.between(-h, h)]

    formula = "vote_next ~ D + vote_last + D * vote_last"
    rslt = smf.ols(formula=formula, data=df).fit()

    info = [h, rslt.params[1] * 100, rslt.pvalues[1]]
    print(
        " Bandwidth: {:>4} Effect {:5.3f}% pvalue {:5.3f}".format(*info)
    )
```
There is some literature that can guide the choice of the bandwidth.
Now, let's return to the slides to summarize the key issues and review some best practices.
---
## Resources
---
* **Lee, D. S. (2008)**. [Randomized experiments from non-random selection in us house elections](https://reader.elsevier.com/reader/sd/pii/S0304407607001121?token=B2B8292E08E07683C3CAFB853380CD4C1E5D1FD17982228079F6EE672298456ED7D6692F0598AA50D54463AC0A849065). In *Journal of Econometrics*, 142(2), 675–697.
* **Lee, D. S., and Lemieux, T. (2010)**. [Regression discontinuity designs in economics](https://www.princeton.edu/~davidlee/wp/RDDEconomics.pdf). In *Journal of Economic Literature*, 48(2), 281–355.
```
"""
Please run notebook locally (if you have all the dependencies and a GPU).
Technically you can run this notebook on Google Colab but you need to set up microphone for Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
5. Set up microphone for Colab
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg portaudio19-dev
!pip install unidecode
!pip install pyaudio
# ## Install NeMo
BRANCH = 'main'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio>=0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
```
This notebook demonstrates offline and online (from a microphone's stream in NeMo) speech commands recognition.
The notebook requires the PyAudio library to get a signal from an audio device.
For Ubuntu, please run the following commands to install it:
```
sudo apt-get install -y portaudio19-dev
pip install pyaudio
```
This notebook requires the `torchaudio` library to be installed for MatchboxNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audio#installation) to install the appropriate version of torchaudio.
If you would like to install the latest version, please run the following command to install it:
```
conda install -c pytorch torchaudio
```
```
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
```
## Restore the model from NGC
```
mbn_model = nemo_asr.models.EncDecClassificationModel.from_pretrained("commandrecognition_en_matchboxnet3x1x64_v2")
```
Since the MatchboxNet speech commands model doesn't consider the non-speech scenario,
here we use a Voice Activity Detection (VAD) model to help reduce false alarms caused by background noise and silence. When speech activity is detected, the speech command inference is activated.
**Please note the VAD model is not perfect for various microphone inputs, and you might need to finetune it on your input and play with different parameters.**
```
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
```
## Observing the config of the model
```
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
vad_cfg = copy.deepcopy(vad_model._cfg)
mbn_cfg = copy.deepcopy(mbn_model._cfg)
print(OmegaConf.to_yaml(mbn_cfg))
```
## What classes can this model recognize?
Before we begin inference on the actual audio stream, let's look at what are the classes this model was trained to recognize.
**The MatchboxNet model is not designed to recognize out-of-vocabulary (OOV) words.**
```
labels = mbn_cfg.labels
for i in range(len(labels)):
    print('%-10s' % (labels[i]), end=' ')
```
## Setup preprocessor with these settings
```
# Set model to inference mode
mbn_model.eval();
vad_model.eval();
```
## Setting up data for Streaming Inference
```
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
    @property
    def output_types(self):
        return {
            'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
            'a_sig_length': NeuralType(tuple('B'), LengthsType()),
        }

    def __init__(self, sample_rate):
        super().__init__()
        self._sample_rate = sample_rate
        self.output = True

    def __iter__(self):
        return self

    def __next__(self):
        if not self.output:
            raise StopIteration
        self.output = False
        return torch.as_tensor(self.signal, dtype=torch.float32), \
            torch.as_tensor(self.signal_shape, dtype=torch.int64)

    def set_signal(self, signal):
        self.signal = signal.astype(np.float32) / 32768.
        self.signal_shape = self.signal.size
        self.output = True

    def __len__(self):
        return 1

data_layer = AudioDataLayer(sample_rate=mbn_cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
```
## Inference method for audio signal (single instance)
```
def infer_signal(model, signal):
    data_layer.set_signal(signal)
    batch = next(iter(data_loader))
    audio_signal, audio_signal_len = batch
    audio_signal, audio_signal_len = audio_signal.to(model.device), audio_signal_len.to(model.device)
    logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
    return logits
```
We don't include postprocessing techniques here.
```
# class for streaming frame-based ASR
# 1) use reset() method to reset FrameASR's state
# 2) call transcribe(frame) to do ASR on
# contiguous signal's frames
class FrameASR:
def __init__(self, model_definition,
frame_len=2, frame_overlap=2.5,
offset=0):
'''
Args:
frame_len (seconds): Frame's duration
frame_overlap (seconds): Duration of overlaps before and after current frame.
offset: Number of symbols to drop for smooth streaming.
'''
self.task = model_definition['task']
self.vocab = list(model_definition['labels'])
self.sr = model_definition['sample_rate']
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
@torch.no_grad()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
if self.task == 'mbn':
logits = infer_signal(mbn_model, self.buffer).to('cpu').numpy()[0]
decoded = self._mbn_greedy_decoder(logits, self.vocab)
elif self.task == 'vad':
logits = infer_signal(vad_model, self.buffer).to('cpu').numpy()[0]
decoded = self._vad_greedy_decoder(logits, self.vocab)
else:
raise ValueError("Task should be either 'mbn' or 'vad'!")
return decoded[:len(decoded)-offset]
def transcribe(self, frame=None,merge=False):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.mbn_s = []
self.vad_s = []
@staticmethod
def _mbn_greedy_decoder(logits, vocab):
mbn_s = []
if logits.shape[0]:
class_idx = np.argmax(logits)
class_label = vocab[class_idx]
mbn_s.append(class_label)
return mbn_s
@staticmethod
def _vad_greedy_decoder(logits, vocab):
vad_s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, preds = torch.max(probs, dim=-1)
vad_s = [preds.item(), str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return vad_s
```
# Streaming Inference
## Offline inference
Here we show an example of offline streaming inference. You can use your own file or download the provided demo audio file.
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.
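As a rough sketch (assuming the 16 kHz sample rate used in this tutorial), here is how STEP and WINDOW_SIZE translate into the sample counts that FrameASR maintains internally:

```
# Illustrative only: FrameASR computes the same quantities in its __init__
SAMPLE_RATE = 16000
STEP = 0.25          # seconds of new audio consumed per inference call
WINDOW_SIZE = 1.28   # seconds of audio the model actually sees

n_frame_len = int(STEP * SAMPLE_RATE)                          # 4000 samples
n_frame_overlap = int((WINDOW_SIZE - STEP) / 2 * SAMPLE_RATE)  # 8240 samples each side
buffer_len = 2 * n_frame_overlap + n_frame_len                 # 20480 samples = 1.28 s
print(n_frame_len, n_frame_overlap, buffer_len)
```

With a 1.28 s window and a 0.25 s step, each inference call reuses roughly 80% of the previous buffer, which is what makes the streaming output smooth.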
```
STEP = 0.25
WINDOW_SIZE = 1.28 # input segment length for NN we used for training
import wave
def offline_inference(wave_file, STEP = 0.25, WINDOW_SIZE = 0.31):
"""
Args:
wav_file: wave file to perform inference on.
STEP: infer every STEP seconds
WINDOW_SIZE: length of audio to be sent to the NN.
"""
FRAME_LEN = STEP
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = SAMPLE_RATE # sample rate, 16000 Hz
CHUNK_SIZE = int(FRAME_LEN * SAMPLE_RATE)
mbn = FrameASR(model_definition = {
'task': 'mbn',
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': mbn_cfg.preprocessor,
'JasperEncoder': mbn_cfg.encoder,
'labels': mbn_cfg.labels
},
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE - FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
    signal = np.frombuffer(data, dtype=np.int16)
    mbn_result = mbn.transcribe(signal)
    if len(mbn_result):
        print(mbn_result)
    data = wf.readframes(CHUNK_SIZE)  # read the next chunk
mbn.reset()
demo_wave = 'SpeechCommands_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/SpeechCommands_demo.wav"
wave_file = demo_wave
CHANNELS = 1
audio, sample_rate = librosa.load(wave_file, sr=SAMPLE_RATE)
dur = librosa.get_duration(audio)
print(dur)
ipd.Audio(audio, rate=sample_rate)
# Ground-truth is Yes No
offline_inference(wave_file, STEP, WINDOW_SIZE)
```
## Online inference through microphone
Please note that the MatchBoxNet and VAD models are not perfect for every microphone input, and you might need to fine-tune them on your input and experiment with different parameters. \
**We also recommend using headphones.**
```
vad_threshold = 0.8
STEP = 0.1
WINDOW_SIZE = 0.15
mbn_WINDOW_SIZE = 1
CHANNELS = 1
RATE = SAMPLE_RATE
FRAME_LEN = STEP # use step of vad inference as frame len
CHUNK_SIZE = int(STEP * RATE)
vad = FrameASR(model_definition = {
'task': 'vad',
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': vad_cfg.preprocessor,
'JasperEncoder': vad_cfg.encoder,
'labels': vad_cfg.labels
},
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
mbn = FrameASR(model_definition = {
'task': 'mbn',
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': mbn_cfg.preprocessor,
'JasperEncoder': mbn_cfg.encoder,
'labels': mbn_cfg.labels
},
frame_len=FRAME_LEN, frame_overlap = (mbn_WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
vad.reset()
mbn.reset()
# Setup input device
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
def callback(in_data, frame_count, time_info, status):
"""
callback function for streaming audio and performing inference
"""
signal = np.frombuffer(in_data, dtype=np.int16)
vad_result = vad.transcribe(signal)
mbn_result = mbn.transcribe(signal)
if len(vad_result):
# if speech prob is higher than threshold, we decide it contains speech utterance
# and activate MatchBoxNet
if vad_result[3] >= vad_threshold:
print(mbn_result) # print mbn result when speech present
else:
print("no-speech")
return (in_data, pa.paContinue)
# streaming
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
```
## ONNX Deployment
You can also export the model to an ONNX file and deploy it to the TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
```
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
```
Then just replace `infer_signal` implementation with this code:
```
import onnxruntime
mbn_model.export('mbn.onnx')
ort_session = onnxruntime.InferenceSession('mbn.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(mbn_model.device), audio_signal_len.to(mbn_model.device)
processed_signal, processed_signal_len = mbn_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
```
```
import matplotlib.pyplot as plt
import numpy as np
from os import mkdir
from os.path import join
bov_counter = 0
def writeBOV(g):
"""g is presumed to be a numpy 2D array of doubles"""
global bov_counter
bovNm = 'file_%03d.bov' % bov_counter
dataNm = 'file_%03d.doubles' % bov_counter
bov_counter += 1
try:
mkdir('frames')
except FileExistsError:
pass
with open(join('frames', bovNm), 'w') as f:
f.write('TIME: %g\n' % float(bov_counter))
f.write('DATA_FILE: %s\n' % dataNm)
f.write('DATA_SIZE: %d %d 1\n' % g.shape)
f.write('DATA_FORMAT: DOUBLE\n')
f.write('VARIABLE: U\n')
f.write('DATA_ENDIAN: LITTLE\n')
f.write('CENTERING: ZONAL\n')
f.write('BRICK_ORIGIN: 0. 0. 0.\n')
f.write('BRICK_SIZE: 1.0 1.0 1.0\n')
with open(join('frames', dataNm), 'w') as f:
g.T.tofile(f) # BOV format expects Fortran order
#
# Scaling constants
#
# You'll have to pick a value for dt which produces stable evolution
# for your stencil!
XDIM = 101
YDIM = 101
tMax = 5.0
dx = 0.1
dy = 0.1
dt = 0.025 # FIX ME!
vel = 1.0
xMin = -(XDIM//2)*dx
yMin = -(YDIM//2)*dy
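# Stability note (added sketch, not part of the original template): the
# explicit 2D wave-equation stencil below is stable only when the CFL
# condition (vel*dt)**2 * (1/dx**2 + 1/dy**2) <= 1 holds. With
# dx = dy = 0.1 and vel = 1.0 this means dt <= dx/(vel*sqrt(2)) ~= 0.0707,
# so dt = 0.025 chosen above is safely inside the stable region.
cfl_sq = (vel*dt)**2 * (1.0/dx**2 + 1.0/dy**2)
assert cfl_sq <= 1.0, "dt too large for a stable explicit stencil"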
def initialize():
"""Create the grid and apply the initial condition"""
U = np.zeros([YDIM, XDIM]) # We just use this for shape
ctrX= 0.0
ctrY= 0.0
sigma= 0.25
maxU= 5.0
grid = np.indices(U.shape)
x = (grid[1] * dx) + xMin # a full grid of X coordinates
y = (grid[0] * dy) + yMin # a full grid of Y coordinates
distSqr = np.square(x - ctrX) + np.square(y - ctrY)
U = maxU * np.exp(-distSqr/(sigma*sigma))
return U
# test writeBOV
bov_counter = 0
writeBOV(initialize())
def doTimeStep(U, UOld):
"""
Step your solution forward in time. You need to calculate
UNew in the grid area [1:-1, 1:-1]. The 'patch the boundaries'
bit below will take care of the edges at i=0, i=XDIM-1, j=0,
and j=YDIM-1. Note that the array indices are ordered like U[j][i]!
"""
xRatioSqr= (dt*dt*vel*vel)/(dx*dx)
yRatioSqr= (dt*dt*vel*vel)/(dy*dy)
UNew = np.empty_like(U)
dxxterm = xRatioSqr * (U[1:-1, 2:] + U[1:-1, 0:-2] - 2*U[1:-1, 1:-1])
dyyterm = yRatioSqr * (U[2:, 1:-1] + U[0:-2, 1:-1] - 2*U[1:-1, 1:-1])
UNew[1:-1, 1:-1] = 2*U[1:-1,1:-1] + (dxxterm + dyyterm) - UOld[1:-1, 1:-1]
# Patch the boundaries. This mapping makes the surface into a torus.
UNew[:, 0] = UNew[:, 1]
UNew[:, -1] = UNew[:, -2]
UNew[0, :] = UNew[1, :]
UNew[-1, :] = UNew[-2, :]
return UNew
def timeToOutput(t, count):
"""A little test to tell how often to dump output"""
return (count % 4 == 0)
U = initialize()
UOld = np.copy(U)
t = 0.0
count = 0
while t < tMax:
if timeToOutput(t, count):
writeBOV(U)
print ('Output at t = %s: min = %f, max = %f'
% (t, np.amin(U), np.amax(U)))
UNew = doTimeStep(U, UOld)
UOld = U
U = UNew
t += dt
count += 1
```
# Text Processing Homework 2
Consider a binary classification problem. We are given two lists of names, male and female, and we need to build a classifier that determines whether a given name is male or female.
Data:
* Female names: female.txt
* Male names: male.txt
```
# plots
from matplotlib import pyplot as plt
import seaborn as sns
from pylab import rcParams
from plotly.offline import init_notebook_mode, iplot
import plotly
import plotly.graph_objs as go
init_notebook_mode(connected=True)
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
%pylab inline
%config InlineBackend.figure_format = 'png'
rcParams['figure.figsize'] = (16, 6)
! ls
import pandas as pd
import numpy as np
```
## Part 1. Data Preprocessing
1. Remove ambiguous names (those that are both male and female), if there are any;
2. Create training and test sets so that the classes in the training set are balanced, i.e. each class contains the same number of names;
```
df_male = pd.read_csv('male.txt', sep=",", header=None, names=['name'])
display(df_male.head(), df_male.describe(), df_male.info())
df_female = pd.read_csv('female.txt', sep=",", header=None, names=['name'])
display(df_female.head(), df_female.describe(), df_female.info())
df_male['male'] = 1
df_female['male'] = 0
df_all = pd.concat([df_male, df_female], ignore_index=True)
df_all['name'] = df_all['name'].str.lower()
df_all.drop_duplicates(subset=['name'], inplace=True, keep=False)
display(df_all.head(), df_all.describe(), df_all.info())
from sklearn.model_selection import train_test_split
df_train, df_test = train_test_split(df_all, test_size=0.2, random_state=42, stratify=df_all['male'])
df_train.reset_index(inplace = True, drop = True)
df_test.reset_index(inplace = True, drop = True)
display(df_train["male"].value_counts(), df_test["male"].value_counts())
```
## Part 2. Baseline Classification Method
Use naive Bayes or logistic regression to classify the names, using character $n$-grams as features. Compare the results obtained for $n=2,3,4$ in terms of $F$-measure and accuracy. In which cases does the method make mistakes?
To generate the $n$-grams, use:
```
# from nltk.util import ngrams
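# A pure-Python sketch of character n-grams (equivalent to what
# nltk.util.ngrams produces for strings, without the nltk dependency):
def char_ngrams(s, n):
    return [s[i:i+n] for i in range(len(s) - n + 1)]
# e.g. char_ngrams("anna", 2) -> ['an', 'nn', 'na']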
from sklearn.metrics import *
def print_score(y_test, y_pred):
print("Accuracy: {0:.2f}".format(accuracy_score(y_test, y_pred)))
print("F1-measure: {0:.2f}".format(f1_score(y_test, y_pred, average='macro')))
print("Precision: {0:.2f}".format(precision_score(y_test, y_pred)))
print("Recall: {0:.2f}".format(recall_score(y_test, y_pred)))
print(classification_report(y_test, y_pred, target_names=['female', 'male']))
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
clf = Pipeline([
('vectorizer', CountVectorizer(analyzer='char_wb')),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
params = {
'vectorizer__ngram_range': [(1, 1), (1, 3), (2, 2), (2, 3), (2, 4)],
'tfidf__use_idf': (True, False),
'clf__alpha': (0.001, 0.01, 0.1, 1)
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
clf = GridSearchCV(clf, params, scoring='f1', cv=cv, n_jobs=-1)
clf.fit(df_train['name'], df_train['male'])
predictions = clf.best_estimator_.predict(df_test.name)
print_score(df_test['male'].values, predictions)
```
## Part 3. Neural Network
Use a recurrent neural network with LSTM to solve the task. It may contain several LSTM layers, as well as several Bidirectional(LSTM) layers. The network has a single output that determines the class of the name.
The name representation for classification in this case is a binary matrix of size (number of letters in the alphabet $\times$ maximum name length). Denote it by $x$. If the first letter of the name is a, then $x[1][1] = 1$; if the second is b, then $x[2][1] = 1$.
Don't forget to regularize the neural network with dropout.
Compare the classification results of the different methods. Which method is better and why?
Compare the results obtained with different dropout values and different numbers of units in the network layers, in terms of $F$-measure and accuracy. In which cases does the neural network make mistakes?
If you cannot manage to program the neural network yourself, see the tutorial at: https://github.com/divamgupta/lstm-gender-predictor
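As a minimal sketch of this one-hot representation (the fixed lowercase alphabet and the `encode_name` helper here are illustrative assumptions; the cells below build the character index from the training data instead):

```
import numpy as np

alphabet = "abcdefghijklmnopqrstuvwxyz"
char_indices = {c: i for i, c in enumerate(alphabet)}

def encode_name(name, max_len):
    """One-hot encode a name into a (max_len x alphabet size) binary matrix."""
    x = np.zeros((max_len, len(alphabet)), dtype=np.int64)
    for t, char in enumerate(name):
        x[t, char_indices[char]] = 1
    return x

m = encode_name("anna", 10)
print(m.shape)  # one row per character position, one column per letter
```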
```
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Bidirectional
from keras.layers import LSTM
from keras.utils import to_categorical
longest_name_length = df_all['name'].str.len().max()
chars = sorted(list(set("".join(df_train['name']))))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
X_train = np.zeros((len(df_train), longest_name_length, len(chars)), dtype=np.int64)
y_train = np.zeros((len(df_train), 1), dtype=np.int64)
for i in range(len(df_train)):
for t, char in enumerate(df_train['name'][i]):
X_train[i, t, char_indices[char]] = 1
y_train[i] = df_train['male'][i]
X_test = np.zeros((len(df_test), longest_name_length, len(chars)), dtype=np.int64)
y_test = np.zeros((len(df_test), 1), dtype=np.int64)
for i in range(len(df_test)):
for t, char in enumerate(df_test['name'][i]):
X_test[i, t, char_indices[char]] = 1
y_test[i] = df_test['male'][i]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=(longest_name_length, len(chars))))
model.add(Dropout(0.2))
model.add(LSTM(256, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(32))
model.add(Dropout(0.2))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model, show_shapes=True, rankdir='LR').create(prog='dot', format='svg'))
model.fit(X_train, y_train, batch_size=16, epochs=30)
y_pred = model.predict_classes(X_test)
print_score(y_test, y_pred)
```
# Quantum Counting
To understand this algorithm, it is important that you first understand both Grover’s algorithm and the quantum phase estimation algorithm. Whereas Grover’s algorithm attempts to find a solution to the Oracle, the quantum counting algorithm tells us how many of these solutions there are. This algorithm is interesting as it combines both quantum search and quantum phase estimation.
## Contents
1. [Overview](#overview)
1.1 [Intuition](#intuition)
1.2 [A Closer Look](#closer_look)
2. [The Code](#code)
2.1 [Initialising our Code](#init_code)
2.2 [The Controlled-Grover Iteration](#cont_grover)
2.3 [The Inverse QFT](#inv_qft)
2.4 [Putting it Together](#putting_together)
3. [Simulating](#simulating)
4. [Finding the Number of Solutions](#finding_m)
5. [Exercises](#exercises)
6. [References](#references)
## 1. Overview <a id='overview'></a>
### 1.1 Intuition <a id='intuition'></a>
In quantum counting, we simply use the quantum phase estimation algorithm to find an eigenvalue of a Grover search iteration. You will remember that an iteration of Grover’s algorithm, $G$, rotates the state vector by $\theta$ in the $|\omega\rangle$, $|s’\rangle$ basis:

The percentage number of solutions in our search space affects the difference between $|s\rangle$ and $|s’\rangle$. For example, if there are not many solutions, $|s\rangle$ will be very close to $|s’\rangle$ and $\theta$ will be very small. It turns out that the eigenvalues of the Grover iterator are $e^{\pm i\theta}$, and we can extract this using quantum phase estimation (QPE) to estimate the number of solutions ($M$).
### 1.2 A Closer Look <a id='closer_look'></a>
In the $|\omega\rangle$,$|s’\rangle$ basis we can write the Grover iterator as the matrix:
$$
G =
\begin{pmatrix}
\cos{\theta} && -\sin{\theta}\\
\sin{\theta} && \cos{\theta}
\end{pmatrix}
$$
The matrix $G$ has eigenvectors:
$$
\begin{pmatrix}
-i\\
1
\end{pmatrix}
,
\begin{pmatrix}
i\\
1
\end{pmatrix}
$$
With the aforementioned eigenvalues $e^{\pm i\theta}$. Fortunately, we do not need to prepare our register in either of these states; the state $|s\rangle$ lies in the space spanned by $|\omega\rangle$ and $|s’\rangle$, and thus is a superposition of the two eigenvectors.
$$
|s\rangle = \alpha |\omega\rangle + \beta|s'\rangle
$$
As a result, the output of the QPE algorithm will be a superposition of the two phases, and when we measure the register we will obtain one of these two values! We can then use some simple maths to get our estimate of $M$.

## 2. The Code <a id='code'></a>
### 2.1 Initialising our Code <a id='init_code'></a>
First, let’s import everything we’re going to need:
```
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
import qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
```
In this guide we will choose to ‘count’ on the first 4 qubits of our circuit (we call the number of counting qubits $t$, so $t = 4$), and to 'search' through the last 4 qubits ($n = 4$). With this in mind, we can start creating the building blocks of our circuit.
### 2.2 The Controlled-Grover Iteration <a id='cont_grover'></a>
We have already covered Grover iterations in the Grover’s algorithm section. Here is an example with an Oracle we know has 5 solutions ($M = 5$) of 16 states ($N = 2^n = 16$), combined with a diffusion operator:
```
def example_grover_iteration():
"""Small circuit with 5/16 solutions"""
# Do circuit
qc = QuantumCircuit(4)
# Oracle
qc.h([2,3])
qc.ccx(0,1,2)
qc.h(2)
qc.x(2)
qc.ccx(0,2,3)
qc.x(2)
qc.h(3)
qc.x([1,3])
qc.h(2)
qc.mct([0,1,3],2)
qc.x([1,3])
qc.h(2)
# Diffuser
qc.h(range(3))
qc.x(range(3))
qc.z(3)
qc.mct([0,1,2],3)
qc.x(range(3))
qc.h(range(3))
qc.z(3)
return qc
```
Notice the python function takes no input and returns a `QuantumCircuit` object with 4 qubits. In the past, the functions you created might have modified an existing circuit, but a function like this allows us to turn the `QuantumCircuit` object into a single gate we can then control.
We can use `.to_gate()` and `.control()` to create a controlled gate from a circuit. We will call our Grover iterator `grit` and the controlled Grover iterator `cgrit`:
```
# Create controlled-Grover
grit = example_grover_iteration().to_gate()
cgrit = grit.control()
cgrit.label = "Grover"
```
### 2.3 The Inverse QFT <a id='inv_qft'></a>
We now need to create an inverse QFT. This code implements the QFT on n qubits:
```
def qft(n):
"""Creates an n-qubit QFT circuit"""
circuit = QuantumCircuit(n)
def swap_registers(circuit, n):
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cu1(np.pi/2**(n-qubit), qubit, n)
qft_rotations(circuit, n)
qft_rotations(circuit, n)
swap_registers(circuit, n)
return circuit
```
Again, note we have chosen to return another `QuantumCircuit` object; this is so we can easily invert the gate. We create the gate with t = 4 qubits, as this is the number of counting qubits we have chosen in this guide:
```
qft_dagger = qft(4).to_gate().inverse()
qft_dagger.label = "QFT†"
```
### 2.4 Putting it Together <a id='putting_together'></a>
We now have everything we need to complete our circuit! Let’s put it together.
First we need to put all qubits in the $|+\rangle$ state:
```
# Create QuantumCircuit
t = 4 # no. of counting qubits
n = 4 # no. of searching qubits
qc = QuantumCircuit(n+t, t) # Circuit with n+t qubits and t classical bits
# Initialise all qubits to |+>
for qubit in range(t+n):
qc.h(qubit)
# Begin controlled Grover iterations
iterations = 1
for qubit in range(t):
for i in range(iterations):
qc.append(cgrit, [qubit] + [*range(t, n+t)])
iterations *= 2
# Do inverse QFT on counting qubits
qc.append(qft_dagger, range(t))
# Measure counting qubits
qc.measure(range(t), range(t))
# Display the circuit
qc.draw()
```
Great! Now let’s see some results.
## 3. Simulating <a id='simulating'></a>
```
# Execute and see results
emulator = Aer.get_backend('qasm_simulator')
job = execute(qc, emulator, shots=2048 )
hist = job.result().get_counts()
plot_histogram(hist)
```
We can see two values stand out, having a much higher probability of measurement than the rest. These two values correspond to $e^{i\theta}$ and $e^{-i\theta}$, but we can’t see the number of solutions yet. We need a little more processing to get this information, so first let us get our output into something we can work with (an `int`).
We will get the string of the most probable result from our output data:
```
measured_str = max(hist, key=hist.get)
```
Let us now store this as an integer:
```
measured_int = int(measured_str,2)
print("Register Output = %i" % measured_int)
```
## 4. Finding the Number of Solutions (M) <a id='finding_m'></a>
We will create a function, `calculate_M()` that takes as input the decimal integer output of our register, the number of counting qubits ($t$) and the number of searching qubits ($n$).
First we want to get $\theta$ from `measured_int`. You will remember that QPE gives us a measured $\text{value} = 2^n \phi$ from the eigenvalue $e^{2\pi i\phi}$, so to get $\theta$ we need to do:
$$
\theta = \text{value}\times\frac{2\pi}{2^t}
$$
Or, in code:
```
theta = (measured_int/(2**t))*math.pi*2
print("Theta = %.5f" % theta)
```
You may remember that we can get the angle $\theta/2$ from the inner product of $|s\rangle$ and $|s’\rangle$:

$$
\langle s'|s\rangle = \cos{\tfrac{\theta}{2}}
$$
And that the inner product of these vectors is:
$$
\langle s'|s\rangle = \sqrt{\frac{N-M}{N}}
$$
We can combine these equations, then use some trigonometry and algebra to show:
$$
N\sin^2{\frac{\theta}{2}} = M
$$
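Spelling out that algebra: since $\langle s'|s\rangle = \cos{\tfrac{\theta}{2}} = \sqrt{\tfrac{N-M}{N}}$, squaring and using $\sin^2{x} = 1 - \cos^2{x}$ gives

$$
\sin^2{\frac{\theta}{2}} = 1 - \frac{N-M}{N} = \frac{M}{N}
\quad\Rightarrow\quad
N\sin^2{\frac{\theta}{2}} = M
$$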
From the [Grover's algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html) chapter, you will remember that a common way to create a diffusion operator, $U_s$, is actually to implement $-U_s$. This implementation is used in the Grover iteration provided in this chapter. In a normal Grover search, this phase is global and can be ignored, but now that we are controlling our Grover iterations, this phase does have an effect. The result is that we have effectively searched for the states that are _not_ solutions, and our quantum counting algorithm will tell us how many states are _not_ solutions. To fix this, we simply calculate $N-M$.
And in code:
```
N = 2**n
M = N * (math.sin(theta/2)**2)
print("No. of Solutions = %.1f" % (N-M))
```
And we can see we have (approximately) the correct answer! We can approximately calculate the error in this answer using:
```
m = t - 1 # Upper bound: Will be less than this
err = (math.sqrt(2*M*N) + N/(2**(m-1)))*(2**(-m))
print("Error < %.2f" % err)
```
Explaining the error calculation is outside the scope of this article, but an explanation can be found in [1].
Finally, here is the finished function `calculate_M()`:
```
def calculate_M(measured_int, t, n):
"""For Processing Output of Quantum Counting"""
# Calculate Theta
theta = (measured_int/(2**t))*math.pi*2
print("Theta = %.5f" % theta)
# Calculate No. of Solutions
N = 2**n
M = N * (math.sin(theta/2)**2)
print("No. of Solutions = %.1f" % (N-M))
# Calculate Upper Error Bound
m = t - 1 #Will be less than this (out of scope)
err = (math.sqrt(2*M*N) + N/(2**(m-1)))*(2**(-m))
print("Error < %.2f" % err)
```
## 5. Exercises <a id='exercises'></a>
1. Can you create an oracle with a different number of solutions? How does the accuracy of the quantum counting algorithm change?
2. Can you adapt the circuit to use more or less counting qubits to get a different precision in your result?
## 6. References <a id='references'></a>
[1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA.
```
import qiskit
qiskit.__qiskit_version__
```
# CORD-19 overview
In this notebook, we provide an overview of publication metadata for CORD-19.
```
%matplotlib inline
import matplotlib.pyplot as plt
# magics and warnings
%load_ext autoreload
%autoreload 2
import warnings; warnings.simplefilter('ignore')
import os, random, codecs, json
import pandas as pd
import numpy as np
seed = 99
random.seed(seed)
np.random.seed(seed)
import nltk, sklearn
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="white")
sns.set_context("notebook", font_scale=1.2, rc={"lines.linewidth": 2.5})
# load metadata
df_meta = pd.read_csv("datasets_output/df_pub.csv",compression="gzip")
df_datasource = pd.read_csv("datasets_output/sql_tables/datasource.csv",sep="\t",header=None,names=['datasource_metadata_id', 'datasource', 'url'])
df_pub_datasource = pd.read_csv("datasets_output/sql_tables/pub_datasource.csv",sep="\t",header=None,names=['pub_id','datasource_metadata_id'])
df_cord_meta = pd.read_csv("datasets_output/sql_tables/cord19_metadata.csv",sep="\t",header=None,names=[ 'cord19_metadata_id', 'source', 'license', 'ms_academic_id',
'who_covidence', 'sha', 'full_text', 'pub_id'])
df_meta.head()
df_meta.columns
df_datasource
df_pub_datasource.head()
df_cord_meta.head()
```
#### Select just CORD-19
```
df_meta = df_meta.merge(df_pub_datasource, how="inner", left_on="pub_id", right_on="pub_id")
df_meta = df_meta.merge(df_datasource, how="inner", left_on="datasource_metadata_id", right_on="datasource_metadata_id")
df_cord19 = df_meta[df_meta.datasource_metadata_id==0]
df_cord19 = df_cord19.merge(df_cord_meta, how="inner", left_on="pub_id", right_on="pub_id")
df_meta.shape
df_cord19.shape
df_cord19.head()
```
#### Publication years
```
import re
def clean_year(s):
if pd.isna(s):
return np.nan
if not (s>1900):
return np.nan
elif s>2020:
return 2020
return s
df_cord19["publication_year"] = df_cord19["publication_year"].apply(clean_year)
df_cord19.publication_year.describe()
sns.distplot(df_cord19.publication_year.tolist(), bins=60, kde=False)
plt.xlabel("Publication year", fontsize=15)
plt.ylabel("Publication count", fontsize=15)
plt.tight_layout()
plt.savefig("figures/publication_year_all.pdf")
sns.distplot(df_cord19[(pd.notnull(df_cord19.publication_year)) & (df_cord19.publication_year > 2000)].publication_year.tolist(), bins=20, hist=True, kde=False)
plt.xlabel("Publication year", fontsize=15)
plt.ylabel("Publication count", fontsize=15)
plt.tight_layout()
plt.savefig("figures/publication_year_2000.pdf")
which = "PMC"
sns.distplot(df_cord19[(pd.notnull(df_cord19.publication_year)) & (df_cord19.publication_year > 2000) & (df_cord19.source == which)].publication_year.tolist(), bins=20, hist=True, kde=False)
plt.xlabel("Publication year", fontsize=15)
plt.ylabel("Publication count", fontsize=15)
plt.tight_layout()
# recent uptake
df_cord19[df_cord19.publication_year>2018].groupby([(df_cord19.publication_year),(df_cord19.publication_month)]).count().pub_id
```
#### Null values
```
df_cord19.shape
df_cord19["abstract_length"] = df_cord19.abstract.str.len()
df_cord19[df_cord19.abstract_length>0].shape
sum(pd.notnull(df_cord19.abstract))
sum(pd.notnull(df_cord19.doi))
sum(pd.notnull(df_cord19.pmcid))
sum(pd.notnull(df_cord19.pmid))
sum(pd.notnull(df_cord19.journal))
```
#### Journals
```
df_cord19.journal.value_counts()[:30]
df_sub = df_cord19[df_cord19.journal.isin(df_cord19.journal.value_counts()[:20].index.tolist())]
b = sns.countplot(y="journal", data=df_sub, order=df_sub['journal'].value_counts().index)
#b.axes.set_title("Title",fontsize=50)
b.set_xlabel("Publication count",fontsize=15)
b.set_ylabel("Journal",fontsize=15)
b.tick_params(labelsize=12)
plt.tight_layout()
plt.savefig("figures/journals.pdf")
```
#### Sources and licenses
```
# source
df_sub = df_cord19[df_cord19.source.isin(df_cord19.source.value_counts()[:10].index.tolist())]
b = sns.countplot(y="source", data=df_sub, order=df_sub['source'].value_counts().index)
#b.axes.set_title("Title",fontsize=50)
b.set_xlabel("Publication count",fontsize=15)
b.set_ylabel("Source",fontsize=15)
b.tick_params(labelsize=12)
plt.tight_layout()
plt.savefig("figures/sources.pdf")
# license
df_sub = df_cord19[df_cord19.license.isin(df_cord19.license.value_counts()[:30].index.tolist())]
b = sns.countplot(y="license", data=df_sub, order=df_sub['license'].value_counts().index)
#b.axes.set_title("Title",fontsize=50)
b.set_xlabel("Publication count",fontsize=15)
b.set_ylabel("License",fontsize=15)
b.tick_params(labelsize=12)
plt.tight_layout()
plt.savefig("figures/licenses.pdf")
```
#### Full text availability
```
df_cord19["has_full_text"] = pd.notnull(df_cord19.full_text)
df_cord19["has_full_text"].sum()
# full text x source
df_plot = df_cord19.groupby(['has_full_text', 'source']).size().reset_index().pivot(columns='has_full_text', index='source', values=0)
df_plot.plot(kind='bar', stacked=True)
plt.xlabel("Source", fontsize=15)
plt.ylabel("Publication count", fontsize=15)
#plt.tight_layout()
plt.savefig("figures/source_ft.pdf")
# full text x journal
df_sub = df_cord19[df_cord19.journal.isin(df_cord19.journal.value_counts()[:20].index.tolist())]
df_plot = df_sub.groupby(['has_full_text', 'journal']).size().reset_index().pivot(columns='has_full_text', index='journal', values=0)
df_plot.plot(kind='bar', stacked=True)
plt.xlabel("Source", fontsize=15)
plt.ylabel("Publication count", fontsize=15)
#plt.tight_layout()
plt.savefig("figures/journal_ft.pdf")
# full text x year
df_sub = df_cord19[(pd.notnull(df_cord19.publication_year)) & (df_cord19.publication_year > 2000)]
df_plot = df_sub.groupby(['has_full_text', 'publication_year']).size().reset_index().pivot(columns='has_full_text', index='publication_year', values=0)
df_plot.plot(kind='bar', stacked=True)
plt.xticks(np.arange(20), [int(x) for x in df_plot.index.values], rotation=45)
plt.xlabel("Publication year", fontsize=15)
plt.ylabel("Publication count", fontsize=15)
plt.tight_layout()
plt.savefig("figures/year_ft.pdf")
```
## Dimensions
```
# load Dimensions data (you will need to download it on your own!)
directory_name = "datasets_output/json_dimensions_cwts"
all_dimensions = list()
for root, dirs, files in os.walk(directory_name):
for file in files:
if ".json" in file:
all_data = codecs.open(os.path.join(root,file)).read()
for record in all_data.split("\n"):
if record:
all_dimensions.append(json.loads(record))
df_dimensions = pd.DataFrame.from_dict({
"id":[r["id"] for r in all_dimensions],
"publication_type":[r["publication_type"] for r in all_dimensions],
"doi":[r["doi"] for r in all_dimensions],
"pmid":[r["pmid"] for r in all_dimensions],
"issn":[r["journal"]["issn"] for r in all_dimensions],
"times_cited":[r["times_cited"] for r in all_dimensions],
"relative_citation_ratio":[r["relative_citation_ratio"] for r in all_dimensions],
"for_top":[r["for"][0]["first_level"]["name"] if len(r["for"])>0 else "" for r in all_dimensions],
"for_bottom":[r["for"][0]["second_level"]["name"] if len(r["for"])>0 else "" for r in all_dimensions],
"open_access_versions":[r["open_access_versions"] for r in all_dimensions]
})
df_dimensions.head()
df_dimensions.pmid = df_dimensions.pmid.astype(float)
df_dimensions.shape
df_joined_doi = df_cord19[pd.notnull(df_cord19.doi)].merge(df_dimensions[pd.notnull(df_dimensions.doi)], how="inner", left_on="doi", right_on="doi")
df_joined_doi.shape
df_joined_pmid = df_cord19[pd.isnull(df_cord19.doi) & pd.notnull(df_cord19.pmid)].merge(df_dimensions[pd.isnull(df_dimensions.doi) & pd.notnull(df_dimensions.pmid)], how="inner", left_on="pmid", right_on="pmid")
df_joined_pmid.shape
df_joined = pd.concat([df_joined_doi,df_joined_pmid])
# nearly all publications from CORD-19 are in Dimensions
df_joined.shape
df_cord19.shape
# publication type
df_sub = df_joined[df_joined.publication_type.isin(df_joined.publication_type.value_counts()[:10].index.tolist())]
b = sns.countplot(y="publication_type", data=df_sub, order=df_sub['publication_type'].value_counts().index)
#b.axes.set_title("Title",fontsize=50)
b.set_xlabel("Publication count",fontsize=15)
b.set_ylabel("Publication type",fontsize=15)
b.tick_params(labelsize=12)
plt.tight_layout()
plt.savefig("figures/dim_pub_type.pdf")
```
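The two-stage join above (match on DOI where both sides have one, then fall back to PMID for records lacking a DOI) is a general pandas pattern for merging on a primary key with a fallback. A minimal sketch on hypothetical toy dataframes:

```python
import pandas as pd

left = pd.DataFrame({
    "title": ["A", "B", "C"],
    "doi": ["10.1/a", None, None],
    "pmid": [1.0, 2.0, 3.0],
})
right = pd.DataFrame({
    "doi": ["10.1/a", None],
    "pmid": [1.0, 2.0],
    "times_cited": [10, 5],
})

# First pass: join on DOI where both sides have one
by_doi = left[left.doi.notnull()].merge(
    right[right.doi.notnull()], on="doi", suffixes=("", "_r"))

# Second pass: fall back to PMID for records without a DOI
by_pmid = left[left.doi.isnull() & left.pmid.notnull()].merge(
    right[right.doi.isnull() & right.pmid.notnull()],
    on="pmid", suffixes=("", "_r"))

joined = pd.concat([by_doi, by_pmid])
print(len(joined))  # 2: "A" matched on DOI, "B" matched on PMID
```

Restricting each pass to disjoint subsets (DOI present vs. absent) guarantees no record is matched twice before the `concat`.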
#### Citation counts
```
# scatter of citations vs time of publication
sns.scatterplot(x=df_joined.publication_year.to_list(), y=df_joined.times_cited.to_list())
plt.xlabel("Publication year", fontsize=15)
plt.ylabel("Citation count", fontsize=15)
plt.tight_layout()
plt.savefig("figures/dim_citations_year.png")
# most cited papers
df_joined[["title","times_cited","relative_citation_ratio","journal","publication_year","doi"]].sort_values("times_cited",ascending=False).head(20)
# same but in 2020; note that duplicates are due to SI or pre-prints with different PMIDs
df_joined[df_joined.publication_year>2019][["title","times_cited","relative_citation_ratio","journal","publication_year","doi"]].sort_values("times_cited",ascending=False).head(10)
# most cited journals
df_joined[['journal','times_cited']].groupby('journal').sum().sort_values('times_cited',ascending=False).head(20)
```
#### Categories
```
# FOR keywords distribution, first level
df_sub = df_joined[df_joined.for_top.isin(df_joined.for_top.value_counts()[:10].index.tolist())]
b = sns.countplot(y="for_top", data=df_sub, order=df_sub['for_top'].value_counts().index)
#b.axes.set_title("Title",fontsize=50)
b.set_xlabel("Publication count",fontsize=15)
b.set_ylabel("FOR first level",fontsize=15)
b.tick_params(labelsize=12)
plt.tight_layout()
plt.savefig("figures/dim_for_top.pdf")
# FOR keywords distribution, second level
df_sub = df_joined[df_joined.for_bottom.isin(df_joined.for_bottom.value_counts()[:10].index.tolist())]
b = sns.countplot(y="for_bottom", data=df_sub, order=df_sub['for_bottom'].value_counts().index)
#b.axes.set_title("Title",fontsize=50)
b.set_xlabel("Publication count",fontsize=15)
b.set_ylabel("FOR second level",fontsize=15)
b.tick_params(labelsize=12)
plt.tight_layout()
plt.savefig("figures/dim_for_bottom.pdf")
```
```
import pandas as pd
import numpy as np
import os
#import data
from biom import load_table
from gneiss.util import match
#deicode
from deicode.optspace import OptSpace
from deicode.preprocessing import rclr
from deicode.ratios import log_ratios
#skbio
import warnings; warnings.simplefilter('ignore') #for PCoA warning
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa
from scipy.stats import pearsonr
from matplotlib import cm
from skbio.stats.composition import clr,centralize
#plotting
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.gridspec as gridspec
from matplotlib import ticker
import matplotlib.colors as mcolors
plt.style.use('seaborn-paper')
paper_rc = {'lines.linewidth': 1.5}
sns.set_context("paper", rc = paper_rc)
plt.rcParams["axes.labelsize"] = 25
plt.rcParams['xtick.labelsize'] = 25
plt.rcParams['ytick.labelsize'] = 25
def plot_pcoa(samples, md, ax, factor_, colors_map):
"""
Parameters
----------
samples : pd.DataFrame
Contains PCoA coordinates
md : pd.Dataframe
Metadata object
ax : matplotlib.Axes
Contains matplotlib axes object
"""
classes=np.sort(list(set(md[factor_].values)))
cmap_out={}
for sub_class,color_ in zip(classes,colors_map):
idx = md[factor_] == sub_class
ax.scatter(samples.loc[idx, 'PC1'],
samples.loc[idx, 'PC2'],
label=sub_class.replace('stressed','Stressed'),
facecolors=color_,
edgecolors=color_,
alpha=.8,linewidth=3)
cmap_out[sub_class]=color_
ax.grid()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel('PC1',fontsize=15)
ax.set_ylabel('PC2',fontsize=15)
return ax,cmap_out
%matplotlib inline
```
### Case Study Benchmark Sub-sample
```
# store info
both_perm_res={}
both_perm_res['Sponges']=pd.read_csv('subsample_results/Sponges_health_status_fstat.csv', index_col=[0,1,2])
both_perm_res['Sleep_Apnea']=pd.read_csv('subsample_results/Sleep_Apnea_exposure_type_fstat.csv', index_col=[0,1,2])
both_nn={}
both_nn['Sponges']=pd.read_csv('subsample_results/Sponges_health_status_classifier.csv', index_col=[0,1,2])
both_nn['Sleep_Apnea']=pd.read_csv('subsample_results/Sleep_Apnea_exposure_type_classifier.csv', index_col=[0,1,2])
factor={}
factor['Sponges']='health_status'
factor['Sleep_Apnea']='exposure_type'
#clean up the dataframes
rename_m={'Bray_Curtis':'Bray-Curtis',
'GUniFrac_Alpha_Half':'Generalized UniFrac $\\alpha$=0.5',
'GUniFrac_Alpha_One':'Generalized UniFrac $\\alpha$=1.0',
'GUniFrac_Alpha_Zero':'Generalized UniFrac $\\alpha$=0.0',
'Jaccard':'Jaccard',
'Robust_Aitchison':'Robust Aitchison'}
#colors to use later
colors_={'Bray-Curtis':'#1f78b4',
'Generalized UniFrac $\\alpha$=0.5':'#e31a1c',
'Generalized UniFrac $\\alpha$=1.0':'#984ea3',
'Generalized UniFrac $\\alpha$=0.0':'#ff7f00',
'Jaccard':'#e6ab02',
'Robust Aitchison':'#33a02c'}
for dataset_,results_permanova in both_perm_res.items():
df_ = pd.DataFrame(results_permanova.copy().stack())
df_.reset_index(inplace=True)
df_.columns = ['Fold','N-Samples','Metric','Method','Values']
df_['Method'] = [rename_m[x] for x in df_.Method]
df_=df_[df_['N-Samples']>=70]
both_perm_res[dataset_]=df_[df_.Metric.isin(['test statistic'])]
for dataset_,results_nn in both_nn.items():
df_ = pd.DataFrame(results_nn.copy().stack())
df_.reset_index(inplace=True)
df_.columns = ['Fold','N-Samples','Metric','Method','Values']
df_['Method'] = [rename_m[x] for x in df_.Method]
df_=df_[df_['N-Samples']>=70]
both_nn[dataset_]=df_[df_.Metric.isin(['R^{2}'])]
```
# Figure 3
```
plt.rcParams["axes.labelsize"] = 14
plt.rcParams['xtick.labelsize'] = 14
plt.rcParams['ytick.labelsize'] = 14
colors_map=['#1f78b4','#e31a1c']
subpath_sp='sub_sample/biom_tables_Sponges'
subpath_sl='sub_sample/biom_tables_Sleep_Apnea'
fontsize_ = 18
fig = plt.figure(figsize=(20, 15), facecolor='white')
gs = gridspec.GridSpec(300, 240)
x_1=10+45
x_2=x_1+10
x_3=x_2+45
x_4=x_3+30
x_5=x_4+45
x_6=x_5+10
x_7=x_6+45
# benchmarking (clasification)
fstat_ax1 = plt.subplot(gs[:50, 10:x_1])
clasif_ax2 = plt.subplot(gs[:50:, x_2:x_3])
fstat_ax3 = plt.subplot(gs[:50, x_4:x_5])
clasif_ax4 = plt.subplot(gs[:50:, x_6:x_7])
# RPCA
RPCA_ax1 = plt.subplot(gs[100:145, 10:x_1])
RPCA_ax2 = plt.subplot(gs[100:145:, x_2:x_3])
RPCA_ax3 = plt.subplot(gs[100:145, x_4:x_5])
RPCA_ax4 = plt.subplot(gs[100:145:, x_6:x_7])
# WUNI
WUNI_ax1 = plt.subplot(gs[175:220, 10:x_1])
WUNI_ax2 = plt.subplot(gs[175:220:, x_2:x_3])
WUNI_ax3 = plt.subplot(gs[175:220, x_4:x_5])
WUNI_ax4 = plt.subplot(gs[175:220:, x_6:x_7])
# BC
BC_ax1 = plt.subplot(gs[240:285, 10:x_1])
BC_ax2 = plt.subplot(gs[240:285:, x_2:x_3])
BC_ax3 = plt.subplot(gs[240:285, x_4:x_5])
BC_ax4 = plt.subplot(gs[240:285:, x_6:x_7])
# plot benchmarking
fstat_ax1.set_title('PERMANOVA F-statistic', fontsize=fontsize_)
sns.pointplot(x='N-Samples',y='Values',hue='Method',
data=both_perm_res['Sponges'].sort_values('Method',ascending=False),
palette=colors_, ci=0, ax=fstat_ax1)
fstat_ax1.legend_.remove()
clasif_ax2.set_title('KNN Classification Accuracy', fontsize=fontsize_)
sns.pointplot(x='N-Samples',y='Values',hue='Method',
data=both_nn['Sponges'].sort_values('Method',ascending=False),
palette=colors_, ci=0, ax=clasif_ax2)
clasif_ax2.legend(loc=2,
bbox_to_anchor=(-1.3, 1.95),
prop={'size':26},
fancybox=True, framealpha=0.5,ncol=4
, markerscale=2, facecolor="grey")
fstat_ax3.set_title('PERMANOVA F-statistic', fontsize=fontsize_)
sns.pointplot(x='N-Samples',y='Values',hue='Method',
data=both_perm_res['Sleep_Apnea'].sort_values('Method',ascending=False),
palette=colors_, ci=0, ax=fstat_ax3)
fstat_ax3.legend_.remove()
clasif_ax4.set_title('KNN Classification Accuracy', fontsize=fontsize_)
sns.pointplot(x='N-Samples',y='Values',hue='Method',
data=both_nn['Sleep_Apnea'].sort_values('Method',ascending=False),
palette=colors_, ci=0, ax=clasif_ax4)
clasif_ax4.legend_.remove()
fstat_ax1.set_ylabel('')
clasif_ax2.set_ylabel('')
fstat_ax3.set_ylabel('')
clasif_ax4.set_ylabel('')
# set titles for case-study
fstat_ax1.annotate('Sponges',(2.5,770),
annotation_clip=False,
fontsize=fontsize_+20)
fstat_ax3.annotate('Sleep Apnea',(2,168),
annotation_clip=False,
fontsize=fontsize_+20)
# 70 samples total, sponge
meta_ = pd.read_table(os.path.join(subpath_sp,'1_70','metadata.tsv'), index_col=0)
rpca_tmp = pd.read_table(os.path.join(subpath_sp,'1_70','Robust_Aitchison_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
rpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]
rpca_tmp.index=meta_.index
bc_tmp = pd.read_table(os.path.join(subpath_sp,'1_70','Bray_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
bc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]
bc_tmp.index=meta_.index
wun_tmp = pd.read_table(os.path.join(subpath_sp,'1_70','GUniFrac_alpha_one_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
wun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]
wun_tmp.index=meta_.index
plot_pcoa(rpca_tmp, meta_, RPCA_ax1, factor['Sponges'], colors_map)
plot_pcoa(wun_tmp, meta_, WUNI_ax1, factor['Sponges'], colors_map)
plot_pcoa(bc_tmp, meta_, BC_ax1, factor['Sponges'], colors_map)
RPCA_ax1.legend(loc=2,
bbox_to_anchor=(0, 1.8),
prop={'size':26},
fancybox=True, framealpha=0.5,ncol=2
, markerscale=2, facecolor="grey")
RPCA_ax1.set_title('RPCA (70-Samples)', fontsize=fontsize_)
WUNI_ax1.set_title('W-UniFrac (70-Samples)', fontsize=fontsize_)
BC_ax1.set_title('Bray-Curtis (70-Samples)', fontsize=fontsize_)
# 70 samples total, sleep
meta_ = pd.read_table(os.path.join(subpath_sl,'1_70','metadata.tsv'), index_col=0)
rpca_tmp = pd.read_table(os.path.join(subpath_sl,'1_70','Robust_Aitchison_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
rpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]
rpca_tmp.index=meta_.index
bc_tmp = pd.read_table(os.path.join(subpath_sl,'1_70','Bray_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
bc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]
bc_tmp.index=meta_.index
wun_tmp = pd.read_table(os.path.join(subpath_sl,'1_70','GUniFrac_alpha_one_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
wun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]
wun_tmp.index=meta_.index
plot_pcoa(rpca_tmp, meta_, RPCA_ax3, factor['Sleep_Apnea'], colors_map)
plot_pcoa(wun_tmp, meta_, WUNI_ax3, factor['Sleep_Apnea'], colors_map)
plot_pcoa(bc_tmp, meta_, BC_ax3, factor['Sleep_Apnea'], colors_map)
RPCA_ax3.set_title('RPCA (70-Samples)', fontsize=fontsize_)
WUNI_ax3.set_title('W-UniFrac (70-Samples)', fontsize=fontsize_)
BC_ax3.set_title('Bray-Curtis (70-Samples)', fontsize=fontsize_)
RPCA_ax3.legend(loc=2,
bbox_to_anchor=(0.25, 1.8),
prop={'size':26},
fancybox=True, framealpha=0.5,ncol=2
, markerscale=2, facecolor="grey")
# max samp sponge
meta_ = pd.read_table(os.path.join(subpath_sp,'1_158','metadata.tsv'), index_col=0)
rpca_tmp = pd.read_table(os.path.join(subpath_sp,'1_158','Robust_Aitchison_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
rpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]
rpca_tmp.index=meta_.index
bc_tmp = pd.read_table(os.path.join(subpath_sp,'1_158','Bray_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
bc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]
bc_tmp.index=meta_.index
wun_tmp = pd.read_table(os.path.join(subpath_sp,'1_158','GUniFrac_alpha_one_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
wun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]
wun_tmp.index=meta_.index
plot_pcoa(rpca_tmp, meta_, RPCA_ax2, factor['Sponges'], colors_map)
plot_pcoa(wun_tmp, meta_, WUNI_ax2, factor['Sponges'], colors_map)
plot_pcoa(bc_tmp, meta_, BC_ax2, factor['Sponges'], colors_map)
RPCA_ax2.set_title('RPCA (158-Samples)', fontsize=fontsize_)
WUNI_ax2.set_title('W-UniFrac (158-Samples)', fontsize=fontsize_)
BC_ax2.set_title('Bray-Curtis (158-Samples)', fontsize=fontsize_)
# max samp sleep
meta_ = pd.read_table(os.path.join(subpath_sl,'1_184','metadata.tsv'), index_col=0)
rpca_tmp = pd.read_table(os.path.join(subpath_sl,'1_184','Robust_Aitchison_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
rpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]
rpca_tmp.index=meta_.index
bc_tmp = pd.read_table(os.path.join(subpath_sl,'1_184','Bray_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
bc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]
bc_tmp.index=meta_.index
wun_tmp = pd.read_table(os.path.join(subpath_sl,'1_184','GUniFrac_alpha_one_Distance.tsv'),
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)
wun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]
wun_tmp.index=meta_.index
plot_pcoa(rpca_tmp, meta_, RPCA_ax4, factor['Sleep_Apnea'], colors_map)
plot_pcoa(wun_tmp, meta_, WUNI_ax4, factor['Sleep_Apnea'], colors_map)
plot_pcoa(bc_tmp, meta_, BC_ax4, factor['Sleep_Apnea'], colors_map)
RPCA_ax4.set_title('RPCA (184-Samples)', fontsize=fontsize_)
WUNI_ax4.set_title('W-UniFrac (184-Samples)', fontsize=fontsize_)
BC_ax4.set_title('Bray-Curtis (184-Samples)', fontsize=fontsize_)
fig.savefig('figures/figure4.png',dpi=300,
bbox_inches='tight',facecolor='white')
plt.show()
```
# Figure 4
```
from numpy.polynomial.polynomial import polyfit
def plot_biplot(samples, md, ax, factor_, y_axis_, x_axis, regcol,colors_map=['#1f78b4','#e31a1c']):
"""
Parameters
----------
samples : pd.DataFrame
Contains PCoA coordinates
md : pd.Dataframe
Metadata object
ax : matplotlib.Axes
Contains matplotlib axes object
"""
cmap_out={}
classes=np.sort(list(set(md[factor_].values)))
for sub_class,color_ in zip(classes,colors_map):
idx = md[factor_] == sub_class
ax.scatter(samples.loc[idx, y_axis_],
samples.loc[idx, x_axis],
label=sub_class.replace('stressed','Stressed'),
facecolors=color_,
edgecolors=color_,
alpha=.8,linewidth=3)
cmap_out[sub_class]=color_
fit_=samples.dropna(subset=[y_axis_,x_axis])
x=fit_.loc[:, y_axis_]
y=fit_.loc[:, x_axis]
# Fit with polyfit
b, m = polyfit(x, y, 1)
ax.plot(x, b + m * x, '-', lw=2, color=regcol, label='_nolegend_')
ax.grid()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_yticks([])
ax.xaxis.set_tick_params(labelsize=20)
return ax,cmap_out
class MidpointNormalize(colors.Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
colors.Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
from biom.util import biom_open
from biom import load_table
from gneiss.util import match
from skbio.stats.ordination import OrdinationResults
lr_datasets = {}
for dataset_,sub_ in zip(['Sponges','Sleep_Apnea'],
['biom_tables_Sponges/1_248','biom_tables_Sleep_Apnea/1_184']):
lr_datasets[dataset_]={}
# get table
in_biom = 'sub_sample/'+sub_+'/table.biom'
table = load_table(in_biom)
table = table.to_dataframe().T
# remove low-read sOTUs to improve the log-ratios
table = table.T[table.sum()>50].T
# get ordination file
in_ord = 'sub_sample/'+sub_+'/RPCA_Ordination.txt'
sample_loading = OrdinationResults.read(in_ord).samples
feat_loading = OrdinationResults.read(in_ord).features
# taxonomy file
tax_col = ['kingdom', 'phylum', 'class', 'order',
'family', 'genus', 'species']
taxon = pd.read_table('data/'+dataset_+'/taxonomy.tsv',index_col=0)
taxon = {i:pd.Series(j) for i,j in taxon['Taxon'].str.split(';').items()}
taxon = pd.DataFrame(taxon).T
taxon.columns = tax_col
# metadata
meta = pd.read_table('sub_sample/'+sub_+'/metadata.tsv',index_col=0)
#match em
table,meta=match(table,meta)
table,taxon=match(table.T,taxon)
feat_loading,table=match(feat_loading,table)
sample_loading,table=match(sample_loading,table.T)
table,meta=match(table,meta)
#sort em
table=table.T.sort_index().T
feat_loading=feat_loading.reindex(index=table.columns)
taxon=taxon.reindex(index=table.columns)
# relabel otus
oturelabel=['sOTU'+str(i) for i in range(len(feat_loading.index))]
feat_loading.index = oturelabel
taxa_mapback={ind_:[oturelabel[count_]] for count_,ind_ in enumerate(taxon.index)}
#taxon['sequence'] = taxon.index
taxon.index = feat_loading.index
table.columns = feat_loading.index
lr_datasets[dataset_]['maptaxa']=taxa_mapback
lr_datasets[dataset_]['table']=table.copy()
lr_datasets[dataset_]['meta']=meta.copy()
lr_datasets[dataset_]['fl']=feat_loading.copy()
lr_datasets[dataset_]['sl']=sample_loading.copy()
lr_datasets[dataset_]['taxon']=taxon.copy()
for dataset_,axsor in zip(['Sponges','Sleep_Apnea'],[0,1]):
tabletmp_=lr_datasets[dataset_]['table'].copy()
metatmp_=lr_datasets[dataset_]['meta'].copy()
fltmp=lr_datasets[dataset_]['fl'].copy()
sltmp=lr_datasets[dataset_]['sl'].copy()
txtmp=lr_datasets[dataset_]['taxon'].copy()
# remove some sparsity to get better lr
tabletmp_=tabletmp_.T[tabletmp_.sum()>50].T
#match em
tabletmp_,metatmp_=match(tabletmp_,metatmp_)
tabletmp_,txtmp=match(tabletmp_.T,txtmp)
fltmp,tabletmp_=match(fltmp,tabletmp_)
sltmp,tabletmp_=match(sltmp,tabletmp_.T)
tabletmp_,metatmp_=match(tabletmp_,metatmp_)
#save em
fltmp_=fltmp.copy()
fltmp_.columns = ['PC1','PC2','PC3'][:len(fltmp_.columns)]
savem_=pd.concat([fltmp_,txtmp],axis=1)
savem_ = savem_.sort_values('PC1')
savem_.to_csv(dataset_+'_ranking.tsv',sep='\t')
#sort em
tabletmp_=tabletmp_.T.sort_index().T
fltmp=fltmp.reindex(index=tabletmp_.columns)
txtmp=txtmp.reindex(index=tabletmp_.columns)
#table_clean = Table(tabletmp_.T.values,
# tabletmp_.T.index,
# tabletmp_.T.columns)
#with biom.util.biom_open(dataset_+'_table.biom', 'w') as f:
# table_clean.to_hdf5(f, "filtered")
#metatmp_.to_csv(dataset_+'_metadata.tsv',sep='\t')
logdf_tmp=log_ratios(tabletmp_.copy(),
fltmp.copy(),
sltmp.copy()
,taxa_tmp=txtmp
,axis_sort=axsor
,N_show=int(int(len(txtmp.index)/2)-5))
lr_datasets[dataset_]['lr']=pd.concat([logdf_tmp,metatmp_],axis=1).copy()
lr_datasets[dataset_]['fl_sub']=fltmp.copy()*-1
lr_datasets[dataset_]['table_sub']=tabletmp_.copy()
lr_datasets[dataset_]['meta_sub']=metatmp_.copy()
plt.rcParams["axes.labelsize"] = 14
plt.rcParams['xtick.labelsize'] = 14
plt.rcParams['ytick.labelsize'] = 14
colors_map=['#1f78b4','#e31a1c']
subpath_sp='sub_sample/biom_tables_Sponges'
subpath_sl='sub_sample/biom_tables_Sleep_Apnea'
fontsize_ = 18
fig = plt.figure(figsize=(15, 18), facecolor='white')
gs = gridspec.GridSpec(205, 100)
#biplots
ax1 = plt.subplot(gs[95:140, :45])
ax2 = plt.subplot(gs[95:140, 55:])
ax3 = plt.subplot(gs[160:, :45])
ax4 = plt.subplot(gs[160:, 55:])
# heatmaps
ax9 = plt.subplot(gs[:35, 3:44])
ax11 = plt.subplot(gs[:35, 58:99])
# heatmap bars
ax10 = plt.subplot(gs[:35, :3])
ax12 = plt.subplot(gs[:35, 55:58])
# heat map color bars
ax7 = plt.subplot(gs[:35, 44:45])
ax8 = plt.subplot(gs[:35, 99:100])
# rankings
ax5 = plt.subplot(gs[40:90, :45])
ax6 = plt.subplot(gs[40:90, 55:])
# iterating variables
axn=[ax1,ax2,ax3,ax4]
axbar=[ax5,ax6,ax5,ax6]
axmap=[ax9,ax11,ax9,ax11]
axmapbar=[ax10,ax12,ax10,ax12]
axmacbar=[ax7,ax8,ax7,ax8]
regcolor=['#969696','#969696','#969696','#969696']
dtst=['Sponges','Sleep_Apnea',
'Sponges','Sleep_Apnea']
lrs=['log(\\dfrac{Synechococcophycideae( c)_{ID:sOTU984}}{Nitrosopumilus( g)_{ID:sOTU14}})',
'log(\\dfrac{Coriobacteriaceae( f)_{ID:sOTU258}}{Clostridium( g)_{ID:sOTU133}})',
'log(\\dfrac{Bacteria(k)_{ID:sOTU1224}}{A4b( f)_{ID:sOTU30}})',
'log(\\dfrac{Ruminococcus( g)_{ID:sOTU124}}{Clostridiales( o)_{ID:sOTU256}})']
ys=[0,1,0,1]
text_loc=[[(.21,.1),(.8,.85)],
[(.24,.1),(.84,.85)],
[(.40,.3),(.7,.65)],
[(.35,.3),(.7,.65)]]
arrow_loc=[[(0.015,.2),(.99,.81)],
[(0.015,.2),(.99,.81)],
[(.365,.4),(.63,.61)],
[(.31,.4),(.685,.62)]]
for (count_,ax_),dataset_,lr_,y_tmp,x_,X_arrow in zip(enumerate(axn),
dtst,lrs,ys,text_loc,arrow_loc):
logdf_tmp=lr_datasets[dataset_]['lr']
factor_=factor[dataset_]
_,cmap_out=plot_biplot(logdf_tmp, logdf_tmp,
ax_, factor_, lr_, y_tmp, regcolor[count_])
ax_.set_ylabel('PC1',fontsize=16)
#r^2
logdf_tmp_p=logdf_tmp.dropna(subset=[lr_,y_tmp])
r_=pearsonr(logdf_tmp_p[lr_].values,logdf_tmp_p[y_tmp].values)
r_=np.around(r_[0],2)
ax_.annotate('$R^{2}$='+str(abs(r_)),(.7,.80),
xycoords='axes fraction',
fontsize=22,bbox=dict(facecolor='lightgray',
edgecolor='None',alpha=1.0))
#fix axis
X_1=lr_.split('{')[2].replace('}','').replace('ID:','')
Y_1=lr_.split('{')[4].replace('}','').replace('ID:','').replace(')','')
lr_title_=lr_.replace(X_1,'').replace(Y_1,'').replace('_{ID:}','').replace(' ','').replace('Bacteria(k)','Cereibacter(g)').replace('A4b(f)','Methylonatrum(g)').replace('Synechococcophycideae(c)','Synechococcus(g)')
X_1_sp=lr_title_.split('{')[1].replace('}','').replace('Bacteria(k)','Cereibacter(g)').replace('Synechococcophycideae(c)','Synechococcus(g)')
Y_1_sp=lr_title_.split('{')[2].replace('})','').replace('A4b(f)','Methylonatrum(g)')
#'sOTU1224'->'Cereibacter(g)' by blast
#'sOTU30'->'Methylonatrum(g)' by blast
ax_.set_xlabel('$'+lr_title_+'$',fontsize=16)
## barplot
axbar_=axbar[count_] #bar
fltmp=lr_datasets[dataset_]['fl_sub'].sort_values(y_tmp,ascending=False)
fltmp=fltmp[abs(fltmp[y_tmp])>0.8]
ind = np.arange(fltmp.shape[0])
nxy_=list(fltmp[~fltmp.index.isin([X_1,Y_1])].index)
fltmp_bars=fltmp.copy()
fltmp_bars.loc[nxy_,y_tmp]=0
fltmp_bars['group']=((fltmp_bars[y_tmp]<0).astype(int)*-1)+(fltmp_bars[y_tmp]>0).astype(int)
colorsmap={0:'#a6cee3',1:'#1f78b4',-1:'#e41a1c'}
fltmp_bars[y_tmp].plot(kind='bar',color=list(fltmp_bars['group'].map(colorsmap))
,width=int(len(fltmp_bars)/100)*2,
ax=axbar_)
axbar_.annotate(X_1_sp,
x_[0],
fontsize=16, ha='center',
xycoords='axes fraction',
bbox=dict(facecolor='#1f78b4',
edgecolor='None',alpha=.2))
axbar_.annotate('', xy=X_arrow[0], xycoords='axes fraction',
ha='center', xytext=(X_arrow[0][0],0.51),
arrowprops=dict(arrowstyle="<-", color='#1f78b4', lw=5,alpha=.8))
axbar_.annotate(Y_1_sp,
x_[1],
fontsize=16, ha='center',
xycoords='axes fraction',
bbox=dict(facecolor='#e41a1c',
edgecolor='None',alpha=.2))
axbar_.annotate('', xy=X_arrow[1], xycoords='axes fraction',
ha='center', xytext=(X_arrow[1][0],0.48),
arrowprops=dict(arrowstyle="<-", color='#e41a1c', lw=5,alpha=.8))
color_map_con = cm.Greys(np.linspace(0,1,len(fltmp)))
fltmp[y_tmp].plot(kind='area',color='black',stacked=False,ax=axbar_)
axbar_.axhline(0,c='black',lw=1,ls='-')
axbar_.set_ylim(-10, 10)
axbar_.set_xticks([])
if count_ in [0,1]:
ax_.legend(loc='upper center',
bbox_to_anchor=(0.5, 3.4),
prop={'size':22},
fancybox=True, framealpha=0.5,ncol=2
, markerscale=2, facecolor="grey")
# set titles for case-study
ax_.annotate(dataset_.replace('_',' '),(0.5,3.5),
annotation_clip=False,ha='center',
xycoords='axes fraction',
fontsize=33)
# plot map
colors_map=['#1f78b4','#e31a1c']
table_tmp=lr_datasets[dataset_]['table_sub'].copy()
sort_meta=lr_datasets[dataset_]['meta_sub'][factor[dataset_]].sort_values()
sorted_df = table_tmp.reindex(index=sort_meta.index, columns=fltmp.index)
sorted_df = sorted_df.loc[:, sorted_df.sum(axis=0) > 10] #make clusters more evident
img = axmap[count_].imshow(clr(centralize(sorted_df+1)), aspect='auto',
norm=MidpointNormalize(midpoint=0.),
interpolation='nearest', cmap='PiYG')
axmap[count_].set_xticks([])
axmap[count_].set_yticks([])
# add color bar
fig.colorbar(img, cax=axmacbar[count_])
axmacbar[count_].tick_params(labelsize=8)
# color map-bars
unique_values = sorted(set(sort_meta.values))
colors_map=list(cmap_out.values())
vmap = { c : i for i, c in enumerate(unique_values) }
mapper = lambda t: vmap[str(t)]
cmap_object = mcolors.LinearSegmentedColormap.from_list('custom', colors_map, N=len(colors_map))
sns.heatmap(pd.DataFrame(sort_meta).applymap(mapper),
cmap=cmap_object,ax=axmapbar[count_],
yticklabels=False,xticklabels=False,cbar=False,alpha=.6)
axmapbar[count_].set_xlabel('')
axmapbar[count_].set_ylabel('')
fig.savefig('figures/figure5.png',dpi=300,
bbox_inches='tight',facecolor='white')
plt.show()
```
```
from ppsim import Simulation, StatePlotter, time_trials
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
# Either this backend or the qt backend is necessary to use the StatePlotter Snapshot object for dynamic visualization while the simulation runs
%matplotlib notebook
```
# 3-state oscillator
The 3-state rock-paper-scissors protocol is a simple set of rules that gives oscillatory dynamics:
```
r, p, s = 'rock', 'paper', 'scissors'
rps = {
(r,s): (r,r),
(p,r): (p,p),
(s,p): (s,s)
}
```
This rule has been studied in many different contexts, such as [evolutionary game theory](https://www.cambridge.org/core/books/evolutionary-games-and-population-dynamics/A8D94EBE6A16837E7CB3CED24E1948F8).
This exact protocol has also been [implemented experimentally using DNA strand displacement](https://science.sciencemag.org/content/358/6369/eaal2052)
<img src="https://science.sciencemag.org/content/sci/358/6369/eaal2052/F1.large.jpg" width="400" />
Let's take a look at what the dynamics do, starting from a uniform initial distribution:
```
def uniform_config(n):
return {r: n // 3, p: n // 3, s: n // 3}
n = 500
sim = Simulation(uniform_config(n), rps)
sim.run()
sim.history.plot()
```
We can see the amplitude of the fluctuations varies until one species dies out. [Several](https://arxiv.org/abs/1001.5235) [papers](https://arxiv.org/abs/q-bio/0605042) have analyzed these dynamics using stochastic differential equations to get analytic estimates for this time to extinction.

One paper considers the more general case of varied reaction rates:

We can simulate that by attaching a probability to each interaction:
```
p_r, p_p, p_s = 0.9, 0.6, 0.3
imbalanced_rps = {
(r,s): {(r,r): p_r},
(p,r): {(p,p): p_p},
(s,p): {(s,s): p_s}
}
n = 1000
sim = Simulation(uniform_config(n), imbalanced_rps)
sim.run()
sim.history.plot()
```
A [population protocols paper](https://hal.inria.fr/hal-01137486/document) gave some rigorous bounds on the behavior of this protocol.
They first showed that it belongs to a wider family of protocols that all become extinct in at most polynomial time:

They also show that for most initial configurations, the state we converge to is equally likely to be any of the three states, making the consensus decision a 'fair die roll':

Taking a look at a larger simulation, we can see that the time to extinction scales with the population size:
```
n = 10 ** 4
sim = Simulation(uniform_config(n), rps)
sim.run()
sim.history.plot()
```
Let's run some trials to get a sense of what this rate of growth might be.
```
ns = [int(n) for n in np.geomspace(50, 10 ** 3, 10)]
df = time_trials(rps, ns, uniform_config, num_trials=1000, max_wallclock_time = 60)
fig, ax = plt.subplots()
sns.lineplot(x='n', y='time', data=df, ax = ax)
```
This suggests that the population becomes silent in linear time. That makes larger population sizes expensive to simulate, so we would need to increase `max_wallclock_time` and be more patient to get good data spanning more orders of magnitude. Looking at the distribution of times for each population size shows that the silence time is fairly heavy-tailed, so there is a lot of variance in how long extinction takes.
```
fig, ax = plt.subplots()
ax = sns.violinplot(x="n", y="time", data=df, palette="muted", scale="count")
```
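One way to put a number on the growth rate suggested above is to fit a line in log-log space; the slope estimates the scaling exponent. Synthetic data stands in for the `time_trials` results here, so the exponent is known to be 1 by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
ns = np.geomspace(50, 10**3, 10)
# Hypothetical silence times scaling linearly in n, with multiplicative noise
times = 2.0 * ns * rng.lognormal(0, 0.1, size=ns.shape)

# The slope of the log-log fit estimates the scaling exponent
slope, intercept = np.polyfit(np.log(ns), np.log(times), 1)
print(round(slope, 2))  # close to 1.0, i.e. linear scaling
```

The same fit applied to the real `df` (e.g. per-`n` median times) would distinguish linear from, say, quadratic scaling, though heavy tails mean many trials are needed for a stable estimate.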
# 7-state oscillator
A 7-state variant of the rock-paper-scissors oscillator was defined in the paper [Universal Protocols for Information Dissemination Using Emergent Signals](https://arxiv.org/abs/1705.09798):
The first state added is a control state `x`. One of the goals of their paper was the detection problem: detecting the presence of `x`, which is possible because the presence or absence of even a single copy affects the global dynamics.
`x` brings the other agent to a random state, which serves to bring the system toward the equilibrium of equal rock/paper/scissors and keeps any states from becoming extinct.
```
x = 'x'
rpsx = {
(r,s): (r,r),
(p,r): (p,p),
(s,p): (s,s),
(x,r): {(x,r):1/3, (x,p): 1/3, (x,s): 1/3},
(x,p): {(x,r):1/3, (x,p): 1/3, (x,s): 1/3},
(x,s): {(x,r):1/3, (x,p): 1/3, (x,s): 1/3},
}
n = 100
# Start with 1 copy of x in an otherwise silent configuration
init_config = {x: 1, r: n - 1}
sim = Simulation(init_config, rpsx)
sim.run(500)
sim.history.plot()
```
The absence of `x` will take a relatively long time to have an effect in large populations, because we saw that the time to become silent is scaling linearly with population size. To speed this up, the 7-state oscillator adds a 'lazy' and 'aggressive' variant of each state, whose dynamics serve to more quickly reach extinction in the absence of `x`.

We can translate this pseudocode into a function that defines the rule:
```
# 7 states, the source, then 'rock', 'paper', 'scissors' in lazy '+' or aggressive '++' variants
states = ['x','0+','0++','1+','1++','2+','2++']
# The protocol is one-way: only the receiver (the second agent) changes state
def seven_state_oscillator(a, b, p):
if p > 0.5:
raise ValueError('p must be at most 0.5.')
# (5) The source converts any receiver into a lazy state of a uniformly random species:
if a == 'x' and b != 'x':
return {(a, str(i) + '+'): 1/3 for i in range(3)}
if b == 'x':
return
# (1) Interaction with an initiator from the same species makes receiver aggressive:
if a[0] == b[0]:
return a, b[0] + '++'
# (2) Interaction with an initiator from a different species makes receiver lazy (case of no attack):
if int(b[0]) == (int(a[0]) + 1) % 3:
return a, b[0] + '+'
# (3) A lazy initiator has a probability p of performing a successful attack on its prey:
if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 2:
return {(a, a[0] + '+'): p, (a, b[0] + '+'): 1-p}
# (4) An aggressive initiator has a probability 2p of performing a successful attack on its prey:
if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 3:
return {(a, a[0] + '+'): 2*p, (a, b[0] + '+'): 1-2*p}
n = 10 ** 3
# Start with 1 copy of x in an otherwise silent configuration
init_config = {x: 1, '0+': n - 1}
sim = Simulation(init_config, seven_state_oscillator, p = 0.1)
```
We can confirm that we got the logic correct by looking at the reachable states and reactions:
```
print(sim.state_list)
print(sim.reactions)
sim.run(100 * int(np.log(n)))
sim.history.plot()
```
Now we can try adding and removing `x` mid-simulation to verify that this starts and stops the oscillations.
```
n = 10 ** 7
# Start with 1 copy of x in an otherwise silent configuration
init_config = {x: 1, '0+': n - 1}
sim = Simulation(init_config, seven_state_oscillator, p = 0.1)
sp = StatePlotter()
sim.add_snapshot(sp)
sp.ax.set_yscale('symlog')
```
If we want to watch the simulation in real time, we need to create this interactive figure before we tell it to run.
```
sim.run(100 * int(np.log(n)))
# Remove the one copy of x
print('removing x')
d = sim.config_dict
d[x] = 0
sim.set_config(d)
sim.run(100 * int(np.log(n)))
# Add back one copy of x
print('adding x')
d = sim.config_dict
d[x] = 1
sim.set_config(d)
sim.run(100 * int(np.log(n)))
sim.history.plot()
```
Notice that the simulator was able to skip right through the silent middle period: once the clock shut down, there were no applicable transitions.
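The skip is possible because, in a configuration where no transition applies, an event-driven simulator has nothing left to schedule. A minimal sketch of that check (the `Simulation` internals are not shown here, so this is an assumed implementation; the rule format matches the `rpsx` dict above):

```python
def has_enabled_transition(config, rule):
    """Return True if some ordered pair of present states has a non-null transition."""
    present = [s for s, count in config.items() if count > 0]
    for a in present:
        for b in present:
            if a == b and config[a] < 2:
                continue  # an agent cannot interact with itself
            out = rule.get((a, b))
            if out is not None and out != (a, b):
                return True
    return False

# Rock-paper-scissors rule, written with string states for this sketch.
rps = {('r', 's'): ('r', 'r'), ('p', 'r'): ('p', 'p'), ('s', 'p'): ('s', 's')}
print(has_enabled_transition({'r': 100}, rps))          # all-rock is silent
print(has_enabled_transition({'r': 50, 's': 50}, rps))  # rock can convert scissors
```

When this check returns `False`, the simulator can jump its clock forward for free, which is why the silent middle stretch above costs essentially nothing.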
# Basis for a Phase Clock
This oscillator was used in the paper [Population Protocols are Fast](https://arxiv.org/abs/1802.06872) as the basis for a constant-state phase clock. For that, we need to create a small count of the signal `x`, and to be sure the clock keeps running, the count of `x` must stay positive. The simplest way to do this is to start with the entire population in state `x` and additionally allow multiple copies of `x` to eliminate each other. This only requires changing one line of our protocol:
```
def seven_state_oscillator_leader_election(a, b, p):
if p > 0.5:
raise ValueError('p must be at most 0.5.')
# (5) The source converts any receiver into a lazy state of a uniformly random species:
# Now this also applies to the input pair (x, x), so state x performs a simple leader election, eventually getting down to one copy
if a == 'x':
return {(a, str(i) + '+'): 1/3 for i in range(3)}
if b == 'x':
return
# (1) Interaction with an initiator from the same species makes receiver aggressive:
if a[0] == b[0]:
return a, b[0] + '++'
# (2) Interaction with an initiator from a different species makes receiver lazy (case of no attack):
if int(b[0]) == (int(a[0]) + 1) % 3:
return a, b[0] + '+'
# (3) A lazy initiator has a probability p of performing a successful attack on its prey:
if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 2:
return {(a, a[0] + '+'): p, (a, b[0] + '+'): 1-p}
# (4) An aggressive initiator has a probability 2p of performing a successful attack on its prey:
if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 3:
return {(a, a[0] + '+'): 2*p, (a, b[0] + '+'): 1-2*p}
```
Once the count of `x` gets down to $O(n^{1-\epsilon})$, we should start to see oscillations, which have a period of $O(\log n)$.
```
n = 10 ** 7
init_config = {x: n}
sim = Simulation(init_config, seven_state_oscillator_leader_election, p = 0.1)
sp = StatePlotter()
sim.add_snapshot(sp)
sp.ax.set_yscale('symlog')
sim.run(100 * int(np.log(n)))
sim.history.plot()
```
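If we wanted to check the $O(\log n)$ period empirically, we could measure the peak-to-peak spacing in one species' count over time. A sketch on a synthetic oscillation (the sine wave and its period are stand-ins; on real data, one column of `sim.history` would take the place of `signal`):

```python
import numpy as np

# Synthetic stand-in for one column of sim.history: an oscillation of known period.
period = 16.0
t = np.arange(200)
signal = np.sin(2 * np.pi * t / period)

# Local maxima: strictly greater than both neighbors.
interior = np.arange(1, len(signal) - 1)
peaks = interior[(signal[interior] > signal[interior - 1]) &
                 (signal[interior] > signal[interior + 1])]
spacings = np.diff(peaks)
print(spacings.mean())  # mean spacing recovers the period
```

Repeating this for several values of `n` and plotting the measured period against `np.log(n)` would test the logarithmic scaling directly.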
```
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import os
import pandas as pd
from gPhoton import galextools as gt
plt.rcParams.update({'font.size': 18})
# Import the function definitions that accompany this notebook tutorial.
nb_funcdef_file = "function_defs.py"
if os.path.isfile(nb_funcdef_file):
from function_defs import listdir_contains, read_lightcurve, refine_flare_ranges, calculate_flare_energy
from function_defs import is_left_censored, is_right_censored, is_peak_censored, peak_flux, peak_time
else:
raise IOError("Could not find function definition file '" + nb_funcdef_file + "' that goes with this notebook.")
# Restore the output directory. Note: this assumes you've run the "generate_products" notebook already. If not you
# will need to specify the location of the products made from the "generate_products" notebook.
%store -r data_directory
# If you have not run the "generate_products" notebook during this session, uncomment the line below and specify
# the location of the output products.
data_directory = "./raw_files/"
# Restore the distance parameter. Note: this assumes you've run the "generate_products" notebook already. If not you
# will need to specify the distance to use.
%store -r distance
# If you have not run the "generate_products" notebook during this session, uncomment the line below and specify
# the distance to the system in parsecs.
distance = 1/(372.1631/1000) # parsecs
# Locate the photon files.
photon_files = {'NUV':listdir_contains(data_directory,'nd-30s.csv'),
'FUV':listdir_contains(data_directory,'fd-30s.csv')}
def get_flareranges_byhand(flare_num, orig_flare_ranges):
"""
In this notebook, we are going to break up flare events into individual flares based on the presence of peaks.
This is an alternative to our algorithm that defines a single flare event based on a return of the flux to the
INFF value. Instead, we break them up into components to consider the scenario where a complex flare morphology
is composed of multiple individual flares instead of a single flare with a complex shape.
The orig_flare_ranges is the flare range as defined by the original algorithm, in case there is a single peak
and we don't need to modify it at all.
If we modify the flare range by hand, then we need to turn off the extra checking about 3-sigma passes, since
if we split it into components those components won't necessarily pass those checks done by the original
algorithm used to define the flare range in the first place.
"""
modded = False
if flare_num==0:
flare_ranges = orig_flare_ranges
elif flare_num==1:
flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46],
[47, 48, 49, 50, 51, 52, 53, 54, 55, 56]]
modded=True
elif flare_num==2:
flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[42, 43, 44, 45, 46, 47, 48, 49], [50, 51, 52, 53, 54, 55, 56]]
modded=True
elif flare_num==3:
flare_ranges = orig_flare_ranges
elif flare_num==4:
flare_ranges = orig_flare_ranges
elif flare_num==5:
flare_ranges = orig_flare_ranges
elif flare_num==6:
flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[30, 31, 32, 33, 34, 35],
[37, 38, 39, 40, 41, 42, 43, 44]]
modded=True
elif flare_num==7:
flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[36, 37, 38, 39, 40], [41, 42, 43],
[44, 45, 46, 47, 48, 49, 50, 51], [52, 53, 54, 55, 56]]
modded=True
elif flare_num==8:
flare_ranges = [[5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]]
else:
raise ValueError("More flares than expected passed to function.")
return (flare_ranges, modded)
# Also creates the figures used in the Appendix.
flare_table = pd.DataFrame()
n_visits = 9 # The last visit does not have a real flare in it that satisfies our basic criteria.
for i in np.arange(n_visits):
lc_nuv = read_lightcurve(photon_files['NUV'][i])
lc_fuv = read_lightcurve(photon_files['FUV'][i])
(flare_ranges, quiescence, quiescence_err) = refine_flare_ranges(lc_nuv, makeplot=False)
# Search for FUV flares, but use the NUV flare ranges rather than searching for new flare ranges
# based on the FUV light curve (a good choice for GJ 65, but not necessarily the case in general).
(flare_ranges_fuv, quiescence_fuv, quiescence_err_fuv) = refine_flare_ranges(lc_fuv, makeplot=False,
flare_ranges=flare_ranges)
# Override the algorithmic flare ranges by-hand instead, in this notebook.
(flare_ranges, modded) = get_flareranges_byhand(i, flare_ranges)
fig = plt.figure(figsize=(10, 8), constrained_layout=False)
gs = fig.add_gridspec(2, len(flare_ranges), height_ratios=[1,2], hspace=0.4)
ax1 = fig.add_subplot(gs[0,:])
# Ignore any data points that have bad time bins. NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.
where_plot_fuv_full = np.where(lc_fuv['expt'] >= 20.0)[0]
where_plot_nuv_full = np.where(lc_nuv['expt'] >= 20.0)[0]
ax1.set_title('Visit #{i} - Full Light Curve'.format(i=i+1))
ax1.errorbar((lc_fuv['t0']-min(lc_nuv['t0']))[where_plot_fuv_full], lc_fuv['flux'][where_plot_fuv_full],
yerr=1.*lc_fuv['flux_err'][where_plot_fuv_full], fmt='-bo')
ax1.errorbar((lc_nuv['t0']-min(lc_nuv['t0']))[where_plot_nuv_full], lc_nuv['flux'][where_plot_nuv_full],
yerr=1.*lc_nuv['flux_err'][where_plot_nuv_full], fmt='-ko')
ylim = [min([min((lc_nuv['flux']-4*lc_nuv['flux_err'])[where_plot_nuv_full]),
min((lc_fuv['flux']-4*lc_fuv['flux_err'])[where_plot_fuv_full])]),
max([max((lc_nuv['flux']+4*lc_nuv['flux_err'])[where_plot_nuv_full]),
max((lc_fuv['flux']+4*lc_fuv['flux_err'])[where_plot_fuv_full])])]
n_found = 0
for flare_range in flare_ranges:
nuv_3sig = np.array(flare_range)[np.where((np.array(lc_nuv['cps'].iloc[flare_range].values)-
3*np.array(lc_nuv['cps_err'].iloc[flare_range].values) >= quiescence))[0]].tolist()
fuv_3sig = np.array(flare_range)[np.where((np.array(lc_fuv['cps'].iloc[flare_range].values)-
3*np.array(lc_fuv['cps_err'].iloc[flare_range].values) >= quiescence_fuv))[0]].tolist()
# Check that flux is simultaneously >3-sigma above quiescence in both bands (dual-band detection criteria),
# or there are at least TWO NUV fluxes at >3-sigma above quiescence (single-band detection criteria).
if not modded:
real = (any(set(nuv_3sig) & set(fuv_3sig)) or len(nuv_3sig)>1) # force detection conditions
else:
real = True
if not real:
continue
# Add a panel for the zoom-in view. No visit has more than three real flares in it.
n_found += 1
subax = fig.add_subplot(gs[1,n_found-1])
flare_data = {'visit_num':i,'flare_num':len(flare_table)+1,'duration':len(flare_range)*30}
# We pass the NUV quiescence values because we want to use the flare ranges found in the
# NUV flare search for *both* NUV and FUV. So we do NOT pass a quiescence parameter when
# calling the FUV energy calculation.
energy_nuv = calculate_flare_energy(lc_nuv, flare_range, distance, binsize=30, band='NUV',
quiescence=[quiescence, quiescence_err])
energy_fuv = calculate_flare_energy(lc_fuv, flare_range, distance, binsize=30, band='FUV')
nuv_sn = max(((np.array(lc_nuv['cps'].iloc[flare_range].values) -
3*np.array(lc_nuv['cps_err'].iloc[flare_range].values)) / quiescence))
flare_data['energy_nuv'] = energy_nuv[0]
flare_data['energy_err_nuv'] = energy_nuv[1]
flare_data['energy_fuv'] = energy_fuv[0]
flare_data['energy_err_fuv'] = energy_fuv[1]
flare_data['nuv_sn'] = nuv_sn
# If the flare is detected because it has at least one FUV and one NUV at the same time
# above 3*INFF, this will be True.
flare_data['detmeth_nf'] = any(set(nuv_3sig) & set(fuv_3sig))
# If the flare is detected because it has at least two NUV fluxes that are both
# above 3*INFF, this will be True.
flare_data['detmeth_nn'] = len(nuv_3sig) > 1
flare_data['left_censored'] = is_left_censored(flare_range)
flare_data['right_censored'] = is_right_censored(lc_nuv,flare_range)
flare_data['peak_flux_nuv'] = peak_flux(lc_nuv,flare_range)
flare_data['peak_t0_nuv'] = peak_time(lc_nuv,flare_range)
flare_data['peak_censored'] = is_peak_censored(lc_nuv,flare_range)
flare_data['peak_flux_fuv'] = peak_flux(lc_fuv,flare_range)
flare_data['peak_t0_fuv'] = peak_time(lc_fuv,flare_range)
flare_data['quiescence_nuv'] = quiescence
flare_data['quiescence_err_nuv'] = quiescence_err
flare_data['quiescence_fuv'] = quiescence_fuv
flare_data['quiescence_err_fuv'] = quiescence_err_fuv
flare_data['flare_range'] = flare_range
flare_table = flare_table.append(flare_data,ignore_index=True)
# Make plots
commentstr = 'Truncation: '
if flare_data['left_censored']:
commentstr += 'Left;'
if flare_data['right_censored']:
commentstr += 'Right;'
if flare_data['peak_censored']:
commentstr += 'Peak;'
detectstr = 'Detection: '
if flare_data['detmeth_nf']:
detectstr += 'FUV+NUV;'
if flare_data['detmeth_nn']:
detectstr += 'Multi NUV;'
# Added too much buffer in x-direction, use x-labels to identify what part of a visit
# this flare comes from.
t_buffer = 30.
# Ignore any data points that have bad time bins. NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.
where_plot_fuv = list(set(list(np.where(lc_fuv['expt'] >= 20.0)[0])).intersection(flare_range))
where_plot_nuv = list(set(list(np.where(lc_nuv['expt'] >= 20.0)[0])).intersection(flare_range))
where_plot_fuv.sort()
where_plot_nuv.sort()
subax.errorbar((lc_fuv['t0'].iloc[where_plot_fuv]-min(lc_nuv['t0'])),
lc_fuv['flux'].iloc[where_plot_fuv],
yerr=1.*lc_fuv['flux_err'].iloc[where_plot_fuv], fmt='bo-', label="FUV")
subax.errorbar((lc_nuv['t0']-min(lc_nuv['t0'])).iloc[where_plot_nuv],
lc_nuv['flux'].iloc[where_plot_nuv],
yerr=1.*lc_nuv['flux_err'].iloc[where_plot_nuv], fmt='ko-', label="NUV")
subax.set_xlim([lc_nuv['t0'].iloc[flare_range].min()-min(lc_nuv['t0'])-t_buffer,
lc_nuv['t1'].iloc[flare_range].max()-min(lc_nuv['t0'])+t_buffer])
subax.set_ylim([min((lc_nuv['flux']-4*lc_nuv['flux_err']).iloc[flare_range].min(),
(lc_fuv['flux']-4*lc_fuv['flux_err']).iloc[flare_range].min()),
max((lc_nuv['flux']+4*lc_nuv['flux_err']).iloc[flare_range].max(),
(lc_fuv['flux']+4*lc_fuv['flux_err']).iloc[flare_range].max())])
subax.hlines(gt.counts2flux(quiescence,'NUV'), lc_nuv['t0'].min()-min(lc_nuv['t0']),
lc_nuv['t0'].max()-min(lc_nuv['t0']), label='NUV quiescence',linestyles='dashed',color='k')
vlinecolor = 'black'
if n_found == 2:
vlinecolor = "dimgrey"
ax1.vlines(lc_nuv['t0'].iloc[flare_range].min()-min(lc_nuv['t0']), -999, 999, color=vlinecolor)
ax1.vlines(lc_nuv['t0'].iloc[flare_range].max()-min(lc_nuv['t0']), -999, 999, color=vlinecolor)
ax1.text((lc_nuv['t0'].iloc[flare_range].max() - lc_nuv['t0'].iloc[flare_range].min())/2.-min(lc_nuv['t0']) +
lc_nuv['t0'].iloc[flare_range].min(), ylim[1]*0.95,
"Flare #{m}".format(m=len(flare_table)), color=vlinecolor)
subax.set_title('Flare #{m}'.format(m=len(flare_table)))
subax.legend(ncol=2, fontsize=8)
ax1.set_ylim(ylim)
ax1.set_xlabel('Seconds (from start of visit)')
ax1.set_ylabel('Flux (erg/s/cm^2)')
fig.savefig('figures/visit_{i}_byhand.eps'.format(i=i+1), dpi=600)
plt.close(fig)
# Creates the figure used in the main part of the paper.
visit_arr = [0,1,2,3,4]
fig, axs = plt.subplots(len(visit_arr), 1, figsize=(10, 8), constrained_layout=True)
for ii,i in enumerate(visit_arr):
lc_nuv = read_lightcurve(photon_files['NUV'][i])
lc_fuv = read_lightcurve(photon_files['FUV'][i])
# Ignore any data points that have bad time bins. NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.
where_plot_fuv_full = np.where(lc_fuv['expt'] >= 20.0)[0]
where_plot_nuv_full = np.where(lc_nuv['expt'] >= 20.0)[0]
axs[ii].errorbar((lc_fuv['t0']-min(lc_nuv['t0']))[where_plot_fuv_full], lc_fuv['flux'][where_plot_fuv_full],
yerr=1.*lc_fuv['flux_err'][where_plot_fuv_full], fmt='-bo')
axs[ii].errorbar((lc_nuv['t0']-min(lc_nuv['t0']))[where_plot_nuv_full], lc_nuv['flux'][where_plot_nuv_full],
yerr=1.*lc_nuv['flux_err'][where_plot_nuv_full], fmt='-ko')
ylim = [min([min((lc_nuv['flux']-4*lc_nuv['flux_err'])[where_plot_nuv_full]),
min((lc_fuv['flux']-4*lc_fuv['flux_err'])[where_plot_fuv_full])]),
max([max((lc_nuv['flux']+4*lc_nuv['flux_err'])[where_plot_nuv_full]),
max((lc_fuv['flux']+4*lc_fuv['flux_err'])[where_plot_fuv_full])])]
axs[ii].set_ylim(ylim)
# Create shade regions for each flare in this visit.
for fr, vn in zip(flare_table['flare_range'], flare_table['visit_num']):
if int(vn) == i:
mint = (lc_nuv['t0']-lc_nuv['t0'][0])[min(fr)]
maxt = (lc_nuv['t0']-lc_nuv['t0'][0])[max(fr)]
axs[ii].fill([mint, maxt, maxt, mint], [-999, -999, 999, 999], '0.9')
fig.text(0.5, -0.02, 'Seconds (from start of visit)', ha='center', va='center')
fig.text(-0.02, 0.5, 'Flux (erg/s/cm^2/Angstrom)', ha='center', va='center', rotation='vertical')
fig.savefig('figures/all_visits_01_byhand.eps', dpi=600)
visit_arr = [5,6,7,8]
fig, axs = plt.subplots(len(visit_arr), 1, figsize=(10, 8), constrained_layout=True)
for ii,i in enumerate(visit_arr):
lc_nuv = read_lightcurve(photon_files['NUV'][i])
lc_fuv = read_lightcurve(photon_files['FUV'][i])
# Ignore any data points that have bad time bins. NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.
where_plot_fuv_full = np.where(lc_fuv['expt'] >= 20.0)[0]
where_plot_nuv_full = np.where(lc_nuv['expt'] >= 20.0)[0]
axs[ii].errorbar((lc_fuv['t0']-min(lc_nuv['t0']))[where_plot_fuv_full], lc_fuv['flux'][where_plot_fuv_full],
yerr=1.*lc_fuv['flux_err'][where_plot_fuv_full], fmt='-bo')
axs[ii].errorbar((lc_nuv['t0']-min(lc_nuv['t0']))[where_plot_nuv_full], lc_nuv['flux'][where_plot_nuv_full],
yerr=1.*lc_nuv['flux_err'][where_plot_nuv_full], fmt='-ko')
ylim = [min([min((lc_nuv['flux']-4*lc_nuv['flux_err'])[where_plot_nuv_full]),
min((lc_fuv['flux']-4*lc_fuv['flux_err'])[where_plot_fuv_full])]),
max([max((lc_nuv['flux']+4*lc_nuv['flux_err'])[where_plot_nuv_full]),
max((lc_fuv['flux']+4*lc_fuv['flux_err'])[where_plot_fuv_full])])]
axs[ii].set_ylim(ylim)
# Create shade regions for each flare in this visit.
for fr, vn in zip(flare_table['flare_range'], flare_table['visit_num']):
if int(vn) == i:
mint = (lc_nuv['t0']-lc_nuv['t0'][0])[min(fr)]
maxt = (lc_nuv['t0']-lc_nuv['t0'][0])[max(fr)]
axs[ii].fill([mint, maxt, maxt, mint], [-999, -999, 999, 999], '0.9')
fig.text(0.5, -0.02, 'Seconds (from start of visit)', ha='center', va='center')
fig.text(-0.02, 0.5, 'Flux (erg/s/cm^2/Angstrom)', ha='center', va='center', rotation='vertical')
fig.savefig('figures/all_visits_02_byhand.eps', dpi=600)
# Make the table of flare properties in the paper.
# Reformat the energy measurements to include error bars and reasonable sigfigs
nuv_energy_string, fuv_energy_string = [], []
for e, e_err in zip(np.array(np.log10(flare_table['energy_nuv']), dtype='float16'),
np.array(np.log10(flare_table['energy_err_nuv']), dtype='float16')):
nuv_energy_string += ['{:4.2f} pm {:4.2f}'.format(e, e_err)]
for e, e_err in zip(np.array(np.log10(flare_table['energy_fuv']), dtype='float16'),
np.array(np.log10(flare_table['energy_err_fuv']), dtype='float16')):
fuv_energy_string += ['{:4.2f} pm {:4.2f}'.format(e, e_err)]
# Reformat the peak timestamps
t = pd.to_datetime(flare_table['peak_t0_nuv'] + gt.GPSSECS, unit='s')
t.iloc[np.where(np.array(flare_table['peak_censored'], dtype='bool'))] = 'peak not measured'
# Convert key columns in the flare table into a format suitable for printing in a LaTeX table
summary_table = pd.DataFrame({
'Flare':np.array(flare_table['flare_num'],dtype='int16'),
'Visit':np.array(flare_table['visit_num'],dtype='int16')+1,
'NUV Peak Time (UTC)':t,
'Duration':np.array(flare_table['duration']/60.,dtype='float16'),
'log(E_NUV)*':nuv_energy_string,
'log(E_FUV)':fuv_energy_string,
'NUV Strength':flare_table['nuv_sn'],
})
print(summary_table.to_latex(index=False))
```
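The detection criteria applied inside the loop above (a simultaneous >3-sigma bin in both bands, or at least two NUV bins above 3-sigma) can be illustrated in isolation; the bin indices here are hypothetical:

```python
def is_real_flare(nuv_3sig, fuv_3sig):
    """Apply the two detection criteria to lists of >3-sigma 30-s bin indices."""
    dual_band = bool(set(nuv_3sig) & set(fuv_3sig))  # same bin bright in NUV and FUV
    multi_nuv = len(nuv_3sig) > 1                    # at least two bright NUV bins
    return dual_band or multi_nuv

print(is_real_flare([12], [12]))    # coincident bin in both bands
print(is_real_flare([12, 13], []))  # two NUV bins, no FUV detection
print(is_real_flare([12], [40]))    # one NUV bin, no coincidence
```

Note that when the flare ranges are modified by hand (`modded=True`), the notebook deliberately bypasses this check, since hand-split components need not pass it individually.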
### 2.2 CNN Models - Test Cases
The trained CNN model was evaluated on a hold-out test set of 10,873 images.
The network obtained AUC-PRC values of 0.743 and 0.997 on the hold-out test set for cored plaques and diffuse plaques, respectively.
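AUC-PRC here is the area under the precision-recall curve, computed per class. A minimal sketch of how such a number is produced with scikit-learn (toy labels and scores, not the notebook's data; a perfect ranking yields an area of 1.0):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Toy binary labels and scores standing in for one plaque class.
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.2, 0.8, 0.9])

precision, recall, _ = precision_recall_curve(y_true, y_score)
area = auc(recall, precision)
print(round(area, 3))
```

The `auc_prc` helper defined later in this notebook does exactly this, looping over the three class columns of the prediction matrix.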
```
import time, os
import torch
torch.manual_seed(42)
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
from torchvision import transforms
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
CSV_DIR = 'data/CSVs/test.csv'
MODEL_DIR = 'models/CNN_model_parameters.pkl'
IMG_DIR = 'data/tiles/hold-out/'
NEGATIVE_DIR = 'data/seg/negatives/'
SAVE_DIR = 'data/outputs/'
if not os.path.exists(SAVE_DIR):
os.makedirs(SAVE_DIR)
batch_size = 32
num_workers = 8
norm = np.load('utils/normalization.npy', allow_pickle=True).item()
from torch.utils.data import Dataset
from PIL import Image
class MultilabelDataset(Dataset):
def __init__(self, csv_path, img_path, transform=None):
"""
Args:
csv_path (string): path to csv file
img_path (string): path to the folder where images are
transform: pytorch transforms for transforms and tensor conversion
"""
self.data_info = pd.read_csv(csv_path)
self.img_path = img_path
self.transform = transform
c=torch.Tensor(self.data_info.loc[:,'cored'])
d=torch.Tensor(self.data_info.loc[:,'diffuse'])
a=torch.Tensor(self.data_info.loc[:,'CAA'])
c=c.view(c.shape[0],1)
d=d.view(d.shape[0],1)
a=a.view(a.shape[0],1)
self.raw_labels = torch.cat([c,d,a], dim=1)
self.labels = (torch.cat([c,d,a], dim=1)>0.99).type(torch.FloatTensor)
def __getitem__(self, index):
# Get label(class) of the image based on the cropped pandas column
single_image_label = self.labels[index]
raw_label = self.raw_labels[index]
# Get image name from the pandas df
single_image_name = str(self.data_info.loc[index,'imagename'])
# Open image
try:
img_as_img = Image.open(self.img_path + single_image_name)
except OSError: # image not found in IMG_DIR; fall back to the negatives directory
img_as_img = Image.open(NEGATIVE_DIR + single_image_name)
# Transform image to tensor
if self.transform is not None:
img_as_img = self.transform(img_as_img)
# Return image and the label
return (img_as_img, single_image_label, raw_label, single_image_name)
def __len__(self):
return len(self.data_info.index)
data_transforms = {
'test' : transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(norm['mean'], norm['std'])
])
}
image_datasets = {'test': MultilabelDataset(CSV_DIR, IMG_DIR,
data_transforms['test'])}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x],
batch_size=batch_size,
shuffle=False,
num_workers=num_workers)
for x in ['test']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['test']}
image_classes = ['cored','diffuse','CAA']
use_gpu = torch.cuda.is_available()
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array(norm['mean'])
std = np.array(norm['std'])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.figure()
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
# Get a batch of training data
inputs, labels, raw_labels, names = next(iter(dataloaders['test']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out)
class Net(nn.Module):
def __init__(self, fc_nodes=512, num_classes=3, dropout=0.5):
super(Net, self).__init__()
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
def dev_model(model, criterion, phase='test', gpu_id=None):
phase = phase
since = time.time()
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size,
shuffle=False, num_workers=num_workers)
for x in [phase]}
model.train(False)
running_loss = 0.0
running_corrects = torch.zeros(len(image_classes))
running_preds = torch.Tensor(0)
running_predictions = torch.Tensor(0)
running_labels = torch.Tensor(0)
running_raw_labels = torch.Tensor(0)
# Iterate over data.
step = 0
for data in dataloaders[phase]:
step += 1
# get the inputs
inputs, labels, raw_labels, names = data
running_labels = torch.cat([running_labels, labels])
running_raw_labels = torch.cat([running_raw_labels, raw_labels])
# wrap them in Variable
if use_gpu:
inputs = Variable(inputs.cuda(gpu_id))
labels = Variable(labels.cuda(gpu_id))
else:
inputs, labels = Variable(inputs), Variable(labels)
# forward
outputs = model(inputs)
preds = torch.sigmoid(outputs) # probability for each class
#print(preds)
if use_gpu:
predictions = (preds>0.5).type(torch.cuda.FloatTensor)
else:
predictions = (preds>0.5).type(torch.FloatTensor)
loss = criterion(outputs, labels)
preds = preds.data.cpu()
predictions = predictions.data.cpu()
labels = labels.data.cpu()
# statistics
running_loss += loss.item()
running_corrects += torch.sum(predictions==labels, 0).type(torch.FloatTensor)
running_preds = torch.cat([running_preds, preds])
running_predictions = torch.cat([running_predictions, predictions])
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects / dataset_sizes[phase]
print('{} Loss: {:.4f}\n Cored: {:.4f} Diffuse: {:.4f} CAA: {:.4f}'.format(
phase, epoch_loss, epoch_acc[0], epoch_acc[1], epoch_acc[2]))
print()
time_elapsed = time.time() - since
print('Prediction complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
return epoch_acc, running_preds, running_predictions, running_labels
from sklearn.metrics import roc_curve, auc, precision_recall_curve
def plot_roc(preds, label, image_classes, size=20, path=None):
colors = ['pink','c','deeppink', 'b', 'g', 'm', 'y', 'r', 'k']
fig = plt.figure(figsize=(1.2*size, size))
ax = plt.axes()
for i in range(preds.shape[1]):
fpr, tpr, _ = roc_curve(label[:,i].ravel(), preds[:,i].ravel())
lw = 0.2*size
# Plot all ROC curves
ax.plot([0, 1], [0, 1], 'k--', lw=lw, label='random')
ax.plot(fpr, tpr,
label='ROC-curve of {}'.format(image_classes[i])+ '( area = {0:0.3f})'
''.format(auc(fpr, tpr)),
color=colors[(i+preds.shape[1])%len(colors)], linewidth=lw)
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.05])
ax.set_xlabel('False Positive Rate', fontsize=1.8*size)
ax.set_ylabel('True Positive Rate', fontsize=1.8*size)
ax.set_title('Receiver operating characteristic Curve', fontsize=1.8*size, y=1.01)
ax.legend(loc=0, fontsize=1.5*size)
ax.xaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)
ax.yaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)
if path != None:
fig.savefig(path)
# plt.close(fig)
print('saved')
def plot_prc(preds, label, image_classes, size=20, path=None):
colors = ['pink','c','deeppink', 'b', 'g', 'm', 'y', 'r', 'k']
fig = plt.figure(figsize=(1.2*size,size))
ax = plt.axes()
for i in range(preds.shape[1]):
rp = (label[:,i]>0).sum()/len(label)
precision, recall, _ = precision_recall_curve(label[:,i].ravel(), preds[:,i].ravel())
lw=0.2*size
ax.plot(recall, precision,
label='PR-curve of {}'.format(image_classes[i])+ '( area = {0:0.3f})'
''.format(auc(recall, precision)),
color=colors[(i+preds.shape[1])%len(colors)], linewidth=lw)
ax.plot([0, 1], [rp, rp], '--', color=colors[(i+preds.shape[1])%len(colors)], lw=lw, label='random')
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.05])
ax.set_xlabel('Recall', fontsize=1.8*size)
ax.set_ylabel('Precision', fontsize=1.8*size)
ax.set_title('Precision-Recall curve', fontsize=1.8*size, y=1.01)
ax.legend(loc="lower left", bbox_to_anchor=(0.01, 0.1), fontsize=1.5*size)
ax.xaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)
ax.yaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)
if path != None:
fig.savefig(path)
# plt.close(fig)
print('saved')
def auc_roc(preds, label):
aucroc = []
for i in range(preds.shape[1]):
fpr, tpr, _ = roc_curve(label[:,i].ravel(), preds[:,i].ravel())
aucroc.append(auc(fpr, tpr))
return aucroc
def auc_prc(preds, label):
aucprc = []
for i in range(preds.shape[1]):
precision, recall, _ = precision_recall_curve(label[:,i].ravel(), preds[:,i].ravel())
aucprc.append(auc(recall, precision))
return aucprc
criterion = nn.MultiLabelSoftMarginLoss(size_average=False)
model = torch.load(MODEL_DIR, map_location=lambda storage, loc: storage)
if use_gpu:
model = model.module.cuda()
# take 10s running on single GPU
try:
acc, pred, prediction, target = dev_model(model.module, criterion, phase='test', gpu_id=None)
except AttributeError: # model was saved without a DataParallel wrapper
acc, pred, prediction, target = dev_model(model, criterion, phase='test', gpu_id=None)
label = target.numpy()
preds = pred.numpy()
output = {}
for i in range(3):
fpr, tpr, _ = roc_curve(label[:,i].ravel(), preds[:,i].ravel())
precision, recall, _ = precision_recall_curve(label[:,i].ravel(), preds[:,i].ravel())
output['{} fpr'.format(image_classes[i])] = fpr
output['{} tpr'.format(image_classes[i])] = tpr
output['{} precision'.format(image_classes[i])] = precision
output['{} recall'.format(image_classes[i])] = recall
outcsv = pd.DataFrame(dict([ (k,pd.Series(v)) for k,v in output.items() ]))
outcsv.to_csv(SAVE_DIR+'CNN_test_output.csv', index=False)
plot_roc(pred.numpy(), target.numpy(), image_classes, size=30)
plot_prc(pred.numpy(), target.numpy(), image_classes, size=30)
```
TSG023 - Get all BDC objects (Kubernetes)
=========================================
Description
-----------
Get a summary of all Kubernetes resources for the system namespace and
the Big Data Cluster namespace
Steps
-----
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary is None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary is None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which cause Jupyter to hang forever; to
# work around this, use no_output=True
#
# Work around an infinite hang when a notebook generates a non-zero return code: break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)": "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp; don't
# print this empty "STDERR:" line as it is confusing.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("tsg023-run-kubectl-get-all.ipynb")
except:
pass # If the user has renamed the notebook, we can't load ourselves. NOTE: Is there a way in Jupyter to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' is satisfied, run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{3}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
```
### Run kubectl get all for the system namespace
```
run("kubectl get all")
```
### Get the Kubernetes namespace for the big data cluster
Get the namespace of the Big Data Cluster using the kubectl command line
interface.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
```
### Run kubectl get all for the Big Data Cluster namespace
```
run(f"kubectl get all -n {namespace}")
print('Notebook execution complete.')
```
# HMDA Data -- Regression Modeling
## Using ML with *scikit-learn* for modeling -- (02) Logistic Regression
This notebook explores the Home Mortgage Disclosure Act (HMDA) data for one year -- 2015. We use concepts and tools from our own research and further readings to create a machine learning logistic regression model, along with Naive Bayes classifiers, to predict loan approval rates.
*Note that as of July 12, 2019, HMDA data is publicly available for 2007 - 2017.*
https://www.consumerfinance.gov/data-research/hmda/explore
--
**Documentation:**
(1) See below in '02'
*There are many learning sources and prior work around similar topics: we draw inspiration from past Cohorts as well as learning materials from peer sources such as Kaggle and Towards Data Science.*
---
## Importing Libraries and Loading the Data
First, we need to import all the libraries we are going to utilize throughout this notebook. We import everything at the very top of this notebook for order and best practice.
```
# Importing Libraries.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import math
import os
import psycopg2
import pandas.io.sql as psql
import sqlalchemy
from sqlalchemy import create_engine
from sklearn import preprocessing
from sklearn import model_selection
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from scipy import stats
from pylab import*
from matplotlib.ticker import LogLocator
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
*-----*
Second, we establish the connection to the AWS PostgreSQL Relational Database Service (RDS).
```
# Postgres (username, password, and database name) -- we define variables and build a connection string to create an engine.
postgres_host = 'aws-pgsql-loan-canoe.cr3nrpkvgwaj.us-east-2.rds.amazonaws.com'
postgres_port = '5432'
postgres_username = 'reporting_user'
postgres_password = 'team_loan_canoe2019'
postgres_dbname = "paddle_loan_canoe"
postgres_str = ('postgresql://{username}:{password}@{host}:{port}/{dbname}'
.format(username = postgres_username,
password = postgres_password,
host = postgres_host,
port = postgres_port,
dbname = postgres_dbname)
)
# Creating the connection.
cnx = create_engine(postgres_str)
```
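Before moving on, it helps to recall what the logistic regression model we are about to build actually computes: a linear score over the features, passed through the sigmoid to produce an approval probability. The sketch below is a minimal pure-Python illustration with an invented single feature (loan-to-income ratio) and hand-picked weights, not the fitted HMDA model.

```python
import math

def sigmoid(z):
    # Map a real-valued linear score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def predict_approval(loan_to_income, weight=-2.0, bias=1.0):
    # Linear score passed through the sigmoid -- the core of logistic regression.
    # The weight and bias are illustrative placeholders, not fitted values.
    return sigmoid(weight * loan_to_income + bias)

# A lower loan-to-income ratio yields a higher approval probability.
print(predict_approval(0.2))  # ~0.65
print(predict_approval(2.0))  # ~0.05
```

The fitted model we train later does the same thing, only with many features and coefficients estimated from the data.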
# Chapter 10: Tuples
### Tuples are immutable
A tuple is a sequence of values much like a list. The values stored in a tuple can be **any type**, and they are **indexed by integers**.
```
tp1 = 'a', 'b', 'c', 'd', 'e'
type(tp1)
tp2 = ('a', 'b', 'c', 'd', 'e')
type(tp2)
tp1 is tp2
# Without the comma Python treats ('a') as an expression with a string in parentheses that evaluates to a string:
st = ('Amin')
type(st)
# empty tuple
empty_tuple = tuple()
empty_tuple
dir(tp1)
tp1.count('a')
help(tp1.index)
# If the argument is a sequence (string, list, or tuple),
# the result of the call to tuple is a tuple with the elements of the sequence:
new_tuple = tuple('Hello World!')
new_tuple
new_tuple.count('l')
new_tuple.index('l',4)
new_tuple[0] = 'Amin'  # raises TypeError: 'tuple' object does not support item assignment
new_tuple = 'new string is assigned'
print(new_tuple)
print('type is: ', type(new_tuple))
new_tuple = tuple('new string is assigned')
print(new_tuple)
print('type: ', type(new_tuple))
```
## Comparing tuples
The comparison operators work with tuples and other sequences. Python starts by comparing the first element from each sequence. If they are equal, it goes on to the next element, and so on, until it finds elements that differ. Subsequent elements are not considered (even if they are really big).
```
print((0, 1, 2) < (0, 3, 4))
print((0, 5, 0) < (0, 3, 4))
print((0, 1, 2000000) < (0, 1))
```
### IMPORTANT POINT
```
### IMPORTANT POINT
m = ('Amin', 'Oroji', 'Armin ')
x, y, z = m
print(x)
print(y)
print(z)
print(type(x), type(m))
```
As with lists, this unpacking is equivalent to:
x = m[0], y = m[1] and z = m[2]
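A short sketch (not part of the original chapter) making that equivalence explicit, plus the classic swap idiom that falls out of it:

```python
# Unpacking assigns m[0], m[1], m[2] to x, y, z in one statement.
m = ('Amin', 'Oroji', 'Armin')
x, y, z = m
assert (x, y, z) == (m[0], m[1], m[2])

# The same mechanism gives the classic swap without a temporary variable:
a, b = 1, 2
a, b = b, a  # the right-hand side builds the tuple (2, 1), then unpacks it
print(a, b)  # prints: 2 1
```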
```
a, b = 1, 2, 3  # raises ValueError: too many values to unpack
# example
addr = 'amin@prata-tech.com'
uname, domain = addr.split('@')
print('username: ', uname)
print('Domain: ', domain)
```
### Dictionaries and Tuples
```
### Dictionaries and Tuples
employees = {'CEO':'Amin', 'IT Support': 'Milad', 'DM Manager': 'Sahar', 'AI Eng': 'Armin',
'Graphic Designer':'Raana', 'UX Designer':'Narges'
}
items = list(employees.items())
items
type(items[0])
items.sort()
items
```
### Multiple assignment with dictionaries
```
for key, val in items:
print('%s : %s' %(val, key))
items.sort(reverse = True)
items
new_list = list()
for key, val in items:
new_list.append((val,key))
new_list
```
### Using Tuples as Keys in Dictionaries
Tuples are hashable unlike lists.
```
phonebook = {('Amin','Oroji'):'09981637510', ('Armin','Golzar'):'09981637520',('Milad','Gashtil'):'09981637530'}
phonebook
phonebook[('Amin','Oroji')]
```
## Exercise: Write a script to create your phonebook ((name, surname), number), then update your phonebook with new entries.
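If you get stuck, one possible solution sketch (the names and numbers below are placeholders):

```python
# A phonebook keyed by (name, surname) tuples -- one possible solution sketch.
phonebook = {
    ('Amin', 'Oroji'): '09981637510',
    ('Armin', 'Golzar'): '09981637520',
}

def add_entry(book, name, surname, number):
    # Tuples are hashable, so (name, surname) works as a dictionary key.
    book[(name, surname)] = number

add_entry(phonebook, 'Milad', 'Gashtil', '09981637530')

for (name, surname), number in sorted(phonebook.items()):
    print(f'{name} {surname}: {number}')
```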
<a href="https://colab.research.google.com/github/araffin/rl-tutorial-jnrr19/blob/master/1_getting_started.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Stable Baselines Tutorial - Getting Started
Github repo: https://github.com/araffin/rl-tutorial-jnrr19
Stable-Baselines: https://github.com/hill-a/stable-baselines
Documentation: https://stable-baselines.readthedocs.io/en/master/
RL Baselines zoo: https://github.com/araffin/rl-baselines-zoo
Medium article: [https://medium.com/@araffin/stable-baselines-a-fork-of-openai-baselines-df87c4b2fc82](https://medium.com/@araffin/stable-baselines-a-fork-of-openai-baselines-df87c4b2fc82)
[RL Baselines Zoo](https://github.com/araffin/rl-baselines-zoo) is a collection of pre-trained Reinforcement Learning agents using Stable-Baselines.
It also provides basic scripts for training, evaluating agents, tuning hyperparameters and recording videos.
## Introduction
In this notebook, you will learn the basics of using the Stable Baselines library: how to create an RL model, train it, and evaluate it. Because all algorithms share the same interface, we will see how simple it is to switch from one algorithm to another.
## Install Dependencies and Stable Baselines Using Pip
List of full dependencies can be found in the [README](https://github.com/hill-a/stable-baselines).
```
sudo apt-get update && sudo apt-get install cmake libopenmpi-dev zlib1g-dev
```
```
pip install stable-baselines[mpi]
```
```
# Stable Baselines only supports tensorflow 1.x for now
%tensorflow_version 1.x
!apt-get install ffmpeg freeglut3-dev xvfb # For visualization
!pip install stable-baselines[mpi]==2.10.0
```
## Imports
Stable-Baselines works on environments that follow the [gym interface](https://stable-baselines.readthedocs.io/en/master/guide/custom_env.html).
You can find a list of available environments [here](https://gym.openai.com/envs/#classic_control).
It is also recommended to check the [source code](https://github.com/openai/gym) to learn more about the observation and action space of each env, as gym does not have a proper documentation.
Not all algorithms can work with all action spaces; you can find more in this [recap table](https://stable-baselines.readthedocs.io/en/master/guide/algos.html).
```
import gym
import numpy as np
```
The first thing you need to import is the RL model; check the documentation to know what you can use on which problem.
```
from stable_baselines import PPO2
```
The next thing you need to import is the policy class that will be used to create the networks (for the policy/value functions).
This step is optional as you can directly use strings in the constructor:
```PPO2('MlpPolicy', env)``` instead of ```PPO2(MlpPolicy, env)```
Note that some algorithms like `SAC` have their own `MlpPolicy` (different from `stable_baselines.common.policies.MlpPolicy`), that's why using a string for the policy is the recommended option.
```
from stable_baselines.common.policies import MlpPolicy
```
## Create the Gym env and instantiate the agent
For this example, we will use the CartPole environment, a classic control problem.
"A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. "
Cartpole environment: [https://gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)

We chose the MlpPolicy because the observation of the CartPole task is a feature vector, not images.
The type of action to use (discrete/continuous) will be automatically deduced from the environment action space.
Here we are using the [Proximal Policy Optimization](https://stable-baselines.readthedocs.io/en/master/modules/ppo2.html) algorithm (PPO2 is the version optimized for GPU), which is an Actor-Critic method: it uses a value function to improve the policy gradient descent (by reducing the variance).
It combines ideas from [A2C](https://stable-baselines.readthedocs.io/en/master/modules/a2c.html) (having multiple workers and using an entropy bonus for exploration) and [TRPO](https://stable-baselines.readthedocs.io/en/master/modules/trpo.html) (it uses a trust region to improve stability and avoid catastrophic drops in performance).
PPO is an on-policy algorithm, which means that the trajectories used to update the networks must be collected using the latest policy.
It is usually less sample efficient than off-policy algorithms like [DQN](https://stable-baselines.readthedocs.io/en/master/modules/dqn.html), [SAC](https://stable-baselines.readthedocs.io/en/master/modules/sac.html) or [TD3](https://stable-baselines.readthedocs.io/en/master/modules/td3.html), but is much faster regarding wall-clock time.
```
env = gym.make('CartPole-v1')
model = PPO2(MlpPolicy, env, verbose=0)
```
We create a helper function to evaluate the agent:
```
def evaluate(model, num_episodes=100):
"""
Evaluate a RL agent
:param model: (BaseRLModel object) the RL Agent
:param num_episodes: (int) number of episodes to evaluate it
:return: (float) Mean reward for the last num_episodes
"""
# This function will only work for a single Environment
env = model.get_env()
all_episode_rewards = []
for i in range(num_episodes):
episode_rewards = []
done = False
obs = env.reset()
while not done:
# _states are only useful when using LSTM policies
action, _states = model.predict(obs)
# here, action, rewards and dones are arrays
# because we are using vectorized env
obs, reward, done, info = env.step(action)
episode_rewards.append(reward)
all_episode_rewards.append(sum(episode_rewards))
mean_episode_reward = np.mean(all_episode_rewards)
print("Mean reward:", mean_episode_reward, "Num episodes:", num_episodes)
return mean_episode_reward
```
Let's evaluate the un-trained agent; it should behave like a random agent.
```
# Random Agent, before training
mean_reward_before_train = evaluate(model, num_episodes=100)
```
Stable-Baselines already provides you with that helper:
```
from stable_baselines.common.evaluation import evaluate_policy
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=100)
print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")
```
## Train the agent and evaluate it
```
# Train the agent for 10000 steps
model.learn(total_timesteps=10000)
# Evaluate the trained agent
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=100)
print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")
```
Apparently the training went well; the mean reward increased a lot!
### Prepare video recording
```
# Set up fake display; otherwise rendering will fail
import os
os.system("Xvfb :1 -screen 0 1024x768x24 &")
os.environ['DISPLAY'] = ':1'
import base64
from pathlib import Path
from IPython import display as ipythondisplay
def show_videos(video_path='', prefix=''):
"""
Taken from https://github.com/eleurent/highway-env
:param video_path: (str) Path to the folder containing videos
:param prefix: (str) Filter the videos, showing only the ones starting with this prefix
"""
html = []
for mp4 in Path(video_path).glob("{}*.mp4".format(prefix)):
video_b64 = base64.b64encode(mp4.read_bytes())
html.append('''<video alt="{}" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{}" type="video/mp4" />
</video>'''.format(mp4, video_b64.decode('ascii')))
ipythondisplay.display(ipythondisplay.HTML(data="<br>".join(html)))
```
We will record a video using the [VecVideoRecorder](https://stable-baselines.readthedocs.io/en/master/guide/vec_envs.html#vecvideorecorder) wrapper; you will learn about those wrappers in the next notebook.
```
from stable_baselines.common.vec_env import VecVideoRecorder, DummyVecEnv
def record_video(env_id, model, video_length=500, prefix='', video_folder='videos/'):
"""
:param env_id: (str)
:param model: (RL model)
:param video_length: (int)
:param prefix: (str)
:param video_folder: (str)
"""
eval_env = DummyVecEnv([lambda: gym.make(env_id)])
# Start the video at step=0 and record 500 steps
eval_env = VecVideoRecorder(eval_env, video_folder=video_folder,
record_video_trigger=lambda step: step == 0, video_length=video_length,
name_prefix=prefix)
obs = eval_env.reset()
for _ in range(video_length):
action, _ = model.predict(obs)
obs, _, _, _ = eval_env.step(action)
# Close the video recorder
eval_env.close()
```
### Visualize trained agent
```
record_video('CartPole-v1', model, video_length=500, prefix='ppo2-cartpole')
show_videos('videos', prefix='ppo2')
```
## Bonus: Train a RL Model in One Line
The policy class to use will be inferred and the environment will be automatically created. This works because both are [registered](https://stable-baselines.readthedocs.io/en/master/guide/quickstart.html).
```
model = PPO2('MlpPolicy', "CartPole-v1", verbose=1).learn(1000)
```
## Train a DQN agent
In the previous example, we have used PPO, which is one of the many algorithms provided by stable-baselines.
In the next example, we are going to train a [Deep Q-Network agent (DQN)](https://stable-baselines.readthedocs.io/en/master/modules/dqn.html), and try to see possible improvements provided by its extensions (Double-DQN, Dueling-DQN, Prioritized Experience Replay).
The essential point of this section is to show you how simple it is to tweak hyperparameters.
The main advantage of stable-baselines is that it provides a common interface to use the algorithms, so the code will be quite similar.
DQN paper: https://arxiv.org/abs/1312.5602
Dueling DQN: https://arxiv.org/abs/1511.06581
Double-Q Learning: https://arxiv.org/abs/1509.06461
Prioritized Experience Replay: https://arxiv.org/abs/1511.05952
### Vanilla DQN: DQN without extensions
```
# Same as before we instantiate the agent along with the environment
from stable_baselines import DQN
# Deactivate all the DQN extensions to have the original version
# In practice, it is recommended to have them activated
kwargs = {'double_q': False, 'prioritized_replay': False, 'policy_kwargs': dict(dueling=False)}
# Note that the MlpPolicy of DQN is different from the one of PPO
# but stable-baselines handles that automatically if you pass a string
dqn_model = DQN('MlpPolicy', 'CartPole-v1', verbose=1, **kwargs)
# Random Agent, before training
mean_reward_before_train = evaluate(dqn_model, num_episodes=100)
# Train the agent for 10000 steps
dqn_model.learn(total_timesteps=10000, log_interval=10)
# Evaluate the trained agent
mean_reward = evaluate(dqn_model, num_episodes=100)
```
### DQN + Prioritized Replay
```
# Activate only the prioritized replay
kwargs = {'double_q': False, 'prioritized_replay': True, 'policy_kwargs': dict(dueling=False)}
dqn_per_model = DQN('MlpPolicy', 'CartPole-v1', verbose=1, **kwargs)
dqn_per_model.learn(total_timesteps=10000, log_interval=10)
# Evaluate the trained agent
mean_reward = evaluate(dqn_per_model, num_episodes=100)
```
### DQN + Prioritized Experience Replay + Double Q-Learning + Dueling
```
# Activate all extensions
kwargs = {'double_q': True, 'prioritized_replay': True, 'policy_kwargs': dict(dueling=True)}
dqn_full_model = DQN('MlpPolicy', 'CartPole-v1', verbose=1, **kwargs)
dqn_full_model.learn(total_timesteps=10000, log_interval=10)
mean_reward = evaluate(dqn_full_model, num_episodes=100)
```
In this particular example, the extensions do not seem to give any improvement compared to the simple DQN version.
There are several reasons for that:
1. `CartPole-v1` is a pretty simple environment
2. We trained DQN for very few timesteps, not enough to see any difference
3. The default hyperparameters for DQN are tuned for atari games, where the number of training timesteps is much larger (10^6) and input observations are images
4. We have only compared one random seed per experiment
## Conclusion
In this notebook we have seen:
- how to define and train a RL model using stable baselines, it takes only one line of code ;)
- how to use different RL algorithms and change some hyperparameters
```
```
<img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# Working with Watson OpenScale - Custom Machine Learning Provider
This notebook should be run using a **Python 3.7.x** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.7.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following services:
* Watson OpenScale
* A Custom ML provider, hosted in a VM that is accessible from the CPD pods, specifically the OpenScale pods (ML Gateway, fairness, quality, drift, and explain)
* DB2 - as part of this notebook, we make use of an existing data mart.
The notebook will configure an OpenScale data mart subscription for a Custom ML Provider deployment. We configure and execute the fairness, explain, quality, and drift monitors.
## Custom Machine Learning Provider Setup
The following code can be used to start a gunicorn/flask application that can be hosted in a VM, such that it is accessible from the CPD system.
This code does the following:
* It wraps a Watson Machine Learning model that is deployed to a space.
* So the hosting application URL should contain the space ID and the deployment ID, which can then be used to talk to the target WML model/deployment.
* Having said that, this is only for tutorial purposes, and you can define your Custom ML provider endpoint in any fashion you want, such that it wraps your own custom ML engine.
* The scoring request and response payload should conform to the schema described at: https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-custom.html
* To start the application using the code below, make sure you install the following Python packages in your VM:
python -m pip install gunicorn
python -m pip install flask
python -m pip install numpy
python -m pip install pandas
python -m pip install requests
python -m pip install joblib==0.11
python -m pip install scipy==0.19.1
python -m pip install --user numpy scipy matplotlib ipython jupyter pandas sympy nose
python -m pip install ibm_watson_machine_learning
-----------------
```
from flask import Flask, request, abort, jsonify
import json
import base64
import requests, io
import pandas as pd
from ibm_watson_machine_learning import APIClient
app = Flask(__name__)
WML_CREDENTIALS = {
"url": "https://namespace1-cpd-namespace1.apps.xxxxx.os.fyre.ibm.com",
"username": "admin",
"password" : "xxxx",
"instance_id": "wml_local",
"version" : "3.5"
}
@app.route('/spaces/<space_id>/deployments/<deployment_id>/predictions', methods=['POST'])
def wml_scoring(space_id, deployment_id):
if not request.json:
abort(400)
wml_credentials = WML_CREDENTIALS
payload_scoring = {
"input_data": [
request.json
]
}
wml_client = APIClient(wml_credentials)
wml_client.set.default_space(space_id)
records_list=[]
scoring_response = wml_client.deployments.score(deployment_id, payload_scoring)
return jsonify(scoring_response["predictions"][0])
if __name__ == '__main__':
app.run(host='xxxx.fyre.ibm.com', port=9443, debug=True)
```
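As a rough illustration of the request/response schema mentioned above, a scoring exchange might look like the sketch below. The field names and values here are invented for demonstration, not the actual German-credit columns.

```python
# Illustrative scoring request/response payloads for a custom ML provider endpoint.
# Field names and values are made up for demonstration purposes.
request_payload = {
    "fields": ["CheckingStatus", "LoanDuration"],
    "values": [["no_checking", 13]],
}

response_payload = {
    "fields": ["prediction", "probability"],
    "values": [["No Risk", [0.82, 0.18]]],
}

# Each row in "values" aligns positionally with "fields".
assert len(request_payload["fields"]) == len(request_payload["values"][0])
```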
-----------------
# Setup <a name="setup"></a>
## Package installation
```
import warnings
warnings.filterwarnings('ignore')
!pip install --upgrade pyspark==2.4 --no-cache | tail -n 1
!pip install --upgrade pandas==0.25.3 --no-cache | tail -n 1
!pip install --upgrade requests==2.23 --no-cache | tail -n 1
!pip install numpy==1.16.4 --no-cache | tail -n 1
!pip install scikit-learn==0.20 --no-cache | tail -n 1
!pip install SciPy --no-cache | tail -n 1
!pip install lime --no-cache | tail -n 1
!pip install --upgrade ibm-watson-machine-learning --user | tail -n 1
!pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1
!pip install --upgrade ibm-wos-utils --no-cache | tail -n 1
```
### Action: restart the kernel!
## Configure credentials
- WOS_CREDENTIALS (CP4D)
- WML_CREDENTIALS (CP4D)
- DATABASE_CREDENTIALS (DB2 on CP4D or Cloud Object Storage (COS))
- SCHEMA_NAME
```
#masked
WOS_CREDENTIALS = {
"url": "https://namespace1-cpd-namespace1.apps.xxxxx.os.fyre.ibm.com",
"username": "admin",
"password": "xxxxx",
"version": "3.5"
}
CUSTOM_ML_PROVIDER_SCORING_URL = 'https://xxxxx.fyre.ibm.com:9443/spaces/$SPACE_ID/deployments/$DEPLOYMENT_ID/predictions'
scoring_url = CUSTOM_ML_PROVIDER_SCORING_URL
label_column="Risk"
model_type = "binary"
import os
import base64
import json
import requests
from requests.auth import HTTPBasicAuth
```
## Save training data to Cloud Object Storage
### Cloud object storage details
In the next cells, you will need to paste your Cloud Object Storage (COS) credentials. If you haven't worked with COS yet, please visit the getting started with COS tutorial. You can find the COS_API_KEY_ID and COS_RESOURCE_CRN variables under Service Credentials in the menu of your COS instance. The COS service credentials must be created with the Role parameter set to Writer. Later, the training data file will be loaded into the bucket of your instance and used as the training data reference in the subscription. The COS_ENDPOINT variable can be found in the Endpoint field of the menu.
```
IAM_URL="https://iam.ng.bluemix.net/oidc/token"
# masked
COS_API_KEY_ID = "*****"
COS_RESOURCE_CRN = "*****"
COS_ENDPOINT = "https://s3.us.cloud-object-storage.appdomain.cloud" # Current list available at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
BUCKET_NAME = "*****"
FILE_NAME = "german_credit_data_biased_training.csv"
```
# Load and explore data
```
!rm german_credit_data_biased_training.csv
!wget https://raw.githubusercontent.com/pmservice/ai-openscale-tutorials/master/assets/historical_data/german_credit_risk/wml/german_credit_data_biased_training.csv
```
## Explore data
```
training_data_references = [
{
"id": "Credit Risk",
"type": "s3",
"connection": {
"access_key_id": COS_API_KEY_ID,
"endpoint_url": COS_ENDPOINT,
"resource_instance_id":COS_RESOURCE_CRN
},
"location": {
"bucket": BUCKET_NAME,
"path": FILE_NAME,
}
}
]
```
## Construct the scoring payload
```
import pandas as pd
df = pd.read_csv("german_credit_data_biased_training.csv")
df.head()
cols_to_remove = [label_column]
def get_scoring_payload(no_of_records_to_score = 1):
for col in cols_to_remove:
if col in df.columns:
del df[col]
fields = df.columns.tolist()
values = df[fields].values.tolist()
payload_scoring ={"fields": fields, "values": values[:no_of_records_to_score]}
return payload_scoring
#debug
payload_scoring = get_scoring_payload(1)
payload_scoring
```
## Method to perform scoring
```
def custom_ml_scoring():
header = {"Content-Type": "application/json", "x":"y"}
print(scoring_url)
scoring_response = requests.post(scoring_url, json=payload_scoring, headers=header, verify=False)
jsonify_scoring_response = scoring_response.json()
return jsonify_scoring_response
```
## Method to perform payload logging
```
import uuid
scoring_id = None
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord
def payload_logging(payload_scoring, scoring_response):
scoring_id = str(uuid.uuid4())
records_list=[]
#manual PL logging for custom ml provider
pl_record = PayloadRecord(scoring_id=scoring_id, request=payload_scoring, response=scoring_response, response_time=int(460))
records_list.append(pl_record)
wos_client.data_sets.store_records(data_set_id = payload_data_set_id, request_body=records_list)
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
return scoring_id
```
## Score the model and print the scoring response
### Sample Scoring
```
custom_ml_scoring()
```
# Configure OpenScale
The notebook will now import the necessary libraries and set up a Python OpenScale client.
```
from ibm_watson_openscale import APIClient
from ibm_watson_openscale.utils import *
from ibm_watson_openscale.supporting_classes import *
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import *
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
import json
import requests
import base64
from requests.auth import HTTPBasicAuth
import time
```
## Get an instance of the OpenScale SDK client
```
authenticator = CloudPakForDataAuthenticator(
url=WOS_CREDENTIALS['url'],
username=WOS_CREDENTIALS['username'],
password=WOS_CREDENTIALS['password'],
disable_ssl_verification=True
)
wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],authenticator=authenticator)
wos_client.version
```
## Set up datamart
Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were not supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there unless there is an existing datamart and the KEEP_MY_INTERNAL_POSTGRES variable is set to True. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten.
Prior instances of the model will be removed from OpenScale monitoring.
```
wos_client.data_marts.show()
data_marts = wos_client.data_marts.list().result.data_marts
if len(data_marts) == 0:
raise Exception("Missing data mart.")
data_mart_id=data_marts[0].metadata.id
print('Using existing datamart {}'.format(data_mart_id))
data_mart_details = wos_client.data_marts.list().result.data_marts[0]
data_mart_details.to_dict()
wos_client.service_providers.show()
```
## Remove the existing service provider connected with the WML instance
Multiple service providers for the same engine instance are available in Watson OpenScale. To avoid duplicate service providers for the WML instance used in this tutorial, the following code deletes any existing service provider(s) and then adds a new one.
```
SERVICE_PROVIDER_NAME = "Custom ML Provider Demo - All Monitors"
SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WOS notebook to showcase monitoring Fairness, Quality, Drift and Explainability against a Custom ML provider."
service_providers = wos_client.service_providers.list().result.service_providers
for service_provider in service_providers:
service_instance_name = service_provider.entity.name
if service_instance_name == SERVICE_PROVIDER_NAME:
service_provider_id = service_provider.metadata.id
wos_client.service_providers.delete(service_provider_id)
print("Deleted existing service_provider for WML instance: {}".format(service_provider_id))
```
## Add service provider
Watson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.
Note: You can bind more than one engine instance if needed by calling the `wos_client.service_providers.add` method. You can then refer to a particular service provider using its `service_provider_id`.
```
request_headers = {"Content-Type": "application/json", "Custom_header_X": "Custom_header_X_value_Y"}
MLCredentials = {}
added_service_provider_result = wos_client.service_providers.add(
name=SERVICE_PROVIDER_NAME,
description=SERVICE_PROVIDER_DESCRIPTION,
service_type=ServiceTypes.CUSTOM_MACHINE_LEARNING,
request_headers=request_headers,
operational_space_id = "production",
credentials=MLCredentials,
background_mode=False
).result
service_provider_id = added_service_provider_result.metadata.id
print(wos_client.service_providers.get(service_provider_id).result)
print('Data Mart ID : ' + data_mart_id)
print('Service Provider ID : ' + service_provider_id)
```
## Subscriptions
The code below removes any existing credit risk subscriptions to the model, so that the monitors can be refreshed with the new model and new data.
```
wos_client.subscriptions.show()
```
## Remove the existing subscription
```
SUBSCRIPTION_NAME = "Custom ML Subscription - All Monitors"
subscriptions = wos_client.subscriptions.list().result.subscriptions
for subscription in subscriptions:
if subscription.entity.asset.name == "[asset] " + SUBSCRIPTION_NAME:
sub_model_id = subscription.metadata.id
wos_client.subscriptions.delete(subscription.metadata.id)
print('Deleted existing subscription for model', sub_model_id)
```
This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself.
```
feature_columns=["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"]
cat_features=["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"]
import uuid
asset_id = str(uuid.uuid4())
asset_name = '[asset] ' + SUBSCRIPTION_NAME
url = ''
asset_deployment_id = str(uuid.uuid4())
asset_deployment_name = asset_name
asset_deployment_scoring_url = scoring_url
scoring_endpoint_url = scoring_url
scoring_request_headers = {
"Content-Type": "application/json",
"Custom_header_X": "Custom_header_X_value_Y"
}
subscription_details = wos_client.subscriptions.add(
data_mart_id=data_mart_id,
service_provider_id=service_provider_id,
asset=Asset(
asset_id=asset_id,
name=asset_name,
url=url,
asset_type=AssetTypes.MODEL,
input_data_type=InputDataType.STRUCTURED,
problem_type=ProblemType.BINARY_CLASSIFICATION
),
deployment=AssetDeploymentRequest(
deployment_id=asset_deployment_id,
name=asset_deployment_name,
deployment_type= DeploymentTypes.ONLINE,
scoring_endpoint=ScoringEndpointRequest(
url=scoring_endpoint_url,
request_headers=scoring_request_headers
)
),
asset_properties=AssetPropertiesRequest(
label_column=label_column,
probability_fields=["probability"],
prediction_field="predictedLabel",
feature_fields = feature_columns,
categorical_fields = cat_features,
training_data_reference=TrainingDataReference(type="cos",
location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME,
file_name = FILE_NAME),
connection=COSTrainingDataReferenceConnection.from_dict({
"resource_instance_id": COS_RESOURCE_CRN,
"url": COS_ENDPOINT,
"api_key": COS_API_KEY_ID,
"iam_url": IAM_URL}))
)
).result
subscription_id = subscription_details.metadata.id
print('Subscription ID: ' + subscription_id)
import time
time.sleep(5)
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id
if payload_data_set_id is None:
print("Payload data set not found. Please check subscription status.")
else:
print("Payload data set id:", payload_data_set_id)
```
### Before the payload logging
```
wos_client.subscriptions.get(subscription_id).result.to_dict()
```
# Score the model so we can configure monitors
Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model.
```
no_of_records_to_score = 100
```
### Construct the scoring payload
```
payload_scoring = get_scoring_payload(no_of_records_to_score)
```
### Perform the scoring against the Custom ML Provider
```
scoring_response = custom_ml_scoring()
```
### Perform payload logging by passing the scoring payload and scoring response
```
scoring_id = payload_logging(payload_scoring, scoring_response)
```
### The scoring ID, used later to explain randomly picked transactions
```
print('scoring_id: ' + str(scoring_id))
```
# Fairness configuration <a name="Fairness"></a>
The code below configures fairness monitoring for our model. It turns on monitoring for two features, Sex and Age. In each case, we must specify:
- the model feature to monitor;
- one or more majority groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes;
- one or more minority groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes;
- the threshold below which OpenScale should display an alert (in this case, 80%).

Additionally, we must specify which outcomes from the model are favorable and which are unfavorable, and provide the number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 100 records have been added. Finally, to calculate fairness, OpenScale must perform some calculations on the training data, so we provide the dataframe containing the data.
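For intuition, OpenScale's fairness score compares the rate of favorable outcomes between the monitored (minority) group and the reference (majority) group. The sketch below is a rough, simplified illustration of the ratio behind a threshold like 80%, not the exact OpenScale computation; all counts are made up:

```python
def fairness_ratio(minority_favorable, minority_total,
                   majority_favorable, majority_total):
    """Percentage ratio of favorable-outcome rates (disparate impact).

    A value below the configured threshold (e.g. 80) indicates the
    monitored group receives favorable outcomes disproportionately
    less often than the reference group.
    """
    minority_rate = minority_favorable / minority_total
    majority_rate = majority_favorable / majority_total
    return round(100.0 * minority_rate / majority_rate, 2)

# Hypothetical counts: 55 of 100 female applicants vs 75 of 100 male
# applicants were predicted "No Risk" (the favourable class above).
print(fairness_ratio(55, 100, 75, 100))  # → 73.33, below an 80% threshold
```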
### Create Fairness Monitor Instance
```
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"features": [
{"feature": "Sex",
"majority": ['male'],
"minority": ['female']
},
{"feature": "Age",
"majority": [[26, 75]],
"minority": [[18, 25]]
}
],
"favourable_class": ["No Risk"],
"unfavourable_class": ["Risk"],
"min_records": 100
}
thresholds = [{
"metric_id": "fairness_value",
"specific_values": [{
"applies_to": [{
"key": "feature",
"type": "tag",
"value": "Age"
}],
"value": 95
},
{
"applies_to": [{
"key": "feature",
"type": "tag",
"value": "Sex"
}],
"value": 95
}
],
"type": "lower_limit",
"value": 80.0
}]
fairness_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,
target=target,
parameters=parameters,
thresholds=thresholds).result
fairness_monitor_instance_id = fairness_monitor_details.metadata.id
```
### Get Fairness Monitor Instance
```
wos_client.monitor_instances.show()
```
### Get run details
For a production subscription, the initial monitoring run is triggered internally. The code below checks its status.
```
runs = wos_client.monitor_instances.list_runs(fairness_monitor_instance_id, limit=1).result.to_dict()
fairness_monitoring_run_id = runs["runs"][0]["metadata"]["id"]
run_status = None
while(run_status not in ["finished", "error"]):
run_details = wos_client.monitor_instances.get_run_details(fairness_monitor_instance_id, fairness_monitoring_run_id).result.to_dict()
run_status = run_details["entity"]["status"]["state"]
print('run_status: ', run_status)
if run_status in ["finished", "error"]:
break
time.sleep(10)
```
### Fairness run output
```
wos_client.monitor_instances.get_run_details(fairness_monitor_instance_id, fairness_monitoring_run_id).result.to_dict()
wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id)
```
# Configure Explainability <a name="explain"></a>
We provide OpenScale with the training data to enable and configure the explainability features.
```
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"enabled": True
}
explain_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
target=target,
parameters=parameters
).result
explain_monitor_details.metadata.id
scoring_ids = []
sample_size = 2
import random
for i in range(0, sample_size):
n = random.randint(1,100)
scoring_ids.append(scoring_id + '-' + str(n))
print("Running explanations on scoring IDs: {}".format(scoring_ids))
explanation_types = ["lime", "contrastive"]
result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result
print(result)
```
### Explanation tasks
```
explanation_task_ids=result.metadata.explanation_task_ids
explanation_task_ids
```
### Wait for the explanation tasks to complete - all of them
```
import time
def finish_explanation_tasks():
finished_explanations = []
finished_explanation_task_ids = []
# Check each explanation task for "finished" status.
# If a task is still in progress, sleep for some time and check again.
# Repeat a few times so that all tasks reach a terminal state.
for i in range(0, 5):
# for each explanation
print('iteration ' + str(i))
#check status for all explanation tasks
for explanation_task_id in explanation_task_ids:
if explanation_task_id not in finished_explanation_task_ids:
result = wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result
print(explanation_task_id + ' : ' + result.entity.status.state)
if (result.entity.status.state == 'finished' or result.entity.status.state == 'error') and explanation_task_id not in finished_explanation_task_ids:
finished_explanation_task_ids.append(explanation_task_id)
finished_explanations.append(result)
# if at least one explanation task is not yet completed, sleep for some time,
# and re-check those tasks whose explanations are not yet completed.
if len(finished_explanation_task_ids) != sample_size:
print('sleeping for some time..')
time.sleep(10)
else:
break
return finished_explanations
```
### You may have to run the cell below multiple times until all explanation tasks have either finished or errored.
```
finished_explanations = finish_explanation_tasks()
len(finished_explanations)
def construct_explanation_features_map(feature_name, feature_weight):
if feature_name in explanation_features_map:
explanation_features_map[feature_name].append(feature_weight)
else:
explanation_features_map[feature_name] = [feature_weight]
explanation_features_map = {}
for result in finished_explanations:
print('\n>>>>>>>>>>>>>>>>>>>>>>\n')
print('explanation task: ' + str(result.metadata.explanation_task_id) + ', perturbed:' + str(result.entity.perturbed))
if result.entity.explanations is not None:
explanations = result.entity.explanations
for explanation in explanations:
if 'predictions' in explanation:
predictions = explanation['predictions']
for prediction in predictions:
predicted_value = prediction['value']
probability = prediction['probability']
print('prediction : ' + str(predicted_value) + ', probability : ' + str(probability))
if 'explanation_features' in prediction:
explanation_features = prediction['explanation_features']
for explanation_feature in explanation_features:
feature_name = explanation_feature['feature_name']
feature_weight = explanation_feature['weight']
if (feature_weight >= 0 ):
feature_weight_percent = round(feature_weight * 100, 2)
print(str(feature_name) + ' : ' + str(feature_weight_percent))
task_feature_weight_map = {}
task_feature_weight_map[result.metadata.explanation_task_id] = feature_weight_percent
construct_explanation_features_map(feature_name, feature_weight_percent)
print('\n>>>>>>>>>>>>>>>>>>>>>>\n')
explanation_features_map
import matplotlib.pyplot as plt
for key in explanation_features_map.keys():
#plot_graph(key, explanation_features_map[key])
values = explanation_features_map[key]
plt.title(key)
plt.ylabel('Weight')
plt.bar(range(len(values)), values)
plt.show()
```
# Quality monitoring and feedback logging <a name="quality"></a>
## Enable quality monitoring
The code below enables the quality (accuracy) monitor and sets an alert threshold of 80%. OpenScale will show an alert on the dashboard if the model accuracy measurement (area under the ROC curve, in the case of a binary classifier) falls below this threshold.
The second parameter supplied, min_feedback_data_size, specifies the minimum number of feedback records OpenScale needs before it calculates a new measurement. The quality monitor runs hourly, but the accuracy reading in the dashboard will not change until at least 90 feedback records have been added, via the user interface, the Python client, or the supplied feedback endpoint.
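For intuition about what the `area_under_roc` threshold is compared against, the metric can be sketched in pure Python (the exact OpenScale computation may differ; the labels and scores below are made up for illustration):

```python
def area_under_roc(y_true, y_score):
    """Probability that a randomly chosen positive is scored above a
    randomly chosen negative (equivalent to ROC AUC; ties count 0.5)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical feedback: 1 = actual "Risk", scores = predicted P(Risk)
y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.3, 0.9]
print(area_under_roc(y_true, y_score))  # → 0.9375
```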
```
import time
#time.sleep(10)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_feedback_data_size": 90
}
thresholds = [
{
"metric_id": "area_under_roc",
"type": "lower_limit",
"value": .80
}
]
quality_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,
target=target,
parameters=parameters,
thresholds=thresholds
).result
quality_monitor_instance_id = quality_monitor_details.metadata.id
quality_monitor_instance_id
```
## Feedback logging
The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface.
```
!rm additional_feedback_data_v2.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/credit_risk/additional_feedback_data_v2.json
```
## Get feedback logging dataset ID
```
feedback_dataset_id = None
feedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result
feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id
if feedback_dataset_id is None:
print("Feedback data set not found. Please check quality monitor status.")
with open('additional_feedback_data_v2.json') as feedback_file:
additional_feedback_data = json.load(feedback_file)
wos_client.data_sets.store_records(feedback_dataset_id, request_body=additional_feedback_data, background_mode=False)
wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id)
run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result
wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id)
```
# Drift configuration <a name="drift"></a>
# Drift detection model generation
Please update the score function below, which will be used to generate the drift detection model. Generating the model may take some time, depending on the size of the training dataset. The output of the score function should be two arrays:
1. an array of model predictions
2. an array of probabilities

- Make sure that the data type of the selected "class label" column and the prediction column are the same. For example, if the class label is numeric, the prediction array should also be numeric.
- Each entry of the probability array should contain the probabilities of all the unique class labels. For example, if `model_type=multiclass` and the unique class labels are A, B, C, D, each entry in the probability array should be an array of size 4, e.g. `[[50, 30, 10, 10], [40, 20, 30, 10], ...]`
**Note:**
- *You are expected to add a "score" method which outputs a prediction column array and a probability column array.*
- *The data type of the label column and the prediction column should be the same, and both should contain the same unique class labels.*
- **Please update the score function below with the help of the templates documented [here](https://github.com/IBM-Watson/aios-data-distribution/blob/master/Score%20function%20templates%20for%20drift%20detection.md)**
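Before handing the score function to the drift trainer, its output contract can be sanity-checked. The helper below is a hypothetical sketch (not part of the OpenScale SDK), assuming NumPy is available:

```python
import numpy as np

def validate_score_output(probability_array, prediction_vector, class_labels):
    """Sanity-check the (probabilities, predictions) pair returned by score()."""
    probability_array = np.asarray(probability_array)
    prediction_vector = np.asarray(prediction_vector)
    # One probability row per prediction
    assert probability_array.shape[0] == prediction_vector.shape[0], \
        "probability array and prediction vector lengths differ"
    # Each row must hold a probability for every unique class label
    assert probability_array.shape[1] == len(class_labels), \
        "each probability entry must cover all unique class labels"
    # Predictions must use the same labels as the label column
    assert set(np.unique(prediction_vector)) <= set(class_labels), \
        "prediction values must come from the label column's classes"
    return True

# Example with the binary labels used in this notebook
probs = np.array([[0.8, 0.2], [0.3, 0.7]])
preds = np.array(["No Risk", "Risk"])
print(validate_score_output(probs, preds, ["No Risk", "Risk"]))  # → True
```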
```
import pandas as pd
df = pd.read_csv("german_credit_data_biased_training.csv")
df.head()
def score(training_data_frame):
#The data type of the label column and the prediction column should be the same.
#Make sure that both the label column and the prediction column use the same unique class labels.
prediction_column_name = "predictedLabel"
probability_column_name = "probability"
feature_columns = list(training_data_frame.columns)
training_data_rows = training_data_frame[feature_columns].values.tolist()
payload_scoring_records = {
"fields": feature_columns,
"values": [x for x in training_data_rows]
}
header = {"Content-Type": "application/json", "x":"y"}
scoring_response_raw = requests.post(scoring_url, json=payload_scoring_records, headers=header, verify=False)
scoring_response = scoring_response_raw.json()
probability_array = None
prediction_vector = None
prob_col_index = list(scoring_response.get('fields')).index(probability_column_name)
predict_col_index = list(scoring_response.get('fields')).index(prediction_column_name)
if prob_col_index < 0 or predict_col_index < 0:
raise Exception("Missing prediction/probability column in the scoring response")
import numpy as np
probability_array = np.array([value[prob_col_index] for value in scoring_response.get('values')])
prediction_vector = np.array([value[predict_col_index] for value in scoring_response.get('values')])
return probability_array, prediction_vector
```
### Define the drift detection input
```
drift_detection_input = {
"feature_columns": feature_columns,
"categorical_columns": cat_features,
"label_column": label_column,
"problem_type": model_type
}
print(drift_detection_input)
```
### Generate drift detection model
```
!rm drift_detection_model.tar.gz
from ibm_wos_utils.drift.drift_trainer import DriftTrainer
drift_trainer = DriftTrainer(df,drift_detection_input)
if model_type != "regression":
#Note: batch_size can be customized by user as per the training data size
drift_trainer.generate_drift_detection_model(score,batch_size=df.shape[0])
#Note: Two column constraints are not computed beyond two_column_learner_limit(default set to 200)
#User can adjust the value depending on the requirement
drift_trainer.learn_constraints(two_column_learner_limit=200)
drift_trainer.create_archive()
!ls -al
filename = 'drift_detection_model.tar.gz'
```
### Upload the drift detection model to OpenScale subscription
```
wos_client.monitor_instances.upload_drift_model(
model_path=filename,
archive_name=filename,
data_mart_id=data_mart_id,
subscription_id=subscription_id,
enable_data_drift=True,
enable_model_drift=True
)
```
### Delete the existing drift monitor instance for the subscription
```
monitor_instances = wos_client.monitor_instances.list().result.monitor_instances
for monitor_instance in monitor_instances:
monitor_def_id=monitor_instance.entity.monitor_definition_id
if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id:
wos_client.monitor_instances.delete(monitor_instance.metadata.id)
print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_samples": 100,
"drift_threshold": 0.1,
"train_drift_model": False,
"enable_model_drift": True,
"enable_data_drift": True
}
drift_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,
target=target,
parameters=parameters
).result
drift_monitor_instance_id = drift_monitor_details.metadata.id
drift_monitor_instance_id
```
### Drift run
```
drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False)
time.sleep(5)
wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)
```
## Summary
As part of this notebook, we have:
* Created a subscription to a custom ML endpoint.
* Scored the custom ML provider with 100 records.
* Stored the payload logging records into the data mart via the DataSets SDK method, passing both the scoring payload and the scoring response, and setting the scoring_id attribute while doing so.
* Configured the fairness monitor, executed it, and viewed the fairness metrics output.
* Configured the explainability monitor.
* Randomly selected transactions for which to get prediction explanations.
* Submitted explainability tasks for the selected scoring IDs and waited for their completion.
* Composed a map of each feature and its weight across transactions, and plotted it. For example:
```
{'ForeignWorker': [33.29, 5.23],
'OthersOnLoan': [15.96, 19.97, 12.76],
'OwnsProperty': [15.43, 3.92, 4.44, 10.36],
'Dependents': [9.06],
'InstallmentPercent': [9.05],
'CurrentResidenceDuration': [8.74, 13.15, 12.1, 10.83],
'Sex': [2.96, 12.76],
'InstallmentPlans': [2.4, 5.67, 6.57],
'Age': [2.28, 8.6, 11.26],
'Job': [0.84],
'LoanDuration': [15.02, 10.87, 18.91, 12.72],
'EmploymentDuration': [14.02, 14.05, 12.1],
'LoanAmount': [9.28, 12.42, 7.85],
'Housing': [4.35],
'CreditHistory': [6.5]}
```
This map can be read as follows:
* LoanDuration, CurrentResidenceDuration, and OwnsProperty are the features that contribute most across transactions to their respective predictions; their weights for each prediction can also be seen.
* The low-contributing features are CreditHistory, Housing, Job, InstallmentPercent, and Dependents, with their respective weights as printed.
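The ranking described above can be derived directly from the weight map. A small sketch, using a subset of the example map:

```python
# Subset of the example explanation_features_map shown above
explanation_features_map = {
    'ForeignWorker': [33.29, 5.23],
    'OwnsProperty': [15.43, 3.92, 4.44, 10.36],
    'LoanDuration': [15.02, 10.87, 18.91, 12.72],
    'Job': [0.84],
    'CreditHistory': [6.5],
}

# Rank features by their mean weight across the explained transactions
ranked = sorted(explanation_features_map.items(),
                key=lambda kv: sum(kv[1]) / len(kv[1]),
                reverse=True)
for name, weights in ranked:
    print(f"{name}: mean weight {sum(weights) / len(weights):.2f}")
```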
* Configured the quality monitor, uploaded feedback data, and ran the quality monitor.
* Created the drift detection model and uploaded it to the OpenScale subscription for drift monitoring.
* Executed the drift monitor.
Thank you for working through this tutorial notebook.
Author: Ravi Chamarthy (ravi.chamarthy@in.ibm.com)
# ETS models
The ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).
This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.
`statsmodels` implements all combinations of:
- additive and multiplicative error model
- additive and multiplicative trend, possibly dampened
- additive and multiplicative seasonality
However, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.
[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
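Each combination is selected through `ETSModel` constructor arguments (`error`, `trend`, `damped_trend`, `seasonal`, matching the calls used later in this notebook). A small illustrative helper translating ETS shorthand such as "(A, Ad, N)" into those arguments:

```python
def ets_kwargs(error, trend, seasonal):
    """Map ETS component letters ('A', 'M', 'N', plus a trailing 'd'
    on the trend for damping) to ETSModel keyword arguments."""
    comp = {'A': 'add', 'M': 'mul', 'N': None}
    return {
        'error': comp[error],                 # error must be 'A' or 'M'
        'trend': comp[trend.rstrip('d')],     # strip damping flag
        'damped_trend': trend.endswith('d'),
        'seasonal': comp[seasonal],
    }

print(ets_kwargs('A', 'N', 'N'))   # simple exponential smoothing
print(ets_kwargs('A', 'Ad', 'N'))  # additive damped trend, no seasonality
```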
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
```
## Simple exponential smoothing
The simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. Its state space formulation is:
\begin{align}
y_{t} &= y_{t-1} + e_t\\
l_{t} &= l_{t-1} + \alpha e_t\\
\end{align}
This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):
\begin{align}
\hat{y}_{t|t-1} &= l_{t-1}\\
l_{t} &= \alpha y_{t} + (1 - \alpha) l_{t-1}
\end{align}
Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (the smoothing equation) calculates the next level as a weighted average of the current observation and the previous level.
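The smoothing recursion can be implemented directly. A minimal pure-Python sketch of the (A, N, N) one-step-ahead forecasts (not the statsmodels implementation; data and parameters are made up):

```python
def simple_exp_smoothing(y, alpha, l0):
    """One-step-ahead forecasts for the (A, N, N) model.

    Returns (forecasts, levels): forecasts[t] is y-hat_{t|t-1} = l_{t-1},
    and levels[t] = alpha * y[t] + (1 - alpha) * l_{t-1}.
    """
    level = l0
    forecasts, levels = [], []
    for obs in y:
        forecasts.append(level)                     # forecast = previous level
        level = alpha * obs + (1 - alpha) * level   # smoothing equation
        levels.append(level)
    return forecasts, levels

forecasts, levels = simple_exp_smoothing([10.0, 12.0, 13.0, 12.0],
                                         alpha=0.5, l0=10.0)
print(forecasts)  # → [10.0, 10.0, 11.0, 12.0]
```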
```
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
```
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (the companion package to a prior edition of [1]).
Below you can see how to fit a model to this data using statsmodels' ETS implementation. Additionally, the fit obtained with `forecast` in R is shown for comparison.
```
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
```
By default, the initial states are treated as fitting parameters and are estimated by maximizing the log-likelihood. Alternatively, it is possible to use a heuristic for the initial values; in this case, that leads to better agreement with the R implementation.
```
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
```
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is a bit lower than the one using a heuristic for the initial states.
Additionally, we see that $\beta$ (`smoothing_trend`) is at the boundary of the default parameter bounds, and therefore it's not possible to estimate confidence intervals for $\beta$.
```
fit.summary()
fit_heuristic.summary()
```
## Holt-Winters' seasonal method
The exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:
\begin{align}
y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\
l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\
b_{t} &= b_{t-1} + \beta e_t\\
s_{t} &= s_{t-m} + \gamma e_t
\end{align}
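The four equations above can be run directly as a per-step recursion. A minimal sketch (plain Python/NumPy, not the statsmodels implementation; the initial states `l0`, `b0`, `s0` are assumed given):

```python
import numpy as np

# Additive Holt-Winters recursions in error-correction form (sketch):
# the forecast uses the previous level, trend, and the seasonal state from
# m steps ago; all states are then nudged by the forecast error e.
def hw_additive_fitted(y, alpha, beta, gamma, l0, b0, s0, m):
    level, trend, season = l0, b0, list(s0)  # s0 holds the m initial seasonal states
    fitted = []
    for t, obs in enumerate(y):
        yhat = level + trend + season[t % m]
        fitted.append(yhat)
        e = obs - yhat
        level, trend = level + trend + alpha * e, trend + beta * e
        season[t % m] += gamma * e           # s_t = s_{t-m} + gamma * e_t
    return np.array(fitted)
```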
```
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
fit.summary()
```
## Predictions
The ETS model can also be used for predicting. There are several different methods available:
- `forecast`: makes out of sample predictions
- `predict`: in sample and out of sample predictions
- `simulate`: runs simulations of the state space model
- `get_prediction`: in sample and out of sample predictions, as well as prediction intervals
We can use them on our previously fitted model to predict from 2014 to 2020.
```
pred = fit.get_prediction(start='2014', end='2020')
df = pred.summary_frame(alpha=0.05)
df
```
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.
We can also manually run simulations, e.g. to plot them. Since the data ranges until the end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
```
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
```
In this case, we chose "end" as the simulation anchor, which means that the first simulated value will be the first out-of-sample value. It is also possible to choose another anchor inside the sample.
## **Bootstrap Your Own Latent A New Approach to Self-Supervised Learning:** https://arxiv.org/pdf/2006.07733.pdf
```
# !pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# !pip install -qqU fastai fastcore
# !pip install nbdev
import fastai, fastcore, torch
fastai.__version__ , fastcore.__version__, torch.__version__
from fastai.vision.all import *
```
### Sizes
Resize -> RandomCrop: 320 -> 256 | 224 -> 192 | 160 -> 128
```
resize = 320
size = 256
```
## 1. Implementation Details (Section 3.2 from the paper)
### 1.1 Image Augmentations
Same as SimCLR with optional grayscale
```
import kornia
def get_aug_pipe(size, stats=imagenet_stats, s=.6):
"SimCLR augmentations"
rrc = kornia.augmentation.RandomResizedCrop((size, size), scale=(0.2, 1.0), ratio=(3/4, 4/3))
rhf = kornia.augmentation.RandomHorizontalFlip()
rcj = kornia.augmentation.ColorJitter(0.8*s, 0.8*s, 0.8*s, 0.2*s)
rgs = kornia.augmentation.RandomGrayscale(p=0.2)
tfms = [rrc, rhf, rcj, rgs, Normalize.from_stats(*stats)]
pipe = Pipeline(tfms)
pipe.split_idx = 0
return pipe
```
### 1.2 Architecture
```
def create_encoder(arch, n_in=3, pretrained=True, cut=None, concat_pool=True):
"Create encoder from a given arch backbone"
encoder = create_body(arch, n_in, pretrained, cut)
pool = AdaptiveConcatPool2d() if concat_pool else nn.AdaptiveAvgPool2d(1)
return nn.Sequential(*encoder, pool, Flatten())
class MLP(Module):
"MLP module as described in paper"
def __init__(self, dim, projection_size=256, hidden_size=2048):
self.net = nn.Sequential(
nn.Linear(dim, hidden_size),
nn.BatchNorm1d(hidden_size),
nn.ReLU(inplace=True),
nn.Linear(hidden_size, projection_size)
)
def forward(self, x):
return self.net(x)
class BYOLModel(Module):
"Compute predictions of v1 and v2"
def __init__(self,encoder,projector,predictor):
self.encoder,self.projector,self.predictor = encoder,projector,predictor
def forward(self,v1,v2):
q1 = self.predictor(self.projector(self.encoder(v1)))
q2 = self.predictor(self.projector(self.encoder(v2)))
return (q1,q2)
def create_byol_model(arch=resnet50, hidden_size=4096, pretrained=True, projection_size=256, concat_pool=False):
encoder = create_encoder(arch, pretrained=pretrained, concat_pool=concat_pool)
with torch.no_grad():
x = torch.randn((2,3,128,128))
representation = encoder(x)
projector = MLP(representation.size(1), projection_size, hidden_size=hidden_size)
predictor = MLP(projection_size, projection_size, hidden_size=hidden_size)
apply_init(projector)
apply_init(predictor)
return BYOLModel(encoder, projector, predictor)
```
### 1.3 BYOLCallback
```
def _mse_loss(x, y):
x = F.normalize(x, dim=-1, p=2)
y = F.normalize(y, dim=-1, p=2)
return 2 - 2 * (x * y).sum(dim=-1)
def symmetric_mse_loss(pred, *yb):
(q1,q2),z1,z2 = pred,*yb
return (_mse_loss(q1,z2) + _mse_loss(q2,z1)).mean()
x = torch.randn((64,256))
y = torch.randn((64,256))
test_close(symmetric_mse_loss((x,y),y,x), 0) # perfect
test_close(symmetric_mse_loss((x,y),x,y), 4, 1e-1) # random
```
Useful Discussions and Supportive Material:
- https://www.reddit.com/r/MachineLearning/comments/hju274/d_byol_bootstrap_your_own_latent_cheating/fwohtky/
- https://untitled-ai.github.io/understanding-self-supervised-contrastive-learning.html
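The key moving part in the callback below is the target-network update in `after_step`: the target parameters are an exponential moving average of the online parameters. A scalar sketch of that rule (operating on plain lists rather than tensors):

```python
# EMA update of the target parameters (sketch): target <- T*target + (1-T)*online.
# As T is annealed toward 1, the target network changes more and more slowly.
def ema_update(target_params, online_params, T):
    return [T * t + (1.0 - T) * o for t, o in zip(target_params, online_params)]

print(ema_update([0.0], [1.0], T=0.99))
```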
```
import copy
class BYOLCallback(Callback):
"Implementation of https://arxiv.org/pdf/2006.07733.pdf"
def __init__(self, T=0.99, debug=True, size=224, **aug_kwargs):
self.T, self.debug = T, debug
self.aug1 = get_aug_pipe(size, **aug_kwargs)
self.aug2 = get_aug_pipe(size, **aug_kwargs)
def before_fit(self):
"Create target model"
self.target_model = copy.deepcopy(self.learn.model).to(self.dls.device)
self.T_sched = SchedCos(self.T, 1) # used in paper
# self.T_sched = SchedNo(self.T, 1) # used in open source implementation
def before_batch(self):
"Generate 2 views of the same image and calculate target projections for these views"
if self.debug: print(f"self.x[0]: {self.x[0]}")
v1,v2 = self.aug1(self.x), self.aug2(self.x.clone())
self.learn.xb = (v1,v2)
if self.debug:
print(f"v1[0]: {v1[0]}\nv2[0]: {v2[0]}")
self.show_one()
assert not torch.equal(*self.learn.xb)
with torch.no_grad():
z1 = self.target_model.projector(self.target_model.encoder(v1))
z2 = self.target_model.projector(self.target_model.encoder(v2))
self.learn.yb = (z1,z2)
def after_step(self):
"Update target model and T"
self.T = self.T_sched(self.pct_train)
with torch.no_grad():
for param_k, param_q in zip(self.target_model.parameters(), self.model.parameters()):
param_k.data = param_k.data * self.T + param_q.data * (1. - self.T)
def show_one(self):
b1 = self.aug1.normalize.decode(to_detach(self.learn.xb[0]))
b2 = self.aug1.normalize.decode(to_detach(self.learn.xb[1]))
i = np.random.choice(len(b1))
show_images([b1[i],b2[i]], nrows=1, ncols=2)
def after_train(self):
if self.debug: self.show_one()
def after_validate(self):
if self.debug: self.show_one()
```
## 2. Pretext Training
```
sqrmom=0.99
mom=0.95
beta=0.
eps=1e-4
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
bs=128
def get_dls(size, bs, workers=None):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
files = get_image_files(source)
tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=0.9)],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))
batch_tfms = [IntToFloatTensor]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
return dls
dls = get_dls(resize, bs)
model = create_byol_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, model, symmetric_mse_loss, opt_func=opt_func,
cbs=[BYOLCallback(T=0.99, size=size, debug=False), TerminateOnNaNCallback()])
learn.to_fp16();
learn.lr_find()
lr=1e-3
wd=1e-2
epochs=100
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)
save_name = f'byol_iwang_sz{size}_epc{epochs}'
learn.save(save_name)
torch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')
learn.load(save_name);
lr=1e-4
wd=1e-2
epochs=100
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)
save_name = f'byol_iwang_sz{size}_epc200'
learn.save(save_name)
torch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')
lr=1e-4
wd=1e-2
epochs=30
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)
save_name = f'byol_iwang_sz{size}_epc230'
learn.save(save_name)
torch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')
lr=5e-5
wd=1e-2
epochs=30
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)
save_name = f'byol_iwang_sz{size}_epc260'
learn.save(save_name)
torch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')
learn.recorder.plot_loss()
save_name
```
## 3. Downstream Task - Image Classification
```
def get_dls(size, bs, workers=None):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
files = get_image_files(source, folders=['train', 'val'])
splits = GrandparentSplitter(valid_name='val')(files)
item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
tfms = [[PILImage.create, ToTensor, *item_aug],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=splits)
batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
return dls
def do_train(epochs=5, runs=5, lr=2e-2, size=size, bs=bs, save_name=None):
dls = get_dls(size, bs)
for run in range(runs):
print(f'Run: {run}')
learn = cnn_learner(dls, xresnet34, opt_func=opt_func, normalize=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
pretrained=False)
# learn.to_fp16()
if save_name is not None:
state_dict = torch.load(learn.path/learn.model_dir/f'{save_name}_encoder.pth')
learn.model[0].load_state_dict(state_dict)
print("Model loaded...")
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=wd)
```
### ImageWang Leaderboard
**sz-256**
**Contrastive Learning**
- 5 epochs: 67.70%
- 20 epochs: 70.03%
- 80 epochs: 70.71%
- 200 epochs: 71.78%
**BYOL**
- 5 epochs: 64.74%
- 20 epochs: **71.01%**
- 80 epochs: **72.58%**
- 200 epochs: **72.13%**
### 5 epochs
```
# we are using old pretrained model with size 192 for transfer learning
# link: https://github.com/KeremTurgutlu/self_supervised/blob/252269827da41b41091cf0db533b65c0d1312f85/nbs/byol_iwang_192.ipynb
save_name = 'byol_iwang_sz192_epc230'
lr = 1e-2
wd=1e-2
bs=128
epochs = 5
runs = 5
do_train(epochs, runs, lr=lr, bs=bs, save_name=save_name)
np.mean([0.657165,0.637312,0.631967,0.646729,0.664291])
```
### 20 epochs
```
lr=2e-2
epochs = 20
runs = 3
do_train(epochs, runs, lr=lr, save_name=save_name)
np.mean([0.711631, 0.705269, 0.713413])
```
### 80 epochs
```
epochs = 80
runs = 1
do_train(epochs, runs, save_name=save_name)
```
### 200 epochs
```
epochs = 200
runs = 1
do_train(epochs, runs, save_name=save_name)
```
```
import torch
from torch import nn
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
hval = {}
def dhook(name):
def inner_hook(grad):
global hval
hval[name] = grad
return grad
return inner_hook
def to_plot(tensor):
return tensor.squeeze(1).T.detach().numpy()
def posteriorgram(data, xlab, ylab, title, **kwargs):
sns.heatmap(data, cmap="YlGnBu", cbar=True, **kwargs)
plt.xlabel(xlab)
plt.ylabel(ylab)
plt.title(title)
plt.gca().invert_yaxis()
def double_posterior(dat, gradient, **kwargs):
plt.figure(figsize=(18,14))
plt.subplot(211)
posteriorgram(dat, "Time", "Phones", "Activations", **kwargs)
plt.subplot(212)
posteriorgram(gradient, "Time", "Phones", "Gradient", **kwargs)
```
## CTC and Baum Welch
The Baum-Welch procedure is the foundation of CTC; however, CTC is more constrained. The two main constraints are:
1. The blank label - CTC mandates a blank label at which the network makes no prediction. This allows the network to be selective with its predictions.
2. Constrained transition matrix - the CTC transition matrix only allows staying in the current phone, moving to blank, or moving to the next voiced phone (from blank, it can stay blank or move to the next phone). Phones can be repeated, but there are no branching paths and no recursive transitions.
These constraints are not well articulated (in fact Graves does not make any explicit links to Baum-Welch) and are poorly justified. We will focus on part 2, since blank-label investigations require real data for good intuition.
We refer to "Multi-path CTC" for the derivation of the equations used here. They have been verified to work within the PyTorch autograd framework. This is a significant result, as implementing CTC is non-trivial, and a version with custom transition matrices would be an incredible software engineering undertaking.
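One numerical caveat: summing `exp(-loss)` directly, as the code below does, can underflow for large CTC losses. A log-space combination of per-path losses (a framework-agnostic sketch of the same quantity) avoids this:

```python
import math

# -log(sum_k exp(-loss_k)) computed stably by factoring out the smallest loss;
# the sum inside the log is then >= 1, so nothing underflows to zero.
def combine_path_losses(losses):
    m = min(losses)
    return m - math.log(sum(math.exp(m - l) for l in losses))

print(combine_path_losses([2.0, 2.0]))  # equals 2 - log(2)
```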
```
def multi_ctc(ctc, data, targets, inlen, target_len):
"""
For simplicity, we assume that the targets are equal length
This is not necessary, but makes for slightly simpler code here
"""
loss = 0
for target in targets:
l = ctc(data, target, inlen, target_len)
# print(f"{l:.4f}:{np.exp(-l.item()):.4f} for {target}")
loss += torch.exp(-l)
totloss = -torch.log(loss)
# print(f"combo: {totloss}, {loss}")
return totloss
def train_multi(data, targets, epochs=100):
T, N, C = data.shape
data = data.requires_grad_(True)
# if we don't take sum, the CTCLoss will be
# averaged across time and this means that our
# equation won't be correct since each path sum is kind of normalised
ctcloss = nn.CTCLoss(reduction="sum")
inlen = torch.IntTensor([T])
target_len = torch.IntTensor([len(targets[0])])
global hval
hval = {}
for epoch in range(epochs):
data.register_hook(dhook("dgrad"))
ds = data.log_softmax(2)
ds.register_hook(dhook("ds"))
loss = multi_ctc(ctcloss, ds, targets, inlen, target_len)
loss.backward()
# bootleg SGD
data = data - .5 * hval["dgrad"]
endp = data.softmax(2).squeeze(1).T.detach().numpy()
grad = hval["ds"].squeeze(1).T.detach().numpy()
return endp, grad
data = torch.zeros(3, 1, 5)
targets = [torch.IntTensor([1,3, 4]) , torch.IntTensor([1,2,4])]
end, g = train_multi(data, targets, 1)
double_posterior(to_plot(data.softmax(2)), g, annot=True)
end, g = train_multi(data, targets, 100)
double_posterior(end, g, annot=True)
data_b = torch.rand(3, 1, 5)
# data_b[0,0,1] += 10
data_b[1,0,2] += 2
# data_b[2,0,4] += 10
targets = [torch.IntTensor([1,2, 4]), torch.IntTensor([1,3,4])]
plt.figure(figsize=(14,7))
posteriorgram(to_plot(data_b.softmax(2)), "time", "phone", "data", annot=True)
end, g = train_multi(data_b, targets, 20)
double_posterior(end, g, annot=True)
data_N = torch.rand(3, 1, 5)
data_N[1,0,:] = 0
data_N[1,0,2] = 1
data_N[1,0,3] = 1.01
# data_b[2,0,4] += 10
targets = [torch.IntTensor([1,2, 4]), torch.IntTensor([1,3,4])]
plt.figure(figsize=(18,7))
posteriorgram(to_plot(data_N.softmax(2)), "time", "phone", "data", annot=True)
end, g = train_multi(data_N, targets, 1000)
double_posterior(end, g, annot=True)
```
The above plots show that the multi-path CTC training that would be a natural extension of Baum-Welch doesn't actually work very well here. Even from small starting differences, the eventual result is that the slightly more likely path becomes the overwhelming favourite. In contrast, values that are identical remain so the entire way.
```
data_N = torch.rand(10, 1, 5)
# data_N[1,0,2] = 1
# data_N[1,0,3] = 1.1
# data_b[2,0,4] += 10
targets = [torch.IntTensor([1,2, 4]), torch.IntTensor([1,3,4])]
plt.figure(figsize=(18,7))
posteriorgram(to_plot(data_N.softmax(2)), "time", "phone", "data", annot=True)
end, g = train_multi(data_N, targets, 1000)
double_posterior(end, g, annot=True)
```
What does this mean when we have training data? This can be illustrated easily with a few thought experiments.
Consider the phones "ah" and "a". This is a common difference in pronunciation (tomato, alexa etc), so being able to learn the correct phone is useful. Let us assume we've done some curriculum learning, and our network is reasonably good at picking out these sounds. Now let us introduce branching paths.
1. The output for the 'a' vs 'ah' phone is close to 1, i.e. the network is that confident about the different phones. What will the gradients be in this case? Since the gradients are weighted by likelihood, the correct path is identified and the gradients should be mostly 0. Hence, our network should be stable, but this is expected since it has already converged on the correct phones - the fact that we are training isn't helpful at this point.
2. What about a situation where the a vs ah sound is not so clear, say 0.7 for the correct phone and 0.3 for the wrong phone. In this case, each training example will make the correct pronunciation more correct. In the context of a batch, the gradients are now effectively weighted by the makeup of the pronunciations.
- I.e. if the batch is evenly weighted 50-50 between "ah" and "a", then the gradients effectively cancel each other out and, again, we learn nothing between the phones in the batch. In this case, I believe we would eventually lose our ability to distinguish between 'a' and 'ah', always putting out 0.5, 0.5 for the phones (or would it just learn nothing in this case?)
- If the batch is not evenly weighted, the pronunciation that is more common will dominate, and our network will switch to predicting the more common pronunciation; the other phone would no longer be predicted at all.
The above results were seen in experimental tests. Training on Australian accents and then adding in a large number of American accent examples (a large imbalance) resulted in our network shifting to always predicting American accents.
# Fitbit Data Analysis
## About Fitbit Data Analysis
This project provides some high-level data analysis of steps, sleep, heart rate and weight data from Fitbit tracking.
Please use the `fitbit_downloader` file first to collect and export your data.
-------
### Dependencies and Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt, matplotlib.font_manager as fm
from datetime import datetime
import seaborn
%matplotlib inline
```
-------
# Steps
```
daily_steps = pd.read_csv('data/daily_steps.csv', encoding='utf-8')
daily_steps['Date'] = pd.to_datetime(daily_steps['Date'])
daily_steps['dow'] = daily_steps['Date'].dt.weekday
daily_steps['day_of_week'] = daily_steps['Date'].dt.day_name()  # day_name() replaces the removed weekday_name attribute
daily_steps.tail()
len(daily_steps)
# drop days with no steps
daily_steps = daily_steps[daily_steps.Steps > 0]
len(daily_steps)
daily_steps.Steps.max()
daily_steps.Steps.min()
daily_steps.Steps.mean()
```
### Step Charts
```
daily_steps['RollingMeanSteps'] = daily_steps.Steps.rolling(window=10, center=True).mean()
daily_steps.plot(x='Date', y='RollingMeanSteps', title= 'Daily step counts rolling mean over 10 days')
daily_steps.groupby(['dow'])['Steps'].mean()
ax = daily_steps.groupby(['dow'])['Steps'].mean().plot(kind='bar', x='day_of_week')
plt.suptitle('Average Steps by Day of the Week', fontsize=16)
plt.xlabel('Day of Week: 0 = Monday, 6 = Sunday', fontsize=12, color='red')
```
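Beyond the day-of-week averages, daily counts can be aggregated to coarser periods with `resample`. A small sketch (using a toy frame in place of `daily_steps`, since `resample` needs a `DatetimeIndex`):

```python
import pandas as pd

# Weekly step totals from daily data; 'daily' stands in for daily_steps above.
daily = pd.DataFrame({
    'Date': pd.date_range('2020-01-01', periods=14, freq='D'),
    'Steps': [8000] * 14,
})
weekly = daily.set_index('Date')['Steps'].resample('W').sum()
print(weekly)
```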
# Sleep
```
daily_sleep = pd.read_csv('data/daily_sleep.csv', encoding='utf-8')
daily_inbed = pd.read_csv('data/daily_inbed.csv', encoding='utf-8')
len(daily_sleep)
sleep_data = pd.merge(daily_sleep, daily_inbed, how='inner', on='Date')
sleep_data['Date'] = pd.to_datetime(sleep_data['Date'])
sleep_data['dow'] = sleep_data['Date'].dt.weekday
sleep_data['day_of_week'] = sleep_data['Date'].dt.day_name()  # day_name() replaces the removed weekday_name attribute
sleep_data['day_of_week'] = sleep_data["day_of_week"].astype('category')
sleep_data['InBedHours'] = round((sleep_data.InBed / 60), 2)
sleep_data['Hours'] = round((sleep_data.Sleep / 60), 2)  # sleep hours, used in the plots below
sleep_data = sleep_data[sleep_data.Sleep > 0]
len(sleep_data)
sleep_data.info()
sleep_data.tail()
sleep_data.describe()
sleep_data.plot(x='Date', y='Hours')
sleep_data['RollingMeanSleep'] = sleep_data.Sleep.rolling(window=10, center=True).mean()
sleep_data.plot(x='Date', y='RollingMeanSleep', title= 'Daily sleep counts rolling mean over 10 days')
sleep_data.groupby(['dow'])['Hours'].mean()
ax = sleep_data.groupby(['dow'])['Hours'].mean().plot(kind='bar', x='day_of_week')
plt.suptitle('Average Sleep by Night of the Week', fontsize=16)
plt.xlabel('Day of Week: 0 = Monday, 6 = Sunday', fontsize=12, color='red')
```
Evaluate experimental designs using D-efficiency.
**Definitions**:
$\mathbf{X}$ is the model matrix: A row for each run and a column for each term in the model.
For instance, a model assuming only main effects:
$\mathbf{Y} = \mathbf{X} \beta + \epsilon$
$\mathbf{X}$ will contain $p = m + 1$ columns (number of factors + intercept).
The D-optimality criterion, normalized per run, gives the *D-efficiency*:
*D-efficiency* $= \frac{1}{n}|\mathbf{X}'\mathbf{X}|^{1/p}$
**D-efficiency**
D-efficiency compares the design $\mathbf{X}$ with the D-optimal design $\mathbf{X_D}$ for the assumed model:
*D-efficiency* $= \left[ \frac{|\mathbf{X}'\mathbf{X}|}{|\mathbf{X_D}'\mathbf{X_D}|} \right]^{1/p}$
In JMP, the D-efficiency compares the design with an orthogonal design in terms of D-optimality criterion:
*D-efficiency* $= 100 \left(\frac{1}{n}|\mathbf{X}'\mathbf{X}|^{1/p}\right)$
In orthogonal designs (see Olive, D.J. (2017) Linear Regression, Springer, New York, NY.):
* the entries in the model matrix $\mathbf{X}$ are either -1 or 1,
* the columns are orthogonal: $c_i^T c_j = 0$ for $i \neq j$,
* $c_i^T c_i = n$, where $n$ is the number of experiments,
* the sum of the absolute value of the columns is $n$.
For an orthogonal design with $p$ factors and $n$ experiments:
$\mathbf{X}'\mathbf{X} = n \mathbf{I}$
and therefore:
$|\mathbf{X}'\mathbf{X}| = n^p $.
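This identity is easy to check numerically for a small full factorial design:

```python
import itertools
import numpy as np

# Full 2^3 factorial: 8 runs, columns in {-1, 1} and mutually orthogonal,
# so X'X = n*I and |X'X| = n^p.
X = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
n, p = X.shape
print(np.linalg.det(X.T @ X), n ** p)  # both 512
```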
**Designs**
D-optimal designs maximize $D$:
$D = |\mathbf{X}'\mathbf{X}|$
(no need for the other terms because they are constant)
D-optimal split-plot designs maximize:
$D = |\mathbf{X}'\mathbf{V}^{-1}\mathbf{X}|$
where $\mathbf{V}$ is the block diagonal covariance matrix of the responses.
Split-plot designs are those in which some factors are harder to vary than others. The covariance indicates the ratio of the whole-plot and subplot variances to the error variance.
**Estimation efficiency**
There are several related measures (see the JMP guide). The basic one is the relative standard error of an estimate, i.e., how large the standard errors of the model's parameter estimates are relative to the error standard deviation.
*SE* $= \sqrt{\left(\mathbf{X}'\mathbf{X}\right)_{ii}^{-1}}$
where $\left(\mathbf{X}'\mathbf{X}\right)_{ii}^{-1}$ is the $i$-th diagonal element of $\left(\mathbf{X}'\mathbf{X}\right)^{-1}$.
### Calculation of D-efficiency with categorical variables
With categorical variables with $l$ levels, we need to use $l-1$ dummy variables. There are several possible [contrast codings](https://juliastats.github.io/StatsModels.jl/latest/contrasts.html).
* One possibility is to perform a **one-hot encoding** with values -1 and 1 (a hypercube) and take the $l-1$ levels.
* In order to perform the dimension reduction, one approach is to express the points in their $l-1$ principal directions.
* In order to make the design orthogonal, the products of the columns of the dummy variables need to add up to the number of variables: $\sum_{j=1}^{l-1} c_{ij}^T c_{ij} = l-1$. My approach: a) normalize the resulting vectors, b) multiply them by $\sqrt{l-1}$.
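One way to realize these steps (a sketch of the approach described above; the helper `level_contrasts` is not part of any library):

```python
import numpy as np

# Build l-1 contrast columns for a factor with l levels:
# 1) +/-1 one-hot rows, 2) center and project onto the l-1 principal
# directions, 3) normalize each level's vector and rescale by sqrt(l-1).
def level_contrasts(l):
    H = 2.0 * np.eye(l) - 1.0                      # +/-1 one-hot encoding
    Hc = H - H.mean(axis=0)                        # center before projecting
    _, _, Vt = np.linalg.svd(Hc)
    P = Hc @ Vt[:l - 1].T                          # l-1 principal directions
    P /= np.linalg.norm(P, axis=1, keepdims=True)  # normalize each row
    return P * np.sqrt(l - 1)

print(level_contrasts(3))
```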
# Example
An example for 2-factor model with main effects:
```
import numpy as np
X1 = np.matrix( "[1 1 1; 1 -1 1; 1 -1 -1]")
X1
def Deff(X):
# D-efficiency
return (100.0/X.shape[0]) * ( np.linalg.det( np.dot( np.transpose( X ), X ) )**(1.0/X.shape[1]))
def Dopt(X):
# D-optimality
return np.linalg.det( np.dot( np.transpose( X ), X ) )
def SE(X):
# Estimation efficiency
return np.diag( np.linalg.inv( np.dot( np.transpose( X ), X ) ) )
def Contrib(X):
cn = []
for i in range(0, X.shape[0]):
cn.append( Dopt( np.vstack( [X[:i,:], X[(i+1):,:]] ) ) )
return cn
def VarAdd(X,xj):
# Variance of adding/removing one
return np.dot( np.dot( np.transpose(xj) , np.linalg.inv( np.dot( np.transpose( X ), X) ) ), xj )
Contrib(X1)
SE(X1)
```
# Algorithm
```
import itertools
p = 10 # Number of factors
n = 24 # Number of runs
# Initial design
X = np.random.randint(0,2,(n,p+1))
val = []
for x in np.arange(p):
val.append([0,1])
ffact = []
for m in itertools.product(*val):
ffact.append(m)
ffact = np.array(ffact)
X = ffact[np.random.randint(0,len(ffact),n),:]
J = 0
w = 0
while ( (J < 1e4) and (w < 10000) ):
d2 = None
d6 = None
w += 1
try:
d1 = Deff( X )
d2 = Dopt( X )
d3 = SE( X )
d4 = Contrib( X )
except:
continue
J = max(J, d1)
i = np.argmin( d4 )
j = np.argmax( d3 )
X1 = X.copy()
k = 0
while( k < 10 ):
i = np.argsort( d4 )[ np.random.randint(0,5)]
j = np.flip( np.argsort( d3 ) )[0] # [ np.random.randint(0,5)]
X1[i,:] = ffact[np.random.randint(0,len(ffact),1), :]
# if X[i,j] == 0:
# X1[i,j] = 1
# else:
# X1[i,j] = 0
k += 1
try:
d5 = Deff( X1 )
d6 = Dopt( X1 )
d7 = SE( X1 )
d8 = Contrib( X1 )
except:
continue
if d6 > d2:
X = X1
print(w,J,d1,d2,d6,i,j)
print(w,J,d1,d2,d6,i,j)
p = 100 # Number of factors
n = 150 # Number of runs
m = 100 # Sampled design space per iteration
# Here we generate a full factorial but this is not possible for large designs
# Replace by random sampling, ideally some descent algorithm (TBD)
# For categorical variables, we randomize the levels that are then mapped into the dummy variables
FULL = False
if FULL:
val = []
for x in np.arange(p+1):
val.append([-1,1])
ffact = []
for m in itertools.product(*val):
ffact.append(m)
ffact = np.array(ffact)
# Initial design: could we do something better than a purely random start?
X = np.array([-1,1])[ np.random.randint(0,2,(n,p+1)) ]
# D-Efficiency of the initial design
# Here I have implemented a simple DETMAX algorithm. At each iteration:
# - remove the design with the lowest variance
# - add the design with the highest variance
# Many more efficient variants exist (k-l exchange, etc.)
J = Deff(X)
print(J)
w = 0
while ((J<99.0) and (w < 100)):
# X1 is the design space sample in the iteration.
# Here we loop through the full factorial, which is computationally expensive
# The first thing to do is to change it to random generation of a subset of the full library
# It would be better to move across some surface like gradient descent...
if FULL:
X1 = ffact
else:
X1 = np.array([-1,1])[ np.random.randint(0,2,(m,p+1)) ]
sub = []
for i in np.arange(X.shape[0]):
sub.append( VarAdd(X, X[i,:]) )
w += 1
Xsub = None
dList = np.argsort( sub )[0:1]
for i in np.arange(X.shape[0]):
if i in dList:
continue
else:
if Xsub is None:
Xsub = X[i,:]
else:
Xsub = np.vstack( [Xsub, X[i,:]] )
add = []
for j in np.arange(X1.shape[0]):
add.append( VarAdd( Xsub, X1[j,:] ) )
aList = np.flip( np.argsort( add ) )[0:1]
Xn = Xsub
for j in aList:
Xn = np.vstack( [Xn, X1[j,:] ] )
if w % 100 == 0:
print(w,J,i,j, Dopt(X), Dopt(Xsub), Dopt(Xn))
if Dopt(Xn) > Dopt(X):
X = Xn
J = Deff(X)
elif Dopt(Xn) == Dopt(X):
break
print(w,J,i,j)
X.shape
X = np.random.randint(0,2,(n,p+1))
X1 = np.random.randint(0,2,(n,p+1))
add = []
for i in np.arange(X1.shape[0]):
add.append( VarAdd(X, X1[i,:]) )
sub = []
for i in np.arange(X.shape[0]):
sub.append( VarAdd(X, X[i,:]) )
sub
Dopt( X1 )
d4
np.dot( np.transpose( X), np.array( X ) )
np.transpose( X )
np.linalg.det( np.dot( np.transpose( X ), X ) )
24**11
import dexpy.optimal
from dexpy.model import ModelOrder
reaction_design = dexpy.optimal.build_optimal(50, run_count=64, order=ModelOrder.linear)
#reaction_design
from sklearn.preprocessing import OneHotEncoder
```
The dexpy library works fine, but it does not support dummy variables. Therefore, if we add dummy variables, there will be clashes between levels in the factor (multiple levels set simultaneously). Also, if we increase the order of the model, it will generate intermediate values that are not actually meaningful for dummy variables.
Dummy variables could probably be better processed by generating a full factorial with labels and then convert to dummy after sampling.
```
reaction_design
np.matrix( "[1 1 1; 2 3 4]")
import pandas as pd
df = pd.read_excel('/mnt/SBC1/data/OptimalDesign/data/CD.xlsx')
XX = np.array( df.iloc[:,0:5] )
Deff(XX)
X = np.array([-1,-1])[ np.random.randint(0,2,(n,p+1)) ]
X
```
# Introduction to Neural Networks
Based on the lab exercises from deeplearning.ai, using public datasets and personal flair.
## Objectives
- Build the general architecture of a learning algorithm, including:
    - initializing parameters
    - calculating the cost function and its gradient
    - using an optimization algorithm
- Gather all three functions above into a main model function, in the right order.
## Import Packages
```
import os
import random
import re
import numpy as np
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from sklearn.model_selection import train_test_split
%matplotlib inline
```
## Dataset
Data will be taken from Kaggle's [Dogs vs. Cats](https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/data) dataset.
From Kaggle's description:
>The train folder contains 25,000 images of dogs and cats. Each image in this folder has the label as part of the filename. The test folder contains 12,500 images, named according to a numeric id. For each image in the test set, you should predict a probability that the image is a dog (1 = dog, 0 = cat).
Steps to reproduce:
- preprocess train and validation set
    - (optional) select subset of training set
    - resize images to all be the same (128x128)
    - flatten images
- build logistic regression model as a single-layer neural network
    - initialize weight matrix
    - write forward and backprop functions, defining the log loss cost function
    - optimize learning
```
TRAIN_PATH = 'C:/Users/JYDIW/Documents/kaggle-datasets/dogs-vs-cats-redux-kernels-edition/train/'
TEST_PATH = 'C:/Users/JYDIW/Documents/kaggle-datasets/dogs-vs-cats-redux-kernels-edition/test/'
ROWS = 128
COLS = 128
CHANNELS = 3
m_train = 800
m_val = 200
m_total = m_train + m_val
all_train_dogs = [TRAIN_PATH+f for f in os.listdir(TRAIN_PATH) if 'dog' in f]
all_train_cats = [TRAIN_PATH+f for f in os.listdir(TRAIN_PATH) if 'cat' in f]
all_train_images = random.sample(all_train_dogs, m_total//2) + random.sample(all_train_cats, m_total//2)
random.shuffle(all_train_images)
train_images, val_images = train_test_split(all_train_images, test_size=m_val)
# all_test_images = [TEST_PATH+f for f in os.listdir(TEST_PATH)]
# test_images = random.sample(all_test_images, m_test)
def read_image(image_path, as_array=False):
    img = Image.open(image_path)
    if as_array:
        return np.asarray(img.resize((COLS, ROWS)))
    return img.resize((COLS, ROWS))

def resize_images(images):
    count = len(images)
    data = np.ndarray((count, ROWS, COLS, CHANNELS), dtype=np.uint8)
    for i, file in enumerate(images):
        img = read_image(file, as_array=True)
        data[i] = img
        if (i+1) % 250 == 0:
            print(f'Processed {i+1} of {count}')
    return data
print(read_image(train_images[0], as_array=True).shape)
read_image(train_images[0])
train_images_resized = resize_images(train_images)
val_images_resized = resize_images(val_images)
def generate_labels(images):
    labels = np.zeros((1, np.array(images).shape[0]), dtype=np.uint8)
    for i, img in enumerate(images):
        if re.findall(r'.+\/(\w+)\.\d+\.jpg', img)[0] == 'dog':
            labels[0][i] = 1
        # else:
        #     labels[0][i] = 0
    return labels
y_train = generate_labels(train_images)
y_val = generate_labels(val_images)
def flatten_and_normalize_images(images):
    return images.reshape(images.shape[0], -1).T / 255
X_train = flatten_and_normalize_images(train_images_resized)
X_val = flatten_and_normalize_images(val_images_resized)
print(X_train.shape)
print(y_train.shape)
```
## Building the Algorithm
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
    - Calculate current loss (forward propagation)
    - Calculate current gradient (backward propagation)
    - Update parameters (gradient descent)
```
def sigmoid(z):
    return 1 / (1 + np.exp(-1 * z))

def initialize_with_zeros(dim):
    w = np.zeros((dim, 1))
    b = 0
    return w, b

def negative_log_likelihood(A, y, m):
    J = -1 * np.sum(y * np.log(A) + (1 - y) * np.log(1 - A)) / m
    return J

def propagate(w, b, X, y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    cost = negative_log_likelihood(A, y, m)
    dw = np.dot(X, (A - y).T) / m
    db = np.sum(A - y) / m
    cost = np.squeeze(cost)
    grads = {"dw": dw, "db": db}
    return grads, cost

def optimize(w, b, X, y, num_iterations, learning_rate, verbose=False):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, y)
        dw = grads['dw']
        db = grads['db']
        w -= learning_rate * dw
        b -= learning_rate * db
        if i % 100 == 0:
            costs.append(cost)
            if verbose:
                print(f'cost after iteration {i}: {cost}')
    params = {'w': w, 'b': b}
    grads = {'dw': dw, 'db': db}
    return params, grads, costs

def predict(w, b, X):
    m = X.shape[-1]
    y_pred = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)
    A = sigmoid(np.dot(w.T, X) + b)
    for i in range(A.shape[1]):
        y_pred[0][i] = (A[0][i] > 0.5)
    return y_pred

def model(X_train, y_train, X_val, y_val, num_iterations=2000, learning_rate=0.5, verbose=False):
    w, b = initialize_with_zeros(X_train.shape[0])
    params, grads, costs = optimize(w, b, X_train, y_train, num_iterations, learning_rate, verbose)
    w = params['w']
    b = params['b']
    y_pred_train = predict(w, b, X_train)
    y_pred_val = predict(w, b, X_val)
    print(f'train accuracy: {(100 - np.mean(np.abs(y_pred_train - y_train)) * 100)}')
    print(f'validation accuracy: {(100 - np.mean(np.abs(y_pred_val - y_val)) * 100)}')
    d = {"costs": costs,
         "y_prediction_test": y_pred_val,
         "y_prediction_train": y_pred_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}
    return d
m = model(X_train, y_train, X_val, y_val, 2000, 0.005, True)
```
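A common follow-up (a sketch, not part of the original lab) is to plot the recorded costs to inspect convergence:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_learning_curve(costs, learning_rate):
    # costs were appended every 100 iterations inside optimize() above.
    fig, ax = plt.subplots()
    ax.plot(np.squeeze(costs))
    ax.set_xlabel('iterations (per hundreds)')
    ax.set_ylabel('cost')
    ax.set_title(f'learning rate = {learning_rate}')
    return fig

# e.g. plot_learning_curve(m["costs"], m["learning_rate"])
```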
## Adding Layers to the Model
Steps to reproduce:
- Initialize the parameters for a two-layer network and for an $L$-layer neural network.
- Implement the forward propagation module.
    - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
    - The ACTIVATION functions (relu/sigmoid) are provided.
    - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
    - Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives a new L_model_forward function.
- Compute the loss.
- Implement the backward propagation module.
    - Complete the LINEAR part of a layer's backward propagation step.
    - The gradients of the ACTIVATION functions (relu_backward/sigmoid_backward) are provided.
    - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
    - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function.
- Finally, update the parameters.
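The forward half of those steps can be sketched as follows (the initialization scheme and layer sizes here are illustrative, not the lab's exact code):

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def initialize_parameters_deep(layer_dims, seed=1):
    # layer_dims = [n_x, n_h1, ..., n_y]; small random weights, zero biases.
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params[f'W{l}'] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
        params[f'b{l}'] = np.zeros((layer_dims[l], 1))
    return params

def L_model_forward(X, params):
    # [LINEAR->RELU] x (L-1), then a final LINEAR->SIGMOID.
    L = len(params) // 2
    A = X
    for l in range(1, L):
        A = relu(params[f'W{l}'] @ A + params[f'b{l}'])
    return sigmoid(params[f'W{L}'] @ A + params[f'b{L}'])
```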
# R API Serving Examples
In this example, we demonstrate how to quickly compare the runtimes of three methods for serving a model from an R hosted REST API. The following SageMaker examples discuss each method in detail:
* **Plumber**
    * Website: [https://www.rplumber.io/](https://www.rplumber.io)
    * SageMaker Example: [r_serving_with_plumber](../r_serving_with_plumber)
* **RestRServe**
    * Website: [https://restrserve.org](https://restrserve.org)
    * SageMaker Example: [r_serving_with_restrserve](../r_serving_with_restrserve)
* **FastAPI** (reticulated from Python)
    * Website: [https://fastapi.tiangolo.com](https://fastapi.tiangolo.com)
    * SageMaker Example: [r_serving_with_fastapi](../r_serving_with_fastapi)
We will reuse the docker images from each of these examples. Each one is configured to serve a small XGBoost model which has already been trained on the classical Iris dataset.
## Building Docker Images for Serving
First, we will build each docker image from the provided SageMaker Examples.
### Plumber Serving Image
```
!cd .. && docker build -t r-plumber -f r_serving_with_plumber/Dockerfile r_serving_with_plumber
```
### RestRServe Serving Image
```
!cd .. && docker build -t r-restrserve -f r_serving_with_restrserve/Dockerfile r_serving_with_restrserve
```
### FastAPI Serving Image
```
!cd .. && docker build -t r-fastapi -f r_serving_with_fastapi/Dockerfile r_serving_with_fastapi
```
## Launch Serving Containers
Next, we will launch each serving container. The containers will be launched on the following ports to avoid port collisions on your local machine or SageMaker Notebook instance:
```
ports = {
    "plumber": 5000,
    "restrserve": 5001,
    "fastapi": 5002,
}
!bash launch.sh
!docker container list
```
## Define Simple Client
```
import requests
from tqdm import tqdm
import pandas as pd
def get_predictions(examples, instance=requests, port=5000):
    payload = {"features": examples}
    return instance.post(f"http://127.0.0.1:{port}/invocations", json=payload)

def get_health(instance=requests, port=5000):
    return instance.get(f"http://127.0.0.1:{port}/ping")
```
## Define Example Inputs
Next, we define example inputs from the classical [Iris](https://archive.ics.uci.edu/ml/datasets/iris) dataset.
* Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
```
column_names = ["Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width", "Label"]
iris = pd.read_csv(
    "s3://sagemaker-sample-files/datasets/tabular/iris/iris.data", names=column_names
)
iris_features = iris[["Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"]]
example = iris_features.values[:1].tolist()
many_examples = iris_features.values[:100].tolist()
```
## Testing
Now it's time to test how each API server performs under stress.
We will test two use cases:
* **New Requests**: In this scenario, we test how quickly the server can respond with predictions when each client request establishes a new connection with the server. This simulates the server's ability to handle real-time requests. We could make this more realistic by creating an asynchronous environment that tests the server's ability to fulfill concurrent rather than sequential requests.
* **Keep Alive / Reuse Session**: In this scenario, we test how quickly the server can respond with predictions when each client request uses a session to keep its connection to the server alive between requests. This simulates the server's ability to handle sequential batch requests from the same client.
For each of the two use cases, we will test the performance on following situations:
* 1000 requests of a single example
* 1000 requests of 100 examples
* 1000 pings for health status
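tqdm already reports an iterations-per-second rate; if an explicit number is wanted for the comparison, a small helper like this (a sketch, not part of the original examples) can wrap any of the request loops below:

```python
import time

def time_requests(fn, n=1000):
    # Times n sequential calls of fn (one request each); returns total
    # elapsed seconds and the effective requests-per-second rate.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed, n / elapsed

# e.g. time_requests(lambda: get_predictions(example, port=ports["plumber"]))
```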
## New Requests
### Plumber
```
# verify the prediction output
get_predictions(example, port=ports["plumber"]).json()

for i in tqdm(range(1000)):
    _ = get_predictions(example, port=ports["plumber"])

for i in tqdm(range(1000)):
    _ = get_predictions(many_examples, port=ports["plumber"])

for i in tqdm(range(1000)):
    get_health(port=ports["plumber"])
```
### RestRServe
```
# verify the prediction output
get_predictions(example, port=ports["restrserve"]).json()

for i in tqdm(range(1000)):
    _ = get_predictions(example, port=ports["restrserve"])

for i in tqdm(range(1000)):
    _ = get_predictions(many_examples, port=ports["restrserve"])

for i in tqdm(range(1000)):
    get_health(port=ports["restrserve"])
```
### FastAPI
```
# verify the prediction output
get_predictions(example, port=ports["fastapi"]).json()

for i in tqdm(range(1000)):
    _ = get_predictions(example, port=ports["fastapi"])

for i in tqdm(range(1000)):
    _ = get_predictions(many_examples, port=ports["fastapi"])

for i in tqdm(range(1000)):
    get_health(port=ports["fastapi"])
```
## Keep Alive (Reuse Session)
Now, let's test how each one performs when each request reuses a session connection.
```
# reuse the session for each post and get request
instance = requests.Session()
```
### Plumber
```
for i in tqdm(range(1000)):
    _ = get_predictions(example, instance=instance, port=ports["plumber"])

for i in tqdm(range(1000)):
    _ = get_predictions(many_examples, instance=instance, port=ports["plumber"])

for i in tqdm(range(1000)):
    get_health(instance=instance, port=ports["plumber"])
```
### RestRServe
```
for i in tqdm(range(1000)):
    _ = get_predictions(example, instance=instance, port=ports["restrserve"])

for i in tqdm(range(1000)):
    _ = get_predictions(many_examples, instance=instance, port=ports["restrserve"])

for i in tqdm(range(1000)):
    get_health(instance=instance, port=ports["restrserve"])
```
### FastAPI
```
for i in tqdm(range(1000)):
    _ = get_predictions(example, instance=instance, port=ports["fastapi"])

for i in tqdm(range(1000)):
    _ = get_predictions(many_examples, instance=instance, port=ports["fastapi"])

for i in tqdm(range(1000)):
    get_health(instance=instance, port=ports["fastapi"])
```
### Stop All Serving Containers
Finally, we will shut down the serving containers we launched for the tests.
```
!docker kill $(docker ps -q)
```
## Conclusion
In this example, we demonstrated how to conduct a simple performance benchmark across three R model serving solutions. We leave the choice of serving solution up to the reader since in some cases it might be appropriate to customize the benchmark in the following ways:
* Update the serving example to serve a specific model
* Perform the tests across multiple instances types
* Modify the serving example and client to test asynchronous requests.
* Deploy the serving examples to SageMaker Endpoints to test within an autoscaling environment.
For more information on serving your models in custom containers on SageMaker, please see our [support documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-main.html) for the latest updates and best practices.