# DAE 4 Descriptive Statistics
Author: Sofia Dahl, sof@create.aau.dk
Dept. Architecture, Design and Media Technology, Aalborg University Copenhagen
## Learning goals
After working your way through this notebook you should be able to:
- Explain what is meant by 'population' and 'sample'
- Plot and summarise numerical data by showing
+ Central Tendency
+ Variability
- Describe the distribution of data using histograms
- Explain the relationship between standard error of mean and sample size
## Table of Contents
- [1. Populations and Samples](#Populations)
- [2. Summarizing data](#Summarizing)
+ [2.1. Central Tendency](#Tendency)
+ [2.2. Variability](#variability)
- [3. Frequency Distributions](#Distributions)
+ [3.1. Normal Distribution](#Normaldist)
- [4. Sample means and sample size](#SampleMeanSize)
- [5. Standard Error of the Mean](#StandardError)
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats
```
<a id= 'Populations'> </a>
## 1. Populations and Samples
A *population* is the full set of the individuals of interest to us in our research study. This does not have to be living people, but it is the whole set: ALL chairs; ALL readers of Field & Hole's book; ALL players of World of Warcraft; ALL Danish women; ALL students on Medialogy 4th semester...
'All individuals' may be quite many, and it is often neither practical nor feasible to try to measure the whole population. A *sample* is intended to be representative of the population, and we can use the sample to make inferences about the population in question.
For these inferences to be trustworthy, the sample should ideally be drawn so that every individual within the population has an equal chance of being selected in the sample.
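A simple random draw can be sketched with NumPy. The population below is simulated, and its size, mean and standard deviation are made-up values for illustration only:

```python
import numpy as np

# Hypothetical population of 10,000 heights (cm); the parameters are made up
rng = np.random.default_rng(seed=1)
population = rng.normal(178.4, 7.59, 10_000)

# Simple random sample: every individual has an equal chance of being selected
sample = rng.choice(population, size=23, replace=False)
print(sample.mean())  # the sample mean is an estimate of the population mean
```

Because the draw is without replacement and every index is equally likely, this mimics the ideal sampling scheme described above.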
### TRY OUT 1:
Draw a sample of size N and plot the heights. Repeat several times with different N.
```
# TRY OUT : Draw a sample of size N and plot the heights.
#Repeat several times with different N.
N=23
# Define X-axis range
x=np.arange(1,N+1)
# Draw a sample - Parameters are (MEAN, STANDARD DEVIATION, N)
nscores=np.random.normal(178.4,7.59,N)
# Plot sample in two dimensions
plt.scatter(x,nscores)
plt.xlabel('individual')
plt.ylabel('height (cm)')
plt.show()
```
In real studies, random sampling is typically much easier said than done. Therefore, HOW you recruit test participants is something that should be carefully considered so that you do not end up with bias and unreliable data.
<a id= 'Summarizing'> </a>
## 2. Summarizing data
Once we have our sample, we first want to collect, classify, summarize and present the data so we can interpret it. This is what we use descriptive statistics for.
<a id= 'Tendency'> </a>
### 2.1 Central tendency
Ways of summarizing data differ depending on the measurement scales used and how the collected data looks.
The *mean value* is a measure of central tendency that is based on ALL values (or scores), and one can think of it as a kind of "balancing point". The arithmetic mean of a sample can be calculated as
$$\bar{X} = \frac{\sum_{i=1}^{N} X_i}{N}$$
where $X_i$ is each individual score (that is, $X_1, X_2, ..., X_N$).
The *median*, on the other hand, is less sensitive to extreme values. The median is the point below which half of the (numerically ordered) scores fall and is also called the 50th percentile.
The *mode* is simply the most commonly occurring score in the data, and is therefore always a score that actually appears in the data. It is used mostly for categorical (nominal) data.
### Try out 2:
Calculate different measures of central tendency by hand for
A=[1, 1, 2, 3, 3, 3, 3, 4]
```
#TRY OUT 2: calculate different measures of central tendency by hand for
A=[1, 1, 2, 3, 3, 3, 3, 4]
#Then check with the following functions
# Uncomment next line to display MODE
# stats.mode(A)
# Uncomment next line to display MEDIAN
# np.median(A)
# Uncomment next line to display MEAN
# np.mean(A)
```
### Try out 3:
What will be the answer for this call?
```
# TRY OUT BY HAND FIRST: What will be the mode of the following sample?
# ["red", "blue", "blue", "red", "green", "red", "red"]
# Uncomment next line to display MODE
# stats.mode(["red", "blue", "blue", "red", "green", "red", "red"])
```
<a id= 'variability'> </a>
### 2.2. Variability
Reporting only mean values typically does not tell us much about the true
nature of the data, because it lacks information about the *variability* or
*spread* in it. The simplest way of reporting this variability is to specify the range of the data, i.e. the smallest and largest values. While this gives us some information, two variability measures that are more helpful are variance and standard deviation. These help us understand where most values will be found.
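As a quick illustration on a small made-up data set, the range can be read off directly with NumPy:

```python
import numpy as np

# Made-up heights (cm) for illustration
data = np.array([160, 172, 168, 181, 175])

print(data.min(), data.max())  # smallest and largest values: 160 181
print(np.ptp(data))            # range ('peak to peak'): 181 - 160 = 21
```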
VARIANCE is the sum of the **squared difference** between *each individual
score* and the *mean*, all divided by the *number* of scores
$$\sigma^2 = \frac{\sum_{i=1}^{N} (X_i-\bar{X})^2}{N}$$
where $X_i$ is each
individual value and $\bar{X}$ is the
arithmetic mean.
STANDARD DEVIATION is simply the square root of the variance
$$\sigma =\sqrt {\sigma^2}$$
The standard deviation can be thought of as the average distance between all individual values and the sample mean. Assuming that data is normally distributed, most of the scores can be found within this distance (68.27% of them to be more precise).
What you see above refers to the true variance and the true standard deviation of the population. For SAMPLE variance and standard deviation, the denominator is adjusted by replacing *N* by *(N-1)*. You can think of it as reducing the risk of underestimating the variance for small sample sizes (because the denominator N-1 is smaller than N, resulting in a higher calculated variance).
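The effect of the two denominators can be compared directly through NumPy's `ddof` (delta degrees of freedom) argument. The scores below are made up so the arithmetic works out to round numbers:

```python
import numpy as np

# Made-up scores; their mean is exactly 5
X = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

pop_var = np.var(X)           # denominator N     -> 32/8 = 4.0
samp_var = np.var(X, ddof=1)  # denominator N - 1 -> 32/7, about 4.57
print(pop_var, samp_var)
```

As expected, the sample variance (ddof=1) is slightly larger, guarding against underestimation for small samples.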
### Try out 4:
Which of the following function calls calculates the sample standard deviation?
```
#TRY OUT 4: which of the following function calls calculates the sample standard deviation?
# 'ddof' Means Delta Degrees of Freedom.
# The divisor used in calculations is N - ddof, where N represents the number of elements.
# By default ddof is zero.
a=np.arange(1,4)
#compare the following:
print(np.std(a))
print(np.std(a, ddof=1))
# TIP look up ddof in: help(np.std)
```
<a id= 'Distributions'> </a>
## 3. Frequency Distributions
The central tendency and spread of the data summarize some characteristics of the sample, but to get a better overview we would like to know about the *distribution* of data points. Histograms will tell us about the shape of the distribution. There are a few shapes that can be expected to occur in different kinds of data: *normal* distributions (bell shaped and symmetrical), *skewed* distributions (one tail longer than the other), *uniform* distributions (flat), and *bimodal* (two main peaks).
### Try out 5:
Plot the histogram and determine what type of distribution this is
```
#TRY OUT 5: Plot the histogram and determine what type of distribution this is
# In the 'scores' line, try increasing the third parameter (4) to draw a larger sample
# Increase it by factors of 10 (4 - 40 - 400 - 4000 - 40000...) to see how the shape changes
scores=np.random.randint(1,6,4)
plt.hist(scores)
plt.show()
# TIP: for more info, uncomment the next line and run
# help(np.random)
```
<a id= 'Normaldist'> </a>
### 3.1 Normal distribution
The *Gaussian* distribution is so common that it is also called a *normal* distribution. A lot of different biological processes give rise to data distributed in this fashion.
The distribution is defined by the function:
$$ y(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$
where $\sigma$ is the standard deviation and $\mu$ is the
arithmetic mean. Since normal distributions are perfectly symmetrical, the mean, median and mode are the same. Note that we here use *Greek* letters to denote the TRUE mean and standard deviation of the population.
```
# Plot a standard normal distribution where:
# Standard deviation (sigma) = 1
# Mean (mu) = 0
# Generate an x-vector
x = np.arange(-4, 4, 0.1)
sigma1 = 1
mu1 = 0
# Calculate a normal distribution of x with a mean of 0
y1 =1/(sigma1* np.sqrt(2*np.pi))*np.exp(-((x-mu1)**2)/(2 * sigma1**2))
plt.plot(x,y1)
# Plot the mean value as vertical line
plt.axvline(mu1, 0, 1)
plt.show()
```
### Try out 6: (Typical exam question)
Plot the normal distribution with different mean and std. What changes in the plot?
```
#TRY OUT: Copy and modify the code above to plot the normal distribution with different mean and std.
# What changes in the plot?
```
In a normal distribution, the majority of scores lie around the centre (main peak) of the distribution. The width of the bell says something about the variability of the data, specifically how tightly it is centred around the mean value. In fact, as seen in the equation above, the width is linked to the standard deviation: 68.27% of all values can be found in the region $\mu \pm \sigma$. If we increase the range to $\mu \pm 2\sigma$, we account for 95.45% of all values.
An example of normally distributed data can be seen in human height:
https://ourworldindata.org/human-height#height-is-normally-distributed
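These percentages can be checked with the cumulative distribution function of the standard normal in `scipy.stats`:

```python
from scipy import stats

# Area under the standard normal curve within 1 and 2 standard deviations of the mean
within_1sd = stats.norm.cdf(1) - stats.norm.cdf(-1)
within_2sd = stats.norm.cdf(2) - stats.norm.cdf(-2)
print(f"{within_1sd:.2%}")  # 68.27%
print(f"{within_2sd:.2%}")  # 95.45%
```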
We can simulate drawing a sample of measured heights:
### Try out 7:
What is the mean and the standard deviation for this distribution?
```
# What is the mean and the standard deviation for this distribution
# Variable for sample size
N = 50
# Variable for sample values
sampleheights=np.random.normal(164,7,N)
plt.hist(sampleheights)
plt.ylabel('occurrences')
plt.xlabel('height (cm)')
plt.show()
```
### Try out 8:
Go to the link on human heights above and change the values so that they correspond to samples of the opposite sex.
```
#TRY OUT: Go to the link on human heights above
# and change the values so that they correspond to samples of the opposite sex
```
<a id= 'SampleMeanSize'> </a>
## 4. Sample means and sample size
We draw samples because it is impractical (or impossible) to measure all individuals in the whole population. We therefore need also to consider how large a sample is needed for our sample mean $\bar{X}$ to be a good estimate of the true population mean $\mu$.
As our sample size N increases, the sample starts to approach the whole population, so the immediate answer is: the larger, the better. The reason for this can be seen in the variability of the sample means.
We can investigate how the sample means vary in relation to the true population mean $\mu$ by simulating samples from a normally distributed population. By repeatedly drawing samples of size N, calculating their respective sample means, and plotting these, we can see how they differ.
### Try out 9:
Draw different samples, each with a sample size N. Then extend the code to add means for all samples and plot the histogram of the resulting scores.
Compare to the true mean of the population (the one we send to the random-function).
Repeat for different Ns.
```
# TRY OUT: draw different samples, each with a sample size N.
N=5 #variable for sample size
sample1=np.random.normal(164,7,N)
sample2=np.random.normal(164,7,N)
sample3=np.random.normal(164,7,N)
sample4=np.random.normal(164,7,N)
sample5=np.random.normal(164,7,N)
# extend this to add means for all samples
allmeans= [np.mean(sample1), np.mean(sample2)]
#Plot the histogram of the resulting scores.
#compare to the true mean of the population
#Repeat for different Ns
```
<a id= 'StandardError'> </a>
## 5. Standard Error of the Mean
The estimate of the true population mean varies from sample to sample. As we collect more and more samples it becomes clear that **the sample means *themselves* form a distribution with some variability**. If the sample size N is large enough, most sample means will be close to the true population mean. That is, the standard deviation of the distribution of sample means is small.
The standard deviation of the distribution of sample means is called the *standard error of the mean*, and it serves as a measure of how precise our estimate of the true mean is.
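For a sample of size N drawn from a population with standard deviation $\sigma$, the standard error of the mean can also be computed analytically as $\sigma / \sqrt{N}$. The sketch below compares this analytic value with `scipy.stats.sem`; the population parameters (mean 200, $\sigma$ = 20) are made-up simulation values:

```python
import numpy as np
from scipy import stats

# Draw one sample from a made-up population with mean 200 and sigma 20
rng = np.random.default_rng(seed=0)
N = 30
sample = rng.normal(200, 20, N)

sem_true = 20 / np.sqrt(N)   # sigma / sqrt(N), using the known population sigma
sem_est = stats.sem(sample)  # s / sqrt(N), estimated from the sample (ddof=1)
print(sem_true, sem_est)
```

The estimated value fluctuates around the analytic one; as N grows, both shrink proportionally to $1/\sqrt{N}$.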
```
# The effect of sample size on standard error of the mean can be seen
# by drawing K samples, each of size N, calculating sample means and plot them.
# K is the number of samples we draw
K = 100
# N contains the sample sizes we are comparing
N=[10, 20, 30]
# Initialize an array for the sample means
meanarr=np.zeros(K, float)
for i in range(0,len(N)): # For each sample size...
for j in range(0,K): # For each individual sample of that size...
score=np.random.normal(200,20,N[i]) # Generate normally-distributed sample vector
meanarr[j]=score.mean() # calculate the sample mean
count, bins, ignored =plt.hist(meanarr, density=True) # plot histogram with K means (corresponding to one sample size)
print('Standard error=', meanarr.std()) # print the standard error for each sample size
plt.ylabel('Occurrences')
plt.xlabel('Average Score')
plt.show()
# The plot shows that the larger samples have means that are closer to the true population mean.
# Hence they have a higher peak, with lower standard deviation (bell width).
```
### Try out 10:
Change the number of samples drawn (K) in the code above and see how the distribution changes
```
# TRY OUT: Change the number of samples drawn K and see how the distribution changes
```
### Try out 11:
Add additional sample sizes for N and see the effect of the standard error
```
# TRY OUT: Add additional sample sizes for N and see the effect of the standard error
```
# SSD300 Training Tutorial
This tutorial explains how to train an SSD300 on the Pascal VOC datasets. The preset parameters reproduce the training of the original SSD300 "07+12" model. Training SSD512 works similarly, so there's no extra tutorial for that. The same goes for training on other datasets.
You can find a summary of a full training here to get an impression of what it should look like:
[SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md)
```
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger
from tensorflow.keras import backend as K
from tensorflow.keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
%matplotlib inline
```
## 0. Preliminary note
All places in the code where you need to make any changes are marked `TODO` and explained accordingly. All code cells that don't contain `TODO` markers just need to be executed.
## 1. Set the model configuration parameters
This section sets the configuration parameters for the model definition. The parameters set here are being used both by the `ssd_300()` function that builds the SSD300 model as well as further down by the constructor for the `SSDInputEncoder` object that is needed to run the training. Most of these parameters are needed to define the anchor boxes.
The parameters as set below produce the original SSD300 architecture that was trained on the Pascal VOC datasets, i.e. they are all chosen to correspond exactly to their respective counterparts in the `.prototxt` file that defines the original Caffe implementation. Note that the anchor box scaling factors of the original SSD implementation vary depending on the datasets on which the models were trained. The scaling factors used for the MS COCO datasets are smaller than the scaling factors used for the Pascal VOC datasets. The reason why the list of scaling factors has 7 elements while there are only 6 predictor layers is that the last scaling factor is used for the second aspect-ratio-1 box of the last predictor layer. Refer to the documentation for details.
As mentioned above, the parameters set below are not only needed to build the model, but are also passed to the `SSDInputEncoder` constructor further down, which is responsible for matching and encoding ground truth boxes and anchor boxes during the training. In order to do that, it needs to know the anchor box parameters.
```
img_height = 300 # Height of the model input images
img_width = 300 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights.
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images.
n_classes = 20 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets
scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets
scales = scales_pascal
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation
normalize_coords = True
```
## 2. Build or load the model
You will want to execute either of the two code cells in the subsequent two sub-sections, not both.
### 2.1 Create a new model and load trained VGG-16 weights into it (or trained SSD weights)
If you want to create a new SSD300 model, this is the relevant section for you. If you want to load a previously saved SSD300 model, skip ahead to section 2.2.
The code cell below does the following things:
1. It calls the function `ssd_300()` to build the model.
2. It then loads the weights file that is found at `weights_path` into the model. You could load the trained VGG-16 weights or you could load the weights of a trained model. If you want to reproduce the original SSD training, load the pre-trained VGG-16 weights. In any case, you need to set the path to the weights file you want to load on your local machine. Download links to all the trained weights are provided in the [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository.
3. Finally, it compiles the model for the training. In order to do so, we're defining an optimizer (Adam) and a loss function (SSDLoss) to be passed to the `compile()` method.
Normally, the optimizer of choice would be Adam (commented out below), but since the original implementation uses plain SGD with momentum, we'll do the same in order to reproduce the original training. Adam is generally the superior optimizer, so if your goal is not to have everything exactly as in the original training, feel free to switch to Adam. You might need to adjust the learning rate scheduler below slightly in case you use Adam.
Note that the learning rate that is being set here doesn't matter, because further below we'll pass a learning rate scheduler to the training function, which will overwrite any learning rate set here, i.e. what matters are the learning rates that are defined by the learning rate scheduler.
`SSDLoss` is a custom Keras loss function that implements the multi-task loss that consists of a log loss for classification and a smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper.
```
# 1: Build the Keras model.
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=mean_color,
swap_channels=swap_channels)
# 2: Load some weights into the model.
# TODO: Set the path to the weights you want to load.
# weights_path = 'path/to/VGG_ILSVRC_16_layers_fc_reduced.h5'
# model.load_weights(weights_path, by_name=True)
# 3: Instantiate an optimizer and the SSD loss function and compile the model.
# If you want to follow the original Caffe implementation, use the preset SGD
# optimizer, otherwise I'd recommend the commented-out Adam optimizer.
#adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=sgd, loss=ssd_loss.compute_loss)
```
### 2.2 Load a previously created model
If you have previously created and saved a model and would now like to load it, execute the next code cell. The only thing you need to do here is to set the path to the saved model HDF5 file that you would like to load.
The SSD model contains custom objects: Neither the loss function nor the anchor box or L2-normalization layer types are contained in the Keras core library, so we need to provide them to the model loader.
This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below.
```
# TODO: Set the path to the `.h5` file of the model to be loaded.
# model_path = 'path/to/trained/model.h5'
# We need to create an SSDLoss object in order to pass that to the model loader.
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
K.clear_session() # Clear previous models from memory.
# model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
# 'L2Normalization': L2Normalization,
# 'compute_loss': ssd_loss.compute_loss})
```
## 3. Set up the data generators for the training
The code cells below set up the data generators for the training and validation datasets to train the model. The settings below reproduce the original SSD training on Pascal VOC 2007 `trainval` plus 2012 `trainval` and validation on Pascal VOC 2007 `test`.
The only thing you need to change here are the filepaths to the datasets on your local machine. Note that parsing the labels from the XML annotations files can take a while.
Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images (around 9 GB total for Pascal VOC 2007 `trainval` plus 2012 `trainval` and another 2.6 GB for 2007 `test`). You can later load these HDF5 datasets directly in the constructor.
The original SSD implementation uses a batch size of 32 for the training. In case you run into GPU memory issues, reduce the batch size accordingly. You need at least 7 GB of free GPU memory to train an SSD300 with 20 object classes with a batch size of 32.
The `DataGenerator` itself is fairly generic. I doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.
The data augmentation settings defined further down reproduce the data augmentation pipeline of the original SSD training. The training generator receives an object `ssd_data_augmentation`, which is a transformation object that is itself composed of a whole chain of transformations that replicate the data augmentation procedure used to train the original Caffe implementation. The validation generator receives an object `resize`, which simply resizes the input images.
An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs.
In order to train the model on a dataset other than Pascal VOC, either choose `DataGenerator`'s appropriate parser method that corresponds to your data format, or, if `DataGenerator` does not provide a suitable parser for your data format, you can write an additional parser and add it. Out of the box, `DataGenerator` can handle datasets that use the Pascal VOC format (use `parse_xml()`), the MS COCO format (use `parse_json()`) and a wide range of CSV formats (use `parse_csv()`).
```
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets. This can take a while.
# TODO: Set the paths to the datasets here.
# The directories that contain the images.
VOC_2007_images_dir = '/workspace/data/VOC2007_trainval/JPEGImages'
# The directories that contain the annotations.
VOC_2007_annotations_dir = '/workspace/data/VOC2007_trainval/Annotations'
# The paths to the image sets.
VOC_2007_train_image_set_filename = '/workspace/data/VOC2007_trainval/ImageSets/Main/train.txt'
VOC_2007_val_image_set_filename = '/workspace/data/VOC2007_trainval/ImageSets/Main/val.txt'
# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['background',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
image_set_filenames=[VOC_2007_train_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
image_set_filenames=[VOC_2007_val_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=True,
ret=False)
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.
# train_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07+12_trainval.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
# val_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07_test.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
# 3: Set the batch size.
batch_size = 16 # Change the batch size if you like, or if you run into GPU memory issues.
# 4: Set the image transformations for pre-processing and data augmentation options.
# For the training generator:
ssd_data_augmentation = SSDDataAugmentation(img_height=img_height,
img_width=img_width,
background=mean_color)
# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
model.get_layer('fc7_mbox_conf').output_shape[1:3],
model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
model.get_layer('conv9_2_mbox_conf').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
next(train_generator)[0].shape
```
## 4. Set the remaining training parameters
We've already chosen an optimizer and set the batch size above, now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps. The next code cell defines a learning rate schedule that replicates the learning rate schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model. That model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit.
I'll set only a few essential Keras callbacks below, feel free to add more callbacks if you want TensorBoard summaries or whatever. We obviously need the learning rate scheduler and we want to save the best models during the training. It also makes sense to continuously stream our training history to a CSV log file after every epoch, because if we didn't do that, in case the training terminates with an exception at some point or if the kernel of this Jupyter notebook dies for some reason or anything like that happens, we would lose the entire history for the trained epochs. Finally, we'll also add a callback that makes sure that the training terminates if the loss becomes `NaN`. Depending on the optimizer you use, it can happen that the loss becomes `NaN` during the first iterations of the training. In later iterations it's less of a risk. For example, I've never seen a `NaN` loss when I trained SSD using an Adam optimizer, but I've seen a `NaN` loss a couple of times during the very first couple of hundred training steps of training a new model when I used an SGD optimizer.
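To make the arithmetic behind the epoch-based schedule explicit: at 1,000 training steps per epoch, the Caffe step boundaries quoted above translate directly into the epoch boundaries used in the callback below. A quick sanity check (not repository code):

```python
steps_per_epoch = 1000
step_schedule = {80000: 0.001, 100000: 0.0001}  # learning rate applies below this step count

# Converting the Caffe step boundaries into epoch boundaries:
epoch_boundaries = {steps // steps_per_epoch: lr for steps, lr in step_schedule.items()}
print(epoch_boundaries)  # {80: 0.001, 100: 0.0001}
```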
```
# Define a learning rate schedule.
def lr_schedule(epoch):
if epoch < 80:
return 0.001
elif epoch < 100:
return 0.0001
else:
return 0.00001
# Define model callbacks.
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='ssd300_pascal_07+12_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
#model_checkpoint.best =
csv_logger = CSVLogger(filename='ssd300_pascal_07+12_training_log.csv',
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
verbose=1)
terminate_on_nan = TerminateOnNaN()
callbacks = [model_checkpoint,
csv_logger,
learning_rate_scheduler,
terminate_on_nan]
```
## 5. Train
In order to reproduce the training of the "07+12" model mentioned above, at 1,000 training steps per epoch you'd have to train for 120 epochs. That is going to take really long though, so you might not want to do all 120 epochs in one go and instead train only for a few epochs at a time. You can find a summary of a full training [here](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md).
In order to only run a partial training and resume smoothly later on, there are a few things you should note:
1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally. If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is being initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights.
2. In order for the learning rate scheduler callback above to work properly, `fit_generator()` needs to know which epoch we're in, otherwise it will start with epoch 0 every time you resume the training. Set `initial_epoch` to be the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now you'd want to resume the training from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run. To stick with the previous example, if you had trained for 10 epochs previously and now you'd want to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`.
3. In order for the model checkpoint callback above to work correctly after a kernel restart, set `model_checkpoint.best` to the best validation loss from the previous training. If you don't do this and a new `ModelCheckpoint` object is created after a kernel restart, that object obviously won't know what the last best validation loss was, so it will always save the weights of the first epoch of your new training and record that loss as its new best loss. This isn't super-important, I just wanted to mention it.
```
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 120
steps_per_epoch = 1000
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
```
## 6. Make predictions
Now let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator that we've already set up above. Feel free to change the batch size.
You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training.
```
# 1: Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=1,
shuffle=True,
transformations=[convert_to_3_channels,
resize],
label_encoder=None,
returns={'processed_images',
'filenames',
'inverse_transform',
'original_images',
'original_labels'},
keep_images_without_gt=False)
# 2: Generate samples.
batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(np.array(batch_original_labels[i]))
# 3: Make predictions.
y_pred = model.predict(batch_images)
```
Now let's decode the raw predictions in `y_pred`.
Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).
`decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions.
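As a rough illustration of the greedy non-maximum suppression step described above, here is a minimal NumPy sketch for a single class. The function name and the row layout `[confidence, xmin, ymin, xmax, ymax]` are assumptions for this example, not the library's actual internals:

```python
import numpy as np

def greedy_nms(boxes, iou_threshold=0.45):
    """Greedy NMS sketch: repeatedly keep the highest-confidence box and
    discard remaining boxes that overlap it too strongly.

    `boxes` rows are [confidence, xmin, ymin, xmax, ymax].
    """
    boxes = boxes[np.argsort(-boxes[:, 0])]  # sort by descending confidence
    keep = []
    while len(boxes) > 0:
        best = boxes[0]
        keep.append(best)
        rest = boxes[1:]
        if len(rest) == 0:
            break
        # Intersection-over-union of `best` with every remaining box:
        ixmin = np.maximum(best[1], rest[:, 1])
        iymin = np.maximum(best[2], rest[:, 2])
        ixmax = np.minimum(best[3], rest[:, 3])
        iymax = np.minimum(best[4], rest[:, 4])
        inter = np.clip(ixmax - ixmin, 0, None) * np.clip(iymax - iymin, 0, None)
        area_best = (best[3] - best[1]) * (best[4] - best[2])
        area_rest = (rest[:, 3] - rest[:, 1]) * (rest[:, 4] - rest[:, 2])
        iou = inter / (area_best + area_rest - inter)
        boxes = rest[iou <= iou_threshold]
    return np.array(keep)

boxes = np.array([[0.9,  10,  10,  50,  50],   # overlaps the next box heavily
                  [0.8,  12,  12,  52,  52],   # suppressed by the 0.9 box
                  [0.7, 100, 100, 140, 140]])  # disjoint, kept
print(greedy_nms(boxes))
```

`decode_detections()` additionally applies the confidence threshold beforehand and the `top_k` cut afterwards, per class.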
```
# 4: Decode the raw predictions in `y_pred`.
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.5,
iou_threshold=0.4,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
```
We made the predictions on the resized images, but we'd like to visualize the outcome on the original input images, so we'll convert the coordinates accordingly. Don't worry about that opaque `apply_inverse_transforms()` function below, in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates.
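A minimal sketch of that coordinate conversion, using the decoded box layout `[class, conf, xmin, ymin, xmax, ymax]`; the helper name is ours, and `apply_inverse_transforms()` itself handles the general case:

```python
import numpy as np

def scale_boxes_to_original(boxes, resized_size, original_size):
    """Rescale [class, conf, xmin, ymin, xmax, ymax] rows from the resized
    image back to the original image's pixel coordinates.
    Sizes are (height, width) tuples."""
    boxes = np.array(boxes, dtype=float)
    sx = original_size[1] / resized_size[1]  # width scale
    sy = original_size[0] / resized_size[0]  # height scale
    boxes[:, [2, 4]] *= sx  # xmin, xmax
    boxes[:, [3, 5]] *= sy  # ymin, ymax
    return boxes

# A box predicted on a 300x300 input, mapped back to a 600x900 original:
pred = [[7, 0.92, 30, 60, 150, 240]]
print(scale_boxes_to_original(pred, resized_size=(300, 300), original_size=(600, 900)))
```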
```
# 5: Convert the predictions for the original image.
y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded_inv[i])
```
Finally, let's draw the predicted boxes onto the image. Each predicted box shows its confidence next to the category name. The ground truth boxes are also drawn onto the image in green for comparison.
```
# 6: Draw the predicted boxes onto the image
# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
classes = ['background',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
plt.figure(figsize=(20,12))
plt.imshow(batch_original_images[i])
current_axis = plt.gca()
for box in batch_original_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})
for box in y_pred_decoded_inv[i]:
xmin = box[2]
ymin = box[3]
xmax = box[4]
ymax = box[5]
color = colors[int(box[0])]
label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
```
<img src="../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle">
# Pulse Schedules
The `pulse` module allows quantum experiments to be described at the level of pulses. For IBMQ devices these are microwave pulses applied to our superconducting qubits.
The pulse `Schedule`, `Instruction`, `Command` and `Channel` are the main objects of the pulse module.
All instances of these objects are **immutable**, which results in:
- An API that is easier to reason about
- Objects that may be reused many times as components of parent objects, without the risk of an underlying child object's guarantees changing
```
%matplotlib inline
import numpy as np
from qiskit.pulse.channels import (DriveChannel, MeasureChannel, ControlChannel,
AcquireChannel, MemorySlot, SnapshotChannel)
from qiskit.pulse.commands import (SamplePulse, FrameChange, PersistentValue,
Acquire, Snapshot)
from qiskit.pulse import pulse_lib, Schedule
```
## Channels
`Channel`s correspond to physical channels on the backend. `Channel`s are combined with `Command`s to form an `Instruction`.
As all channels correspond to physical channels on a device, they have an `index` which specifies their corresponding device channel and a `buffer`, which is the buffering delay in units `dt/dtm` that must be added between commands on the same channel when `appending`.
The fundamental channel types are:
- `DriveChannel`: Qubit drive channel. The channel index corresponds to the system qubit index.
- `MeasureChannel`: Qubit stimulus channel. The channel index corresponds to the system qubit index.
- `ControlChannel`: Arbitrary control channel with action specified by Hamiltonian provided by device. The function of this channel must be extracted from the system Hamiltonian.
- `AcquireChannel`: Qubit acquisition channel. The channel index corresponds to the system qubit index.
- `MemorySlot`: Channel for storage of qubit results for measurement levels 0,1 and 2. The output of an `Acquire` of any qubit may be stored in any `MemorySlot`.
- `RegisterSlot`: Channel for storage of qubit results for use with conditional commands. The output of an `Acquire` may be mapped to a corresponding `RegisterSlot` with indexes specified by the `backend` configuration's `register_map`.
- `Snapshot`: Channel for snapshots (for use with simulators only).
Channels may be declared using the following:
```
drive_ch0 = DriveChannel(0, buffer=2)
meas_ch0 = MeasureChannel(0, buffer=2)
control_ch0 = ControlChannel(0, buffer=2)
acquire_ch0 = AcquireChannel(0, buffer=0)
memory_slot0 = MemorySlot(0)
drive_ch1 = DriveChannel(1, buffer=2)
meas_ch1 = MeasureChannel(1, buffer=2)
control_ch1 = ControlChannel(1, buffer=2)
acquire_ch1 = AcquireChannel(1, buffer=0)
memory_slot1 = MemorySlot(1)
```
The `buffer` parameter is optional, and both `index` and `buffer` may be accessed.
```
print(drive_ch0.index)
print(drive_ch0.buffer)
```
Channels can be checked for equivalency. This check is based on the channel index as every channel corresponds to a unique device channel.
```
drive_ch0 == drive_ch0
drive_ch0 == DriveChannel(0)
drive_ch0 == drive_ch1
```
## Commands
Pulse `Command`s specify commands to be completed on a device `Channel`. Once defined, a `Command` is *immutable* and may be used many times on many different channels. Some `Command`s may only be applied to specific `Channel` types.
The fundamental commands for the `PulseChannel` are:
- `SamplePulse`: A pulse specified as a complex array of samples to be output out on the corresponding channel. Each pulse sample corresponds to a timestep of unit `dt` on the backend.
- `FrameChange`: A persistent framechange of the phase of all future pulses on the corresponding channel. `FrameChange`s have zero duration on the backend.
- `PersistentValue`: A pulse that holds its value until the next pulse on the corresponding channel. `PersistentValue` pulses have variable duration on the backend, as their duration depends on subsequent commands.
```
sine_pulse = SamplePulse(np.sin(np.linspace(0,4*np.pi, 20)), name='random_pulse')
pi_fc = FrameChange(np.pi)
one_pv = PersistentValue(1.0)
```
The `name` for commands is optional; if it is not supplied, one will be assigned automatically.
`SamplePulse`s may be easily visualized
```
sine_pulse.draw()
```
The fundamental command for the `AcquireChannel` is:
- `Acquire`: Specify the acquisition, kerneling and discrimination of measurement data on a set of acquisition channels
```
acquire = Acquire(100)
```
The fundamental command for the `SnapshotChannel` is:
- `Snapshot`: A snapshot of the simulated system.
```
snapshot = Snapshot('test_snapshot', 'state')
```
All `Command`s have a `duration` attribute. This may be fixed, as in the case of the `FrameChange` having zero duration, or dependent on the command, as in the case of the `SamplePulse`, whose duration is defined by the number of samples in the pulse.
```
sine_pulse.duration
pi_fc.duration
```
## Instructions
An `Instruction` may be created by applying a `Command` to a set of `Channel`s. Each `Command` type has a corresponding `Instruction` type. An `Instruction` is obtained from a `Command` by calling `to_instruction` or directly calling the `Command`. The `Channel`s to apply the command to are supplied as arguments to the above call.
```
sine_instr = sine_pulse.to_instruction(drive_ch0)
sine_instr_equivalent = sine_pulse(drive_ch0)
```
Certain instructions such as the `AcquireInstruction` accept multiple channels. In this case the `AcquireInstruction` accepts up to three lists of the same size: lists of `AcquireChannel`s, `MemorySlot`s and `RegisterSlot`s. This allows the acquisition, kerneling and discrimination of multiple qubits to be combined, and the output of a given channel to be directed to the desired storage location.
```
acquire_instr = acquire([acquire_ch0, acquire_ch1], [memory_slot0, memory_slot1])
```
`Instruction`s can also be plotted
```
sine_instr.draw()
acquire_instr.draw()
```
`Instruction`s have a notion of time. Each instruction has a `start_time` and a set of `Timeslot`s that the instruction occupies.
```
sine_instr.start_time
sine_instr.duration
```
`Timeslot`s are occupied on a channel-by-channel basis and may not overlap on a given channel. They are used to enforce time ordering within a pulse schedule.
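The overlap rule can be modeled with plain intervals; this is only a sketch of the behaviour described above, not Qiskit's actual `Timeslot` implementation:

```python
def intervals_overlap(a, b):
    """Two half-open [start, stop) intervals on the same channel overlap
    if neither ends before the other begins."""
    return a[0] < b[1] and b[0] < a[1]

def mergeable(slots_a, slots_b):
    """Timeslot collections (dicts of channel -> list of intervals) can merge
    only if no pair of intervals on a shared channel overlaps."""
    for channel in set(slots_a) & set(slots_b):
        for ia in slots_a[channel]:
            for ib in slots_b[channel]:
                if intervals_overlap(ia, ib):
                    return False
    return True

pulse_slots = {'d0': [(0, 20)]}     # drive channel busy for 20 units
acquire_slots = {'a0': [(10, 110)]} # acquire channel busy, different channel
print(mergeable(pulse_slots, pulse_slots))    # same channel, same interval -> False
print(mergeable(pulse_slots, acquire_slots))  # disjoint channels -> True
```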
```
sine_instr.timeslots.is_mergeable_with(sine_instr.timeslots)
sine_instr.timeslots.is_mergeable_with(acquire_instr.timeslots)
```
## Schedules
Pulse schedules are made by scheduling `Instruction`s. The `Schedule` may be viewed as a container for `Instruction`s and `Schedule`s shifted in time. In this way, simple `Schedule`s may be treated as building blocks for more complicated `Schedule`s.
```
sched = Schedule(name='test_schedule')
pulse_sched = Schedule(sine_instr, name='sine_schedule')
acquire_sched = Schedule((10, acquire_instr), name='acquire_schedule')
```
There are three basic methods for manipulating `Schedule`s. These methods all return a new `Schedule` which modifies the attributes of the underlying children `Schedule`s or `Instruction`s.
1. `union`: Joins the supplied schedules and instructions, merging their timeslots together. If timeslots overlap an error will be raised.
2. `shift`: Shifts the initial time of the `Schedule` or `Instruction` by an amount.
3. `append`: Append the supplied schedule or instruction at the last time of the parent schedule plus the maximum buffer over the set of all channels shared by the parent and child schedules.
Note that there is no requirement that a parent `Schedule` must contain all of the `Channel`s of a child `Schedule` or `Instruction` with which it is being joined. The set of `Channel`s present in the `Schedule` will automatically be extended.
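The timing rule for `append` can be sketched as follows; this assumes each channel tracks its stop time and buffer, and is a model of the description above, not the library's internal code:

```python
def append_start_time(parent_stop_times, child_channels, buffers):
    """The child is appended at the parent's last stop time plus the maximum
    buffer over the channels shared by parent and child."""
    stop = max(parent_stop_times.values(), default=0)
    shared = set(parent_stop_times) & set(child_channels)
    buffer = max((buffers[ch] for ch in shared), default=0)
    return stop + buffer

parent_stop = {'d0': 20}        # drive channel busy until t=20
buffers = {'d0': 2, 'm0': 2}
print(append_start_time(parent_stop, {'d0'}, buffers))  # 22: shared channel, buffer added
print(append_start_time(parent_stop, {'m0'}, buffers))  # 20: no shared channel, no buffer
```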
```
union_sched = pulse_sched.union(acquire_sched)
union_sched.draw()
shifted_pulse_sched = pulse_sched.shift(10)
shifted_pulse_sched.draw()
pulse_sched.append(acquire_sched).draw()
```
For ease of use it is also possible to build schedules directly from `Instruction`s. Note that a `Schedule` will still be returned.
```
combined_schedule = sine_instr.append(acquire_instr)
combined_schedule.draw()
```
### Syntactic Sugar
To make it easier to build pulse schedules there exists **syntactic sugar** for operating on schedules
1. `a.union(b)` $\equiv$ `a | b`
2. `a.append(b)` $\equiv$ `a + b`
3. `a.shift(time)` $\equiv$ `a << time`
Because `Schedule` emulates numeric types for the above methods, this provides a simple way of constructing composite pulse schedules.
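The operator forms are ordinary Python operator overloading. A toy model (not Qiskit's implementation) of how one class can back both spellings, with events stored as `(start, duration, name)` tuples:

```python
class MiniSched:
    def __init__(self, events=()):
        self.events = tuple(events)  # (start, duration, name) triples

    def union(self, other):
        return MiniSched(self.events + other.events)

    def shift(self, time):
        return MiniSched((s + time, d, n) for s, d, n in self.events)

    def append(self, other):
        stop = max((s + d for s, d, _ in self.events), default=0)
        return MiniSched(self.events + other.shift(stop).events)

    # The syntactic sugar is just dunder methods delegating to the verbose API:
    __or__ = union
    __add__ = append
    __lshift__ = shift

a = MiniSched([(0, 20, 'x90')])
b = MiniSched([(0, 100, 'measure')])
print((a + b).events)           # 'measure' starts when 'x90' ends, at t=20
print((a | (b << 20)).events)   # identical result via union + shift
```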
Below we construct two equivalent schedules, one with the syntactic sugar and one without.
```
sched = Schedule(name='verbose')
sched = sched.union(sine_instr.shift(10))
sched = sched.append(acquire_instr.append(sine_instr))
sched.draw()
sched = Schedule(name='syntactic_sugar')
sched |= sine_instr << 10
sched += acquire_instr + sine_instr
sched.draw()
```
## Outcomes of Pulse Schedules
There are three (integer) measurement pulse levels (`meas_levels`) for pulse outputs which are triggered by `Acquire` commands and stored into the desired `MemorySlots`:
- Measurement level 0: Return the sampled measurement output from the `AcquireChannel` after mixing down with the measurement stimulus LO. There will be a large amount of data associated with this measurement level.
- Measurement level 1: Return the data after the application of a user specified (or default if not specified) kernel.
- Measurement level 2: Return the discriminated counts after the application of a measurement kernel and discriminator. This corresponds to the measurement output of a quantum circuit. See the [circuits notebook](quantum_circuits.ipynb) for more information.
For measurement level 0 and 1 there is another supported measurement result post-processing modifier, `meas_return`:
- `single`: Return the results for each individual shot.
- `avg`: Average the results over all shots.
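A numerical sketch of the `single` vs. `avg` distinction for level-1 (kerneled IQ) data; the shapes and random values are purely illustrative, not a backend's actual output format:

```python
import numpy as np

rng = np.random.default_rng(0)
shots, qubits = 4, 2

# Pretend level-1 data: one complex IQ point per qubit per shot.
single = rng.normal(size=(shots, qubits)) + 1j * rng.normal(size=(shots, qubits))

# `avg` collapses the shot axis, returning one averaged IQ point per qubit.
avg = single.mean(axis=0)

print(single.shape)  # (4, 2): one row per shot
print(avg.shape)     # (2,): one point per qubit
```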
<div class="alert alert-block alert-info">
<b>Note:</b> All modes may not be supported on all backends due to different system constraints.
</div>
<div class="alert alert-block alert-info">
<b>Note:</b> Methods below are not executed as the pulse simulator has not yet been released for Qiskit.
</div>
Schedules may be executed in a similar manner to circuits
```python
job = execute(sched, meas_level=1, meas_return='avg')
result = job.result()
```
For measurement levels 0 and 1 results may be extracted from the result object with
```python
result.get_memory(schedule)
```
For measurement level 2, results are extracted in the same way as for circuit results. To get individual shot counts,
```python
result.get_memory(schedule)
```
and for binned counts
```python
job = execute(sched, meas_level=2)
result = job.result()
result.get_counts(schedule)
```
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
## NYC Neighborhood School Quality Metric:
# Buying a Home in NYC: What Neighborhoods are the Best Value?
### Applying Data Science Tools to Understand NYC's Residential Real Estate Fundamentals
Josh Grasso | joshgrasso@gmail.com
This project seeks to understand the fundamental factors that explain differences in residential real estate prices across NYC.
### Neighborhood School Quality: Measured by NYC Department of Education Math Test Scores
The goal here is to produce a metric for each neighborhood in NYC, which captures information about the desirability of the schools in that neighborhood. School selection is a prominent factor for all residents and families in deciding where to live or buy a home, including in NYC. NYC school enrollment, for both public and charter schools, is based on zoning. From InsideSchools.org, a resource for parents: "Starting in kindergarten, your child is eligible to attend a school based on your home address, often referred to as your "zoned" school. The city is divided into 32 school districts, and most districts are divided into several dozen individual school zones."
There are a handful of datasets made available by the NYC Department of Education which could be used to benchmark school quality and desirability. I chose scores on the Math section of a standardized test, administered to all Elementary and Middle-school aged students across NYC. These standardized tests began to be administered in 2013 when NY moved to Common Core Learning Standards: "The New York State P-12 Common Core Learning Standards (CCLS) are internationally-benchmarked and evidence-based standards." The data for all students in NYC public and charter schools, in grades 3 through 8, are aggregated at the school-level and are available for each year, from 2013 to 2019. There are average Math scores for over 1,300 schools in NYC, 1,118 public schools and 191 charter schools.
My assumption is that Elementary school quality has a stronger influence on residential purchase decisions than later grades - for example High School. Also, I chose Math scores instead of English scores, and instead of an average of the two subjects/scores. In the event the Math and English scores were not strongly correlated, I'm biased as an engineer - and would have deferred to Math scores anyhow.
There are two datasets for school Math test scores, one for public schools and one for charter schools. This data is then mapped to the Department of Education's definition of neighborhoods, termed Neighborhood Tabulation Areas (NTAs), via the unique identifier for each school – the District Borough Number (DBN). There are a total of 195 NTAs. Finally, each NTA is mapped to the neighborhood names used in the rest of the analysis, of which there are 306 distinct neighborhoods; one NTA may map to multiple neighborhoods as we define them.

To rank the schools, and then the neighborhoods, in terms of school quality, the percentile rank of the math test score results for each school is first calculated for each year. The average over all years (2013 through 2019) is used as the score for each school, and the neighborhood metric is the average of these percentile values over the schools in that neighborhood. The final distribution of average percentiles across schools is very close to uniform – which is important for the analysis, since a bell-curve distribution would have made it difficult to distinguish one neighborhood from another in a regression.
Private schools are omitted from the analysis at this point. There isn't uniform test data for private schools available from the sources I found. Also, it's not clear whether zoning has any effect on private school attendance – at least not in the same way as it does for public and charter schools. The lack of private schools in the analysis can produce some biases – for example in Manhattan, where the relative quality of public schools in one neighborhood vs. another may have less of an impact on the relative value of single-family residences if a meaningful share of home buyers send their children to private schools anyhow.
Finally, the average annual change in school percentile rank is calculated by fitting a linear regression to the full dataset for each individual school. This will be used to build a "momentum" metric for each neighborhood, to be used alongside a measure of "value" in determining which neighborhoods look most compelling from an investment perspective. Strong price growth combined with improving school quality can differentiate a neighborhood from others with less favorable trends.
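The two school metrics described above – the average yearly percentile rank and the fitted annual change – can be sketched on toy data (the scores below are made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy mean scale scores: rows are schools, columns are years.
scores = pd.DataFrame({2013: [290, 310, 300], 2019: [295, 305, 320]},
                      index=['school_a', 'school_b', 'school_c'])

# Percentile rank within each year, then average across years per school.
percentiles = scores.rank(axis=0, pct=True)
avg_percentile = percentiles.mean(axis=1)

# "Momentum": slope of a linear fit of percentile rank against year.
years = percentiles.columns.to_numpy(dtype=float)
slopes = percentiles.apply(lambda row: np.polyfit(years, row, 1)[0], axis=1)

print(avg_percentile)
print(slopes)
```

In the full analysis the same `rank(pct=True)` / mean / fit pattern is applied to the 2013–2019 school-level data.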
#### Key Resources:
* NYC Elementary & Middle School Math Test Results, DOE & Charter School, 2013 - 2019:
* https://infohub.nyced.org/reports/academics/test-results
* NYC Elementary School Enrollment Guide:
* https://insideschools.org/elementary
#### Additional Research:
* Add Private Schools:
* https://www.isaagny.org/school-affiliates-directory/searchable-directory
* https://www.niche.com/about/data/
* https://nces.ed.gov/surveys/pss/privateschoolsearch/
```
import numpy as np
import pandas as pd
import requests
import json
import plotly.express as px
from IPython.display import Image
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
#plt.style.use('seaborn')
sns.set()
from datetime import datetime
today = datetime.now()
month,day,year = today.month,today.day,today.year
from pathlib import Path
home_path = Path.home() / 'Jupyter' / 'Real_Estate'
```
### Neighborhood School Quality Metric: Math Test Scores for Grades 3 - 8
There are a handful of datasets made available by the NYC Department of Education which could be used to benchmark school quality and desirability. I chose scores on the Math section of a standardized test, administered to all Elementary and Middle-school aged students across NYC. These standardized tests began to be administered in 2013 when NY moved to Common Core Learning Standards: "The New York State P-12 Common Core Learning Standards (CCLS) are internationally-benchmarked and evidence-based standards." The data is available for each year, from 2013 to 2019, for all students in NYC public and charter schools, in grades 3 through 8, aggregated at the school-level. There are Math scores for over 1,300 schools in NYC, 1,118 public schools and 191 charter schools.
```
# School Quality:
# Import NYC Public and Charter School Math Test Score Results
# Source: https://infohub.nyced.org/reports/academics/test-results
# The data is also available at the link below.
# However, downloading the data as a JSON through the API link, only provided a portion of the full data set.
# Source: https://data.cityofnewyork.us/Education/2013-2019-Math-Test-Results-School-SWD-Ethnicity-G/74ah-8ukf
local_path = home_path / 'Math_Scores.xlsx'
math_xls_df = pd.read_excel(local_path, sheet_name='All', header=0)
math_xls_df.head(2)
# Charter School Test Scores
# Source: https://infohub.nyced.org/reports/academics/test-results
local_path = home_path / 'Charter_Scores.xlsx'
charter_xls_df = pd.read_excel(local_path, sheet_name='Math', header=0)
charter_xls_df.head(2)
# math_xls_df.columns == charter_xls_df.columns # True
all_math_df = pd.concat([math_xls_df, charter_xls_df]).reset_index(drop=True)  # DataFrame.append is deprecated
all_math_df.head(5)
len(math_xls_df['DBN'].unique())
len(charter_xls_df['DBN'].unique())
len(all_math_df['DBN'].unique())
# Calculate Percentiles:
math_clean_df = all_math_df[all_math_df['Grade'] == 'All Grades'].reset_index(drop=True)
math_clean_df = math_clean_df.drop(index=1264)
# ValueError: Unable to parse string "s" at position 1264
# About the data: In order to comply with regulations of the Family Educational Rights and Privacy Act (FERPA)
# on public reporting of education outcomes, groups with 5 or fewer tested students are suppressed with an “s”.
# Source: https://data.cityofnewyork.us/Education/2013-2019-Math-Test-Results-School-SWD-Ethnicity-G/74ah-8ukf
math_clean_df['Mean Scale Score'] = pd.to_numeric(math_clean_df['Mean Scale Score'])
math_clean_df = pd.pivot_table(math_clean_df, values='Mean Scale Score', index='DBN', columns='Year',
aggfunc='mean', fill_value=np.nan)
# math_clean_df[math_clean_df.index.duplicated(keep=False)] # None
math_clean_df
math_clean_df[math_clean_df.isnull().any(axis=1)]
# math_clean_df.isnull().sum()
math_clean_df.isnull().T.sum().sort_values(ascending=False).value_counts()
# Percentiles:
math_percentiles_df = math_clean_df.rank(axis=0, pct=True)
math_percentiles_df
# School Quality Metric: Use the average percentile for all of the years available
# Incorporate as much of the data as is available
# Come back to the change from 2013 to 2019 - to get a sense of which schools/neighborhoods are improving
school_percentile_avg_df = (math_percentiles_df.apply(lambda x: np.mean(x), axis=1)
.to_frame(name='Avg_Percentile')
.sort_values(by='Avg_Percentile').reset_index())
school_percentile_avg_df
# Plot it first, they might all be stacked together in the middle (50%)
plt.figure(figsize=(15,5))
ax = sns.histplot(data=school_percentile_avg_df, x="Avg_Percentile",
binwidth=0.05, alpha=0.5, element="step"); # color=u'b'
plt.title("Avg Percentile Across 2013 to 2019", fontsize=15);
# What Elementary School am I zoned for?
# PS 107 John W. Kimball
# Source: https://schoolsearch.schools.nyc/
school_percentile_avg_df[school_percentile_avg_df['DBN'] == '15K107']
```
### DBN to NTA: NTA to Neighborhoods:
As shown in the data above, Math test results are aggregated at the school-level. The District-Borough-Number (DBN) column provides the unique identification code for each school in NYC. The metric/column that I'll use to benchmark school quality is the "Mean Scale Score," which is the "average of total students tested."
For our real estate analysis, we need to build a metric of the school quality in each neighborhood in NYC. Thus, we will use this school-level data (by DBN), and map it to each neighborhood. As you can see above, there is no location information in this table - so we need to add it.
The data below provides location information for each school - and is made available by NYC Department of Education. First, note that there are several "definitions" of NYC neighborhoods, defined and used by the different govt. departments in NYC (Education, Planning, Finance). The definition used here is the Neighborhood Tabulation Areas (NTA). There are 195 NTAs. The definition we are using to define neighborhoods in our analysis of real estate prices has over 300 neighborhoods. Thus, we will need to map NTA's to the final definition we are using. Also note that we are supplementing the school location data below with the comprehensive NTA dataset - to add the information on which Borough each neighborhood/NTA is located.
```
# School DBN to NTA:
# This data has the DBN (shown here as ATS) which is unique to the school, and maps to test/performace data
# It also has NTA data, which can be used to map to the NYC Neighborhood
# Neighborhood Tabulation Areas (NTA)
# Source: https://www1.nyc.gov/site/planning/data-maps/open-data/dwn-nynta.page
# School Locations:
school_locations_csv_url = 'https://data.cityofnewyork.us/api/views/i4ni-6qin/rows.csv?accessType=DOWNLOAD'
# Source: https://data.cityofnewyork.us/Education/2015-2016-School-Locations/i4ni-6qin
# - also - : Source: https://data.cityofnewyork.us/Education/2018-2019-School-Locations/9ck8-hj3u
resp = requests.get(school_locations_csv_url)
local_path = home_path / 'School_Locations.csv'
with open(local_path, 'wb') as output:
output.write(resp.content)
school_locations_df = pd.read_csv(local_path)
# Visualize:
# school_locations_df.head().T
# Incl. Charter Schools?: # school_locations_df['managed_by_name'].unique() # ['DOE', 'Charter'] Yes
school_locations_df = school_locations_df[['ATS SYSTEM CODE', 'LOCATION_NAME', 'LOCATION_CATEGORY_DESCRIPTION',
'NTA', 'NTA_NAME', 'PRIMARY_ADDRESS_LINE_1', 'MANAGED_BY_NAME',
'OPEN_DATE']].copy()
school_locations_df['ATS SYSTEM CODE'] = school_locations_df['ATS SYSTEM CODE'].str.rstrip()
school_locations_df['NTA_NAME'] = school_locations_df['NTA_NAME'].str.rstrip()
school_locations_df = school_locations_df.rename(columns={'ATS SYSTEM CODE': 'ATS_SYSTEM_CODE'})
school_locations_df.head(2)
len(school_locations_df['ATS_SYSTEM_CODE'].unique())
len(school_locations_df['NTA'].unique())
school_percentile_locations_df = school_percentile_avg_df.copy()
school_percentile_locations_df['NTA'] = school_percentile_locations_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['NTA'])
school_percentile_locations_df['NTA_NAME'] = school_percentile_locations_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['NTA_NAME'])
school_percentile_locations_df['MANAGED_BY_NAME'] = (school_percentile_locations_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['MANAGED_BY_NAME']))
school_percentile_locations_df['LOCATION_NAME'] = (school_percentile_locations_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['LOCATION_NAME']))
school_percentile_locations_df['LOCATION_CATEGORY_DESCRIPTION'] = (school_percentile_locations_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['LOCATION_CATEGORY_DESCRIPTION']))
school_percentile_locations_df['PRIMARY_ADDRESS_LINE_1'] = school_percentile_locations_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['PRIMARY_ADDRESS_LINE_1'])
school_percentile_locations_df.head(2)
missing_locations_df = school_percentile_locations_df[school_percentile_locations_df['NTA'].isna()]
len(missing_locations_df['DBN'].unique())
```
### Missing School Locations:
There are 27 schools in the Math scores dataset with unknown locations, because the school locations dataset does not contain them. Recall that we previously had over 1,300 schools, so these 27 represent about 2% of the total.
Below, let's parse the DBN, to extract more information about the missing schools.
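As a quick illustration of the DBN structure before the full parse below: a DBN such as `15K107` (the school we looked up earlier) splits into a district number, a one-letter borough code, and a school number. A minimal sketch (using a tighter character class than the regex in the cell below, but matching the same DBNs):

```python
import re

# DBN format: district digits, 1-letter borough code (K/Q/X/M/R), school number
dbn_regex = re.compile(r'(\d+)([KQXMR])(\d+)')

district, borough, school_number = dbn_regex.match('15K107').groups()
print(district, borough, school_number)  # 15 K 107
```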
```
from IPython.display import Image
from pathlib import Path
print("The District-Borough-Number")
Image(filename= home_path / 'NYC_DOE_DBN.png')
# Source: https://teachnyc.zendesk.com/hc/en-us/articles/360053601831-What-is-a-DBN-District-Borough-Number-#:~:text=The%20DBN%20or%20District%20Borough,has%20a%20District%20Borough%20Number.
dnb_dict = {'Brooklyn': 'K',
'Queens': 'Q',
'Bronx': 'X',
'Manhattan': 'M',
'Staten_Island': 'R'}
# Parse DBN's to better understand Missing School Locations
# '(\d+)(K?X?Q?M?R?)(\d+)'
import re
dbn_regex = re.compile(r'(\d+)(K?X?Q?M?R?)(\d+)') # Resource: https://regex101.com/
missing_schools_list = []
for missing_dbn in missing_locations_df['DBN'].unique():
_regex_match = re.search(dbn_regex, missing_dbn)
if _regex_match is not None:
district, borough, school_number = _regex_match.groups()
missing_schools_list.append({'District': district,
'Borough': borough,
'School_Number': school_number})
else:
pass
missing_schools_df = pd.DataFrame(missing_schools_list)
# Switch Keys/Values in dnb_dict
missing_schools_df = missing_schools_df.replace(dict((v,k) for k,v in dnb_dict.items()))
missing_schools_df.head(2)
missing_schools_df['Borough'].value_counts()
missing_schools_df['District'].value_counts()
```
### District 84 is for Charter Schools:
* Source: https://infohub.nyced.org/reports/school-quality/charter-school-renewal-reports
Almost all of the missing school location data is from Charter schools. Sadly, I can't check the Open Date because that data is NaN. Let's move on, rather than trying a workaround for the missing schools.
```
# Drop Schools without location data:
school_percentile_locations_df = school_percentile_locations_df[school_percentile_locations_df['NTA'].notna()]
school_percentile_locations_df['MANAGED_BY_NAME'].unique()
# Analyze Grade Levels: Before I start filtering the data (for only 'All Grades')
# Question: Is admission at Secondary Schools based on residence?
location_category_description_list = ['Elementary', 'K-8', 'Secondary School',
'Junior High-Intermediate-Middle', 'High school',
'K-12 all grades', 'Early Childhood', 'Ungraded']
school_institution_dict = {}
for institution in ['DOE', 'Charter']:
school_categories_dict = dict.fromkeys(location_category_description_list)
for category in location_category_description_list:
_filtered = school_percentile_locations_df[school_percentile_locations_df['MANAGED_BY_NAME'] == institution]
school_categories_dict[category] = (_filtered[_filtered['LOCATION_CATEGORY_DESCRIPTION'] == category]
['NTA'].unique().shape[0])
school_institution_dict[institution] = school_categories_dict
# school_locations_df['MANAGED_BY_NAME'].unique() #
pd.DataFrame.from_dict(school_institution_dict, orient='index')
# Inspect the one High School in the dataset:
# school_percentile_locations_df[school_percentile_locations_df['LOCATION_CATEGORY_DESCRIPTION'] == 'High school']
# School NTA:
# Use to add Borough to NTA data, then to map to neighborhoods:
# Source: https://www1.nyc.gov/site/planning/data-maps/open-data/dwn-nynta.page
nta_json_url = 'https://services5.arcgis.com/GfwWNkhOj9bNBqoJ/ArcGIS/rest/services/NYC_Neighborhood_Tabulation_Areas/FeatureServer/0/query?where=1=1&outFields=*&outSR=4326&f=pgeojson'
resp = requests.get(nta_json_url)
nta_json = resp.json()
nta_property_list = []
for nta_property in nta_json['features']:
nta_dict = dict.fromkeys(['BoroName', 'NTACode', 'NTAName'])
for key in nta_dict:
nta_dict[key] = nta_property['properties'][key]
nta_property_list.append(nta_dict)
nta_df = pd.DataFrame(nta_property_list)
nta_df['BoroName'] = nta_df['BoroName'].replace({'Staten Island': 'Staten_Island'})
nta_df = nta_df.rename(columns={'BoroName': 'Borough'}) # 'NTAName': 'Neighborhood'
nta_df # [['Borough', 'NTAName', 'NTACode']]
# Didn't work: school_percentile_locations_df['Borough'] = school_percentile_locations_df['NTA'].map(school_locations_df.set_index('NTA')['Borough'])
school_percentile_locations_df['Borough'] = school_percentile_locations_df['NTA'].map(nta_df.set_index('NTACode')['Borough'])
school_percentile_locations_df
# Map NTA to Neighborhoods:
# Neighborhoods Names
# https://www1.nyc.gov/site/planning/data-maps/open-data.page
neighborhood_url = 'https://services5.arcgis.com/GfwWNkhOj9bNBqoJ/arcgis/rest/services/Neighborhood_Names/FeatureServer/0/query?where=1=1&outFields=*&outSR=4326&f=pgeojson'
resp = requests.get(neighborhood_url)
neighborhood_json = resp.json()
neighborhood_ids_list = []
neighborhood_details_list = []
for neighborhood_dict in neighborhood_json['features']:
neighborhood_ids_list.append(neighborhood_dict['id'])
d = {}
d['ID'] = neighborhood_dict['id']
# Neighborhood instead of name?
d['Name'] = neighborhood_dict['properties']['Name']
d['Borough'] = neighborhood_dict['properties']['Borough']
d['Lat'] = neighborhood_dict['geometry']['coordinates'][1]
d['Long'] = neighborhood_dict['geometry']['coordinates'][0]
neighborhood_details_list.append(d)
neighborhood_df = pd.DataFrame.from_dict(neighborhood_details_list)
neighborhood_df['Borough'] = neighborhood_df['Borough'].replace({'Staten Island': 'Staten_Island'})
neighborhood_df
# Neighborhood vs. NTA
neighborhood_agg_df = neighborhood_df.groupby(['Borough','Name']).mean()
nta_agg_df = nta_df.set_index(['Borough', 'NTAName'])
# Test Duplicates: # nta_agg_df[nta_agg_df.index.duplicated()]
# nta_agg_df.index.to_list()
nta_neighborhood_intersection = set(neighborhood_agg_df.index.to_list()).intersection(nta_agg_df.index.to_list())
nta_neighborhood_diff = set(neighborhood_agg_df.index.to_list()).difference(nta_agg_df.index.to_list())
nta_neighborhood_dict = {}
nta_neighborhood_dict['Intersection'] = len(nta_neighborhood_intersection)
nta_neighborhood_dict['Difference'] = len(nta_neighborhood_diff)
nta_vs_neighborhood_df = pd.DataFrame(nta_neighborhood_dict, index=[0])
nta_vs_neighborhood_df
```
### Manually Build the Crosswalk from NTA to Neighborhood:
* Neighborhood Map: https://www1.nyc.gov/site/planning/data-maps/city-neighborhoods.page
* NTA Map: https://www1.nyc.gov/assets/planning/download/pdf/data-maps/nyc-population/census2010/ntas.pdf
```
borough_list = ['Brooklyn', 'Queens', 'Bronx', 'Manhattan', 'Staten_Island']
_borough_search = 'Staten_Island' # 'Manhattan'
# Neighborhoods & NTA's: Which Neighborhoods overlap btw. Neighborhood and NTA's?
(set(neighborhood_agg_df.xs(_borough_search, level=0).index.to_list())
.intersection(nta_df[['Borough', 'NTAName', 'NTACode']].set_index(['Borough', 'NTAName']).xs(_borough_search, level=0).index.to_list()))
# Missing Neighborhoods: Which Neighborhoods need to be mapped from NTA's?
print((set(neighborhood_agg_df.xs(_borough_search, level=0).index.to_list())
.difference(nta_df[['Borough', 'NTAName', 'NTACode']].set_index(['Borough', 'NTAName']).xs(_borough_search, level=0).index.to_list())))
# Missing NTS: Which NTA's need to be mapped to a Neighborhood?
print((set(nta_df[['Borough', 'NTAName', 'NTACode']].set_index(['Borough', 'NTAName']).xs(_borough_search, level=0).index.to_list())
.difference(neighborhood_agg_df.xs(_borough_search, level=0).index.to_list())))
# Manually constructed Crosswalks:
# Intersection: Staten Island
#{'Arden Heights', 'Great Kills', 'Port Richmond', 'Westerleigh'}
# Missing Neighborhoods: Staten Island
#{#'Annadale', #'Arlington', #'Arrochar', #'Bay Terrace', #'Bloomfield', #'Bulls Head', #'Butler Manor',
#'Castleton Corners', #'Charleston', #'Chelsea', #'Clifton', #'Concord', #'Dongan Hills', #'Egbertville',
#'Elm Park', #'Eltingville', #'Emerson Hill', #'Fox Hills', #'Graniteville', #'Grant City', #'Grasmere',
#'Greenridge', #'Grymes Hill', #'Heartland Village', #'Howland Hook', #'Huguenot', #'Lighthouse Hill',
#'Manor Heights', #"Mariner's Harbor", #'Midland Beach', #'New Brighton', #'New Dorp', #'New Dorp Beach',
#'New Springville', #'Oakwood', #'Old Town', #'Park Hill', #'Pleasant Plains', #'Port Ivory', #"Prince's Bay",
#'Randall Manor', #'Richmond Town', #'Richmond Valley', #'Rosebank', #'Rossville', #'Sandy Ground',
#'Shore Acres', #'Silver Lake', #'South Beach', #'St. George', #'Stapleton', #'Sunnyside', #'Todt Hill',
#'Tompkinsville', #'Tottenville', #'Travis', #'West Brighton', #'Willowbrook', #'Woodrow'}
# Unmatched NTAs: Staten Island
nta_to_neighborhood_staten_island_dict = {
"Annadale-Huguenot-Prince's Bay-Eltingville": ['Annadale', 'Eltingville', 'Huguenot',
"Prince's Bay", 'Greenridge'],
'Charleston-Richmond Valley-Tottenville': ['Tottenville', 'Charleston', 'Richmond Valley', 'Butler Manor',
'Sandy Ground', 'Pleasant Plains'],
'Grasmere-Arrochar-Ft. Wadsworth': ['Arrochar', 'Grasmere', 'Concord'],
'Grymes Hill-Clifton-Fox Hills': ['Grymes Hill', 'Silver Lake', 'Randall Manor', 'Sunnyside',
'Fox Hills', 'Park Hill'],
"Mariner's Harbor-Arlington-Port Ivory-Graniteville": ['Elm Park', "Mariner's Harbor", 'Arlington',
'Port Ivory', 'Howland Hook', 'Graniteville'],
'New Brighton-Silver Lake': 'West Brighton',
'New Dorp-Midland Beach': ['New Dorp', 'New Dorp Beach', 'Grant City', 'Midland Beach'],
'New Springville-Bloomfield-Travis': ['Bloomfield', 'Chelsea', 'Travis', 'Bulls Head', 'Willowbrook',
'Manor Heights', 'New Springville'],
'Oakwood-Oakwood Beach': ['Oakwood','Richmond Town'],
'Old Town-Dongan Hills-South Beach': ['Dongan Hills', 'South Beach', 'Old Town'],
'Rossville-Woodrow': ['Rossville', 'Woodrow'],
'Stapleton-Rosebank': ['Tompkinsville', 'Stapleton', 'Clifton', 'Rosebank', 'Shore Acres'],
'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill': ['Heartland Village', 'Lighthouse Hill',
'Egbertville', 'Todt Hill', 'Emerson Hill'],
'West New Brighton-New Brighton-St. George': ['St. George', 'New Brighton'],
'park-cemetery-etc-Staten Island': np.nan,
# These have the key/name displayed as well as the other neighborhood
'Great Kills': ['Great Kills', 'Bay Terrace'],
'Westerleigh': ['Westerleigh', 'Castleton Corners']
}
# Intersection: Queens
#{'Astoria', 'Auburndale', 'Bellerose', 'Cambria Heights', 'College Point', 'Corona', 'East Elmhurst',
# 'Elmhurst', 'Flushing', 'Forest Hills', 'Glendale', 'Hollis', 'Jackson Heights', 'Kew Gardens',
# 'Kew Gardens Hills', 'Laurelton', 'Maspeth', 'Middle Village', 'Murray Hill', 'North Corona',
# 'Oakland Gardens', 'Ozone Park', 'Queens Village', 'Queensboro Hill', 'Rego Park', 'Richmond Hill',
# 'Ridgewood', 'Rosedale', 'South Jamaica', 'South Ozone Park', 'St. Albans', 'Steinway', 'Whitestone',
# 'Woodhaven', 'Woodside'}
# Missing Neighborhoods: Queens
#{#'Arverne', #'Astoria Heights', #'Bay Terrace', #'Bayside', #'Bayswater', #'Beechhurst', #'Bellaire',
#'Belle Harbor', #'Blissville', #'Breezy Point', #'Briarwood', #'Broad Channel', #'Brookville',
#'Douglaston', #'Edgemere', #'Far Rockaway', #'Floral Park', #'Forest Hills Gardens', #'Fresh Meadows',
#'Glen Oaks', #'Hammels', #'Hillcrest', #'Holliswood', #'Howard Beach', #'Hunters Point', #'Jamaica Center',
#'Jamaica Estates', #'Jamaica Hills', #'Lefrak City', #'Lindenwood', #'Little Neck', #'Long Island City',
#'Malba', #'Neponsit', #'Pomonok', #'Queensbridge', #'Ravenswood', #'Rochdale', #'Rockaway Beach',
#'Rockaway Park', #'Roxbury', #'Somerville', #'Springfield Gardens', #'Sunnyside', #'Sunnyside Gardens',
#'Utopia'}
# Unmatched NTAs: Queens
nta_to_neighborhood_queens_dict = {
'Airport': np.nan,
'Baisley Park': 'Rochdale',
'Bayside-Bayside Hills': 'Bayside',
'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel': ['Breezy Point', 'Roxbury', 'Neponsit',
'Belle Harbor', 'Rockaway Park',
'Rockaway Beach', 'Broad Channel'],
'Briarwood-Jamaica Hills': ['Briarwood', 'Jamaica Hills'],
'Douglas Manor-Douglaston-Little Neck': ['Little Neck', 'Douglaston'],
'East Flushing': 'Murray Hill',
'Elmhurst-Maspeth': 'Woodside',
'Far Rockaway-Bayswater': ['Far Rockaway', 'Bayswater'],
'Fresh Meadows-Utopia': 'Fresh Meadows',
'Ft. Totten-Bay Terrace-Clearview': 'Bay Terrace',
'Glen Oaks-Floral Park-New Hyde Park': ['Glen Oaks', 'Floral Park'],
'Hammels-Arverne-Edgemere': ['Hammels', 'Arverne', 'Somerville', 'Edgemere'],
'Hunters Point-Sunnyside-West Maspeth': ['Long Island City', 'Sunnyside Gardens', 'Sunnyside', 'Blissville'],
'Jamaica': 'Jamaica Center',
'Jamaica Estates-Holliswood': ['Jamaica Estates', 'Holliswood'],
'Lindenwood-Howard Beach': ['Lindenwood', 'Howard Beach'],
'Old Astoria': 'Astoria',
'Pomonok-Flushing Heights-Hillcrest': ['Pomonok', 'Utopia', 'Hillcrest'],
'Queensbridge-Ravenswood-Long Island City': ['Queensbridge', 'Ravenswood', 'Long Island City'],
'Springfield Gardens North': 'Rochdale',
'Springfield Gardens South-Brookville': ['Springfield Gardens', 'Brookville'],
'park-cemetery-etc-Queens': np.nan,
# These have the key/name displayed as well as the other neighborhood
'Steinway': ['Steinway', 'Astoria Heights'],
'Elmhurst': ['Elmhurst', 'Lefrak City'],
'Forest Hills': ['Forest Hills', 'Forest Hills Gardens'],
'Whitestone': ['Whitestone', 'Malba', 'Beechhurst'],
'Bellerose': ['Bellerose', 'Bellaire']
}
# Intersection: Bronx
#{'Belmont', 'Bronxdale', 'Co-op City', 'East Tremont', 'Hunts Point', 'Kingsbridge Heights',
# 'Longwood', 'Mount Hope', 'Norwood', 'Parkchester', 'Pelham Parkway'}
# Missing Neighborhoods: Bronx
#{#'Allerton', #'Baychester', #'Bedford Park', #'Castle Hill', #'City Island', #'Claremont Village',
#'Clason Point', #'Concourse', #'Concourse Village', #'Country Club', #'Eastchester', #'Edenwald',
#'Edgewater Park', #'Fieldston', #'Fordham', #'High Bridge', #'Kingsbridge', #'Melrose', #'Morris Heights',
#'Morris Park', #'Morrisania', #'Mott Haven', #'Mount Eden', #'North Riverdale', #'Olinville', #'Pelham Bay',
#'Pelham Gardens', #'Port Morris', #'Riverdale', #'Schuylerville', #'Soundview', #'Spuyten Duyvil',
#'Throgs Neck', #'Unionport', #'University Heights', #'Van Nest', #'Wakefield', #'West Farms',
#'Westchester Square', #'Williamsbridge', #'Woodlawn'}
# Unmatched NTAs: Bronx
nta_to_neighborhood_bronx_dict = {
'Allerton-Pelham Gardens': ['Allerton', 'Pelham Gardens'],
'Bedford Park-Fordham North': 'Bedford Park',
'Claremont-Bathgate': 'Claremont Village',
'Crotona Park East': 'Claremont Village',
'East Concourse-Concourse Village': ['Concourse', 'Concourse Village'],
'Eastchester-Edenwald-Baychester': ['Eastchester', 'Edenwald', 'Baychester'],
'Fordham South': 'Fordham',
'Highbridge': 'High Bridge',
'Melrose South-Mott Haven North': 'Melrose',
'Morrisania-Melrose': ['Claremont Village', 'Morrisania'],
'Mott Haven-Port Morris': ['Mott Haven', 'Port Morris'],
'North Riverdale-Fieldston-Riverdale': ['North Riverdale', 'Fieldston', 'Riverdale'],
'Pelham Bay-Country Club-City Island': ['Pelham Bay', 'Country Club', 'City Island'],
'Rikers Island': np.nan,
'Schuylerville-Throgs Neck-Edgewater Park': ['Schuylerville', 'Throgs Neck', 'Edgewater Park'],
'Soundview-Bruckner': 'Soundview',
'Soundview-Castle Hill-Clason Point-Harding Park': ['Castle Hill', 'Clason Point', 'Soundview'],
'Spuyten Duyvil-Kingsbridge': ['Spuyten Duyvil', 'Kingsbridge'],
'University Heights-Morris Heights': ['Morris Heights', 'University Heights'],
'Van Cortlandt Village': 'Kingsbridge',
'Van Nest-Morris Park-Westchester Square': ['Van Nest', 'Morris Park'],
'West Concourse': 'Mount Eden',
'West Farms-Bronx River': 'West Farms',
'Westchester-Unionport': ['Westchester Square', 'Unionport'],
'Williamsbridge-Olinville': ['Williamsbridge', 'Olinville'],
'Woodlawn-Wakefield': ['Woodlawn', 'Wakefield'],
'park-cemetery-etc-Bronx': np.nan
}
# Intersection: Brooklyn
#{'Bath Beach', 'Bay Ridge', 'Borough Park', 'Brighton Beach', 'Brownsville', 'Canarsie', 'Clinton Hill',
# 'Dyker Heights', 'East New York', 'East Williamsburg', 'Erasmus', 'Flatbush', 'Flatlands', 'Fort Greene',
# 'Gravesend', 'Greenpoint', 'Homecrest', 'Madison', 'Midwood', 'Ocean Hill', 'Prospect Heights',
# 'Starrett City', 'Williamsburg', 'Windsor Terrace'}
# Missing Neighborhoods: Brooklyn
#{#'Bedford Stuyvesant', #'Bensonhurst', #'Bergen Beach', #'Boerum Hill', #'Broadway Junction',
#'Brooklyn Heights', #'Bushwick', #'Carroll Gardens', #'City Line', #'Cobble Hill', #'Coney Island',
#'Crown Heights', #'Cypress Hills', #'Ditmas Park', #'Downtown', #'Dumbo', #'East Flatbush', #'Fort Hamilton',
#'Fulton Ferry', #'Georgetown', #'Gerritsen Beach', #'Gowanus', #'Highland Park', #'Kensington',
#'Manhattan Beach', #'Manhattan Terrace', #'Marine Park', #'Mill Basin', #'Mill Island', #'New Lots',
#'North Side', #'Ocean Parkway', #'Paerdegat Basin', #'Park Slope', #'Prospect Lefferts Gardens',
#'Prospect Park South', #'Red Hook', #'Remsen Village', #'Rugby', #'Sea Gate', #'Sheepshead Bay',
#'South Side', #'Sunset Park', #'Vinegar Hill', #'Weeksville', #'Wingate'}
# Unmatched NTAs: Brooklyn
nta_to_neighborhood_brooklyn_dict = {
'Bedford': 'Bedford Stuyvesant',
'Bensonhurst East': 'Bensonhurst',
'Bensonhurst West': 'Bensonhurst',
'Brooklyn Heights-Cobble Hill': ['Fulton Ferry', 'Brooklyn Heights', 'Cobble Hill'],
'Bushwick North': 'Bushwick',
'Bushwick South': 'Bushwick',
'Carroll Gardens-Columbia Street-Red Hook': ['Carroll Gardens', 'Red Hook'], # (Possibly Cobble Hill)
'Crown Heights North': ['Crown Heights', 'Weeksville'],
'Crown Heights South': 'Prospect Lefferts Gardens',
'Cypress Hills-City Line': ['City Line', 'Cypress Hills', 'Highland Park'],
'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill': ['Boerum Hill', 'Dumbo', 'Vinegar Hill', 'Downtown'],
'East Flatbush-Farragut': 'East Flatbush',
'East New York (Pennsylvania Ave)': 'New Lots',
'Georgetown-Marine Park-Bergen Beach-Mill Basin': ['Bergen Beach', 'Georgetown', 'Mill Basin',
'Mill Island', 'Marine Park'],
'Kensington-Ocean Parkway': 'Kensington',
'North Side-South Side': 'Williamsburg', # ['North Side', 'South Side'],
'Ocean Parkway South': 'Ocean Parkway',
'Park Slope-Gowanus': ['Gowanus', 'Park Slope'],
'Prospect Lefferts Gardens-Wingate': ['Prospect Lefferts Gardens', 'Wingate'],
'Rugby-Remsen Village': ['Remsen Village', 'Rugby'],
'Seagate-Coney Island': ['Coney Island', 'Sea Gate'],
'Sheepshead Bay-Gerritsen Beach-Manhattan Beach': ['Gerritsen Beach', 'Sheepshead Bay', 'Manhattan Beach'],
'Stuyvesant Heights': 'Bedford Stuyvesant',
'Sunset Park East': 'Sunset Park',
'Sunset Park West': 'Sunset Park',
'West Brighton': 'Brighton Beach',
'park-cemetery-etc-Brooklyn': np.nan,
# These have the key/name displayed as well as the other neighborhood
'Ocean Hill': ['Ocean Hill', 'Broadway Junction'],
'Flatbush': ['Flatbush', 'Ditmas Park', 'Prospect Park South'],
'Bay Ridge': ['Bay Ridge', 'Fort Hamilton'],
'Midwood': ['Midwood', 'Manhattan Terrace'],
'Canarsie': ['Canarsie', 'Paerdegat Basin']
}
# Intersection: Manhattan
# {'Chinatown', 'Clinton', 'East Village', 'Gramercy', 'Hamilton Heights', 'Lincoln Square',
# 'Lower East Side', 'Manhattanville', 'Morningside Heights', 'Upper West Side', 'West Village', 'Yorkville'}
# Missing Neighborhoods: Manhattan
#{#'Battery Park City', #'Carnegie Hill', #'Central Harlem', #'Chelsea', #'Civic Center', #'East Harlem',
#'Financial District', #'Flatiron', #'Greenwich Village', #'Hudson Yards', #'Inwood', #'Lenox Hill',
#'Little Italy', #'Manhattan Valley', #'Marble Hill', #'Midtown', #'Midtown South', #'Murray Hill',
#'Noho', #'Roosevelt Island', #'Soho', #'Stuyvesant Town', #'Sutton Place', #'Tribeca', #'Tudor City',
#'Turtle Bay', #'Upper East Side', #'Washington Heights'}
# Unmatched NTAs: Manhattan
nta_to_neighborhood_manhattan_dict = {
'Battery Park City-Lower Manhattan': ['Battery Park City', 'Financial District'],
'Central Harlem North-Polo Grounds': 'Central Harlem',
'Central Harlem South': 'Central Harlem',
'East Harlem North': 'East Harlem',
'East Harlem South': 'East Harlem',
'Hudson Yards-Chelsea-Flatiron-Union Square': ['Hudson Yards', 'Chelsea', 'Flatiron'],
'Lenox Hill-Roosevelt Island': ['Lenox Hill', 'Roosevelt Island'],
'Marble Hill-Inwood': ['Marble Hill', 'Inwood'],
'Murray Hill-Kips Bay': 'Murray Hill',
'SoHo-TriBeCa-Civic Center-Little Italy': ['Soho', 'Tribeca', 'Civic Center', 'Little Italy'],
'Turtle Bay-East Midtown': ['Turtle Bay', 'Sutton Place', 'Tudor City'],
'Upper East Side-Carnegie Hill': ['Upper East Side', 'Carnegie Hill'],
'Washington Heights North': 'Washington Heights',
'Washington Heights South': 'Washington Heights',
'Midtown-Midtown South': ['Midtown','Midtown South'],
'Stuyvesant Town-Cooper Village': 'Stuyvesant Town',
'park-cemetery-etc-Manhattan': np.nan,
# These have the key/name displayed as well as the other neighborhood
'West Village': ['West Village', 'Greenwich Village'],
'Upper West Side': ['Upper West Side', 'Manhattan Valley'],
'Chinatown': ['Chinatown', 'Noho']
}
# Join each Borough dict into one dict:
nta_to_neighborhood_dict = {**nta_to_neighborhood_brooklyn_dict, **nta_to_neighborhood_queens_dict,
**nta_to_neighborhood_bronx_dict, **nta_to_neighborhood_manhattan_dict,
**nta_to_neighborhood_staten_island_dict}
# import json
# json.dump(nta_to_neighborhood_dict, open(home_path / "NTA_to_Neighborhood_mapping_dict.json", 'w' ) )
# Complete Mapping:
school_percentile_locations_df['Neighborhood'] = school_percentile_locations_df['NTA_NAME']
school_percentile_locations_df['Neighborhood'] = [nta_to_neighborhood_dict.get(key,key) for key in school_percentile_locations_df['NTA_NAME']]
school_percentile_locations_df = school_percentile_locations_df.explode('Neighborhood')
school_percentile_locations_df
# Final Output: School Quality metric for each Neighborhood in NYC
final_school_percentile_avg_df = (school_percentile_locations_df.explode('Neighborhood')
.groupby(['Borough', 'Neighborhood']).mean())
final_school_percentile_avg_df
final_school_percentile_avg_df['Avg_Percentile'].isna().sum()
# Save to CSV:
final_school_percentile_avg_df.to_csv(path_or_buf= home_path / 'Neighborhood_School_Quality_Percentile_Avg.csv')
# Let's visualize the distribution of School Quality across each Borough:
plt.figure(figsize=(15,5))
ax = sns.histplot(data=final_school_percentile_avg_df.sort_values(by='Avg_Percentile').droplevel(1, axis=0),
x="Avg_Percentile", hue='Borough', multiple='stack', # 'stack',
stat='count', common_norm=False, # kde=True, 'density'
legend=True, #palette="viridis", # stat='count'
binwidth=0.05, alpha=0.5, element='step'); # color=u'b'
plt.title("Distribution of School Quality by Borough (Percentile Rank)", fontsize=15);
# Same data, just a different visualization:
plt.figure(figsize=(15,5))
ax = sns.kdeplot(data=final_school_percentile_avg_df.sort_values(by='Avg_Percentile').droplevel(1, axis=0),
x="Avg_Percentile", hue='Borough', common_norm=False, legend=True,
alpha=0.5, linewidth=5, );
plt.title("Distribution of School Quality by Borough (Percentile Rank)", fontsize=15);
```
### Some Thoughts on School Quality Across Boroughs:
* Manhattan, as expected, has a greater concentration of higher-quality schools
* Staten Island seems to have most schools above the 50th percentile, which is better than I anticipated
* Brooklyn actually has the hump of the distribution below the 50th percentile, as does the Bronx
* Meanwhile, Queens has the bulk of its distribution above the 50th percentile
**The 80th Percentile:**
Regarding the distribution of Borough schools ranked above the 80th percentile, Brooklyn and Staten Island have a similar percentage, Queens has a higher percentage, and the Bronx has a lower percentage. Manhattan has a majority of its schools above this threshold. It's an interesting area of further investigation - how do real estate prices compare between the Manhattan and outer-borough neighborhoods, in neighborhoods with school quality above the 80th percentile? And similarly, how do prices compare in Manhattan and outer-borough neighborhoods with school quality below the 80th percentile?
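The per-borough comparison above boils down to one statistic: the share of neighborhoods in each borough whose average school percentile exceeds 0.8. A minimal sketch of that computation, with hypothetical neighborhoods:

```python
import pandas as pd

# Hypothetical neighborhood-level percentiles, indexed by (Borough, Neighborhood)
df = pd.DataFrame(
    {'Avg_Percentile': [0.85, 0.55, 0.90, 0.40, 0.82]},
    index=pd.MultiIndex.from_tuples(
        [('Manhattan', 'Chelsea'), ('Bronx', 'Fordham'),
         ('Manhattan', 'Tribeca'), ('Bronx', 'Melrose'),
         ('Queens', 'Bayside')],
        names=['Borough', 'Neighborhood']))

# Boolean mask, then the mean of True/False per borough = share above threshold
share_above = (df['Avg_Percentile'] > 0.8).groupby(level='Borough').mean()
print(share_above)
```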
```
def map_neighborhoods_to_nta(series, nta_df):
series = series.squeeze()
import json; nta_to_neighborhood_dict = json.load(open(home_path / "NTA_to_Neighborhood_mapping_dict.json"))
mapped_df = nta_df.copy()
mapped_df['Neighborhood'] = [nta_to_neighborhood_dict.get(key,key) # only changing names in dict
for key in mapped_df['NTAName']]
mapped_df = mapped_df.explode('Neighborhood')
mapped_df[series.name] = (mapped_df.set_index(['Borough','Neighborhood']).index
.map(series.to_dict()))
mapped_df = mapped_df.groupby(['Borough','NTAName']).mean().fillna(0)
mapped_df = nta_df.set_index(['Borough','NTAName']).join(mapped_df)
return mapped_df
# Plotly Express: basic Choropleth Map
nta_schools_df = map_neighborhoods_to_nta(final_school_percentile_avg_df, nta_df)
import json; import plotly.express as px
nta_geojson_url = 'https://data.cityofnewyork.us/api/geospatial/cpf4-rkhq?method=export&format=GeoJSON'
resp = requests.get(nta_geojson_url)
nycmap = json.loads(resp.text)
# MapBox Token:
MapBox_Token_path = Path.home() / 'Jupyter' / 'MapBox_Token.txt'
if MapBox_Token_path.is_file():
px.set_mapbox_access_token(open(MapBox_Token_path, 'rt').read())
else:
print('Error: File not found')
fig = px.choropleth(data_frame=nta_schools_df.reset_index(drop=False),
geojson=nycmap, # nycmap, nta_geojson_url
locations='NTACode',
featureidkey='properties.ntacode',
color='Avg_Percentile',
color_continuous_scale='plasma', # inferno, plasma, balance
color_continuous_midpoint=0.5,
range_color=(0.0, 1.0),
center={"lat": 40.7, "lon": -74.0},
hover_name='NTAName', # 'NTAName'
labels={'NTAName': 'Neighborhood'},
title="NYC School Quality HeatMap (2021)",
scope='usa'
)
fig.update_geos(fitbounds="locations")
# fig.show(); # not rendering in GitHub, loading image instead:
Image(filename= home_path / 'Figures' / "Figure_Schools.png")
```
### Trajectory of School Quality: Improving or Deteriorating
In addition to the average percentile ranking for each school and each neighborhood, across all the data available from 2013 through 2019, let's also calculate which neighborhoods are on a positive trajectory. The real estate in a neighborhood can represent value, and opportunity, if prices are below what the regression across all of NYC (using the factors of commute time, neighborhood quality, and school quality) can explain. Another important dimension to the analysis is whether there is positive momentum: is the time-series of prices increasing, is there population growth, and is the school quality improving as well?
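The trajectory metric computed below is the slope of a degree-1 least-squares fit over a school's yearly percentile ranks; a quick numeric sketch with a made-up, steadily improving school:

```python
import numpy as np

# Yearly percentile ranks for one hypothetical school, 2013-2019
values = np.array([0.40, 0.45, 0.47, 0.52, 0.55, 0.58, 0.61])

# Slope of a degree-1 least-squares fit = average yearly change in percentile
slope = np.polyfit(np.arange(len(values)), values, 1)[0]
print(round(slope, 4))
```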
```
math_percentiles_df
def school_quality_trajectory(series):
    _regression_values = series.dropna().values
    if len(_regression_values) >= 2:
        return np.polyfit(np.arange(len(_regression_values)), _regression_values, 1)[0]
    else:
        return 0
# Calculate Trajectory: Regression Slope
math_trajectory_df = math_percentiles_df.copy()
math_trajectory_df['Trajectory'] = math_trajectory_df.T.apply(lambda x: school_quality_trajectory(x))
math_trajectory_df = math_trajectory_df.sort_values(by='Trajectory', ascending=False)
# Having missing years is causing outliers in the trajectory/slope parameter
math_trajectory_df
# math_trajectory_df['Trajectory'].plot();
plt.figure(figsize=(15,5))
ax = sns.histplot(data=math_trajectory_df, x="Trajectory", stat='density',
binwidth=0.01, alpha=0.5, element="step"); # color=u'b'
plt.title("Change of Percentile Across Period (2013 to 2019 ex. Missing Values)", fontsize=15);
# The outliers here are outside of a +/- 0.1 change in percentile rankings
# Let's check to what extent these are caused by missing years of data
#math_trajectory_df['Missing_Values'] = math_trajectory_df.T.apply(lambda x: x.isna().sum())
outliers_trajectory_df = (math_trajectory_df['Trajectory'].to_frame()
.join(math_trajectory_df.T.apply(lambda x: x.isna().sum()).rename('Missing_Values')))
# outliers_trajectory_df[outliers_trajectory_df['Missing_Values'] > 4]['Trajectory'].plot()
# outliers_trajectory_df[outliers_trajectory_df['Missing_Values'] <= 4]['Trajectory'].plot();
```
If a school has fewer than 4 years of data (3 or more missing years) and its trajectory falls in the outer extremes of the distribution (beyond a +/- 0.1 change in percentile), the outlier value is clipped to +/- 0.1. If, however, a school is an outlier (a change beyond +/- 0.1) but has mostly complete data, the value remains as-is — in that case the lack of data is less of a factor.
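The clipping rule just described can be expressed compactly as a boolean mask plus a conditional replacement; a minimal sketch with hypothetical trajectories and missing-year counts:

```python
import numpy as np
import pandas as pd

# Hypothetical trajectories (slopes) and counts of missing years per school
df = pd.DataFrame({'Trajectory': [0.15, -0.20, 0.12, -0.05],
                   'Missing_Values': [4, 1, 3, 5]})

# Clip only the outliers (|slope| >= 0.1) that also have 3+ missing years
sparse_outlier = (df['Trajectory'].abs() >= 0.1) & (df['Missing_Values'] >= 3)
df['Trajectory_Updated'] = df['Trajectory'].where(
    ~sparse_outlier, np.sign(df['Trajectory']) * 0.1)
print(df)
```

Note that the second school (-0.20 with only 1 missing year) keeps its extreme value, exactly as the rule intends.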
```
above_outliers_df = (outliers_trajectory_df[(outliers_trajectory_df['Trajectory'] >= 0.1)
& (outliers_trajectory_df['Missing_Values'] >= 3)])
above_outliers_df
below_outliers_df = (outliers_trajectory_df[(outliers_trajectory_df['Trajectory'] <= -0.1)
& (outliers_trajectory_df['Missing_Values'] >= 3)])
below_outliers_df
new_above_outliers_df = above_outliers_df.copy()
new_below_outliers_df = below_outliers_df.copy()
new_above_outliers_df['Trajectory'] = 0.1
new_below_outliers_df['Trajectory'] = -0.1
replace_outliers_df = pd.concat([new_above_outliers_df, new_below_outliers_df])
replace_outliers_df
math_trajectory_df
replace_outliers_df[replace_outliers_df.index.duplicated(keep=False)]
math_trajectory_df['Trajectory_Updated'] = math_trajectory_df['Trajectory']
math_trajectory_df['Trajectory_Updated'].update(replace_outliers_df['Trajectory'])
math_trajectory_df
plt.figure(figsize=(15,5))
ax = sns.histplot(data=math_trajectory_df, x="Trajectory_Updated", stat='density',
binwidth=0.01, alpha=0.5, element="step"); # color=u'b'
plt.title("Removed Outliers: Change of Percentile Across Period (2013 to 2019 ex. Missing Values)", fontsize=15);
math_trajectory_df.reset_index(drop=False)
# Now Map to Neighborhoods:
final_math_trajectory_df = math_trajectory_df.reset_index(drop=False)
final_math_trajectory_df['NTA_NAME'] = final_math_trajectory_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['NTA_NAME'])
final_math_trajectory_df['NTA'] = final_math_trajectory_df['DBN'].map(school_locations_df.set_index('ATS_SYSTEM_CODE')['NTA'])
final_math_trajectory_df['Borough'] = final_math_trajectory_df['NTA'].map(nta_df.set_index('NTACode')['Borough'])
# Complete Mapping:
final_math_trajectory_df['Neighborhood'] = final_math_trajectory_df['NTA_NAME']
final_math_trajectory_df['Neighborhood'] = [nta_to_neighborhood_dict.get(key,key) for key in final_math_trajectory_df['NTA_NAME']]
#final_math_trajectory_df = final_math_trajectory_df.explode('Neighborhood')
# Calc by Neighborhood
final_math_trajectory_df = (final_math_trajectory_df.explode('Neighborhood')
.groupby(['Borough', 'Neighborhood']).mean())
final_math_trajectory_df.sort_values('Trajectory_Updated', ascending=False) # ['Trajectory_Updated']
final_math_trajectory_df.loc['Brooklyn', 'Williamsburg'].to_frame().T
# To CSV:
final_math_trajectory_df.to_csv(path_or_buf= home_path / 'Neighborhood_School_Quality_Momentum.csv')
plt.figure(figsize=(15,5))
ax = sns.histplot(data=final_math_trajectory_df, x="Trajectory_Updated", stat='density',
binwidth=0.01, alpha=0.5, element="step"); # color=u'b'
plt.title("Neighborhoods: Change of Percentile Across Period (2013 to 2019)", fontsize=15);
final_math_trajectory_df['Trajectory_Updated'].sort_values(ascending=False)
```
| github_jupyter |
# Keyword Filter
This notebook generates keyword-filtered versions of the pre-filtered datasets (those filtered by evidence duplicates). More specifically, for each task, a list of keywords is created first. Then, all text-triple pairs that contain one of these keywords in their evidence are filtered out.
```
# Imports
import getpass
import os
import sys
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from stonkgs.constants import (
CELL_LINE_DIR,
CELL_TYPE_DIR,
DISEASE_DIR,
LOCATION_DIR,
MISC_DIR,
ORGAN_DIR,
SPECIES_DIR,
RELATION_TYPE_DIR,
)
```
Record details
```
print(getpass.getuser())
print(sys.version)
print(time.asctime())
```
## 1. Read the datasets
```
cell_line = pd.read_csv(os.path.join(CELL_LINE_DIR, 'cell_line_no_duplicates.tsv'), sep="\t")
cell_type = pd.read_csv(os.path.join(CELL_TYPE_DIR, 'cell_type_no_duplicates.tsv'), sep="\t")
disease = pd.read_csv(os.path.join(DISEASE_DIR, 'disease_no_duplicates.tsv'), sep="\t")
location = pd.read_csv(os.path.join(LOCATION_DIR, 'location_no_duplicates.tsv'), sep="\t")
organ = pd.read_csv(os.path.join(ORGAN_DIR, 'organ_no_duplicates.tsv'), sep="\t")
species = pd.read_csv(os.path.join(SPECIES_DIR, 'species_no_duplicates.tsv'), sep="\t")
relation_type = pd.read_csv(os.path.join(RELATION_TYPE_DIR, 'relation_type_no_duplicates.tsv'), sep="\t")
```
## 2. Create task specific "blacklists" of keywords
```
cell_line_dict = {
'3684': 'HeLa cell',
'7606': 'MCF7 cell',
'2795': 'DMS 114 cell',
'1230': 'HEK293 cell',
'8172': 'NIH-3T3 cell',
'3704': 'Hep G2 cell',
'9465': 'U-937 cell',
'7365': 'LNCAP cell',
'2596': 'COS-1 cell',
'9348': 'THP-1 cell',
}
cell_line_blacklist = list(cell_line_dict.values())
cell_line_pat = r'(\b{}\b)'.format('|'.join(cell_line_blacklist))
cell_type_dict = {
'7': 'early embryonic cell (metazoa)',
'57': 'fibroblast',
'235': 'macrophage',
'567': 'polymodal nocireceptor',
'938': 'CD56-bright natural killer cell',
'115': 'endothelial cell',
'150': 'glandular epithelial cell',
'192': 'smooth muscle cell',
'136': 'fat cell',
'182': 'hepatocyte',
}
cell_type_blacklist = list(cell_type_dict.values())
cell_type_pat = r'(\b{}\b)'.format('|'.join(cell_type_blacklist))
disease_dict = {
'1324': 'lung cancer',
'1936': 'atherosclerosis',
'1612': 'breast cancer',
'9538': 'multiple myeloma',
'1240': 'leukemia',
'219': 'colon cancer',
'1909': 'melanoma',
'769': 'neuroblastoma',
'3908': 'lung non-small cell carcinoma',
'3347': 'osteosarcoma',
}
disease_blacklist = list(disease_dict.values())
disease_pat = r'(\b{}\b)'.format('|'.join(disease_blacklist))
location_dict = {
'D002467': 'Cell Nucleus',
'D002462': 'Cell Membrane',
'D003593': 'Cytoplasm',
'D005109': 'Extracellular Matrix',
'D005110': 'Extracellular Space',
}
location_blacklist = list(location_dict.values())
location_pat = r'(\b{}\b)'.format('|'.join(location_blacklist))
organ_dict = {
'2048': 'lung',
'2107': 'liver',
'1986': 'endothelium',
'1134': 'skeletal muscle tissue',
'483': 'epithelium',
'947': 'aorta',
'310': 'breast',
'142': 'mechanosensory system', # cross reference from BILA
# https://www.ebi.ac.uk/ols/ontologies/uberon/terms?iri=http%3A%2F%2Fpurl.obolibrary.org%2Fobo%2FUBERON_0007037
'2367': 'prostate gland',
'948': 'heart',
}
organ_blacklist = list(organ_dict.values())
organ_pat = r'(\b{}\b)'.format('|'.join(organ_blacklist))
species_blacklist = ["rat", "rats", "human", "humans", "mouse", "mice", "homo sapiens", "rodents", "rattus", "mus"]
species_pat = r'(\b{}\b)'.format('|'.join(species_blacklist))
relation_type_blacklist = list(set(np.unique(relation_type['interaction'])).union(
set(np.unique(relation_type['polarity']))))
relation_type_blacklist = [term.replace("_", " ") for term in relation_type_blacklist]
relation_type_pat = r'(\b{}\b)'.format('|'.join(relation_type_blacklist))
print(relation_type_pat)
```
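One subtlety worth noting: `'|'.join(...)` does not escape regex metacharacters, so a term like `early embryonic cell (metazoa)` injects literal parentheses into the pattern and changes its grouping. A hedged sketch of a safer variant (toy data for illustration, not the project's pipeline) uses `re.escape` before joining:

```python
import re
import pandas as pd

terms = ['early embryonic cell (metazoa)', 'fibroblast']
# re.escape protects metacharacters such as the parentheses in the first term
pat = r'(\b{}\b)'.format('|'.join(re.escape(t) for t in terms))

df = pd.DataFrame({'evidence': ['A fibroblast was observed.', 'No keyword here.']})
kept = df[~df['evidence'].str.contains(pat, case=False, na=False)]
print(kept['evidence'].tolist())  # ['No keyword here.']
```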
## 3. Filter by the pattern created with the keywords (case insensitive)
```
cell_line_new = cell_line[~cell_line["evidence"].str.contains(cell_line_pat, case=False, na=False)]
cell_type_new = cell_type[~cell_type["evidence"].str.contains(cell_type_pat, case=False, na=False)]
disease_new = disease[~disease["evidence"].str.contains(disease_pat, case=False, na=False)]
location_new = location[~location["evidence"].str.contains(location_pat, case=False, na=False)]
organ_new = organ[~organ["evidence"].str.contains(organ_pat, case=False, na=False)]
species_new = species[~species["evidence"].str.contains(species_pat, case=False, na=False)]
relation_type_new = relation_type[~relation_type["evidence"].str.contains(relation_type_pat, case=False, na=False)]
```
Also print out how many entries were filtered out by the keywords
```
print(f'For cell line, {len(cell_line)-len(cell_line_new)} out of {len(cell_line)} entries were filtered out')
print(f'For cell type, {len(cell_type)-len(cell_type_new)} out of {len(cell_type)} entries were filtered out')
print(f'For disease, {len(disease)-len(disease_new)} out of {len(disease)} entries were filtered out')
print(f'For location, {len(location)-len(location_new)} out of {len(location)} entries were filtered out')
print(f'For organ, {len(organ)-len(organ_new)} out of {len(organ)} entries were filtered out')
print(f'For species, {len(species)-len(species_new)} out of {len(species)} entries were filtered out')
print(f'For relation type, {len(relation_type)-len(relation_type_new)} out of {len(relation_type)} entries were filtered out')
```
## 4. Save the new datasets
```
# reassign rather than drop in place, to avoid SettingWithCopyWarning on the filtered slices
cell_line_new = cell_line_new.drop(columns=['Unnamed: 0'])
cell_type_new = cell_type_new.drop(columns=['Unnamed: 0'])
disease_new = disease_new.drop(columns=['Unnamed: 0'])
location_new = location_new.drop(columns=['Unnamed: 0'])
organ_new = organ_new.drop(columns=['Unnamed: 0'])
species_new = species_new.drop(columns=['Unnamed: 0'])
relation_type_new = relation_type_new.drop(columns=['Unnamed: 0'])
cell_line_new.to_csv(os.path.join(CELL_LINE_DIR, 'cell_line_keyword_filtered.tsv'), sep="\t", index=None)
cell_type_new.to_csv(os.path.join(CELL_TYPE_DIR, 'cell_type_keyword_filtered.tsv'), sep="\t", index=None)
disease_new.to_csv(os.path.join(DISEASE_DIR, 'disease_keyword_filtered.tsv'), sep="\t", index=None)
location_new.to_csv(os.path.join(LOCATION_DIR, 'location_keyword_filtered.tsv'), sep="\t", index=None)
organ_new.to_csv(os.path.join(ORGAN_DIR, 'organ_keyword_filtered.tsv'), sep="\t", index=None)
species_new.to_csv(os.path.join(SPECIES_DIR, 'species_keyword_filtered.tsv'), sep="\t", index=None)
relation_type_new.to_csv(os.path.join(RELATION_TYPE_DIR, 'relation_type_keyword_filtered.tsv'), sep="\t", index=None)
```
# Logistic Regression with Linear and Polynomial Features
```
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import PolynomialFeatures
import numpy as np
```
## Reading the data
```
data = pd.read_csv(r"..\Data\small_ohe.csv")
data.head()
data.columns
# Separating the feature matrix from the target vector
X = data.drop('y', axis=1).values
y = data['y'].values
```
### Test Train Split
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
poly_features_1 = PolynomialFeatures(degree=1)
poly_features_2 = PolynomialFeatures(degree=2)
poly_features_3 = PolynomialFeatures(degree=3)
X_train_poly1 = poly_features_1.fit_transform(X_train)
X_train_poly2 = poly_features_2.fit_transform(X_train)
X_train_poly3 = poly_features_3.fit_transform(X_train)
```
## Logistic Regression with Linear Features
```
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train_poly1, y_train)
y_test_predict = logisticRegr.predict(poly_features_1.fit_transform(X_test))
# sklearn metrics expect (y_true, y_pred)
acc1 = metrics.accuracy_score(y_test, y_test_predict)
pre1 = metrics.precision_score(y_test, y_test_predict)
rec1 = metrics.recall_score(y_test, y_test_predict)
f1_1 = metrics.f1_score(y_test, y_test_predict)
```
## Logistic Regression with Polynomial Features of degree=2
```
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train_poly2, y_train)
y_test_predict = logisticRegr.predict(poly_features_2.fit_transform(X_test))
acc2 = metrics.accuracy_score(y_test, y_test_predict)
pre2 = metrics.precision_score(y_test, y_test_predict)
rec2 = metrics.recall_score(y_test, y_test_predict)
f1_2 = metrics.f1_score(y_test, y_test_predict)
```
## Logistic Regression with Polynomial Features of degree=3
```
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train_poly3, y_train)
y_test_predict = logisticRegr.predict(poly_features_3.fit_transform(X_test))
acc3 = metrics.accuracy_score(y_test, y_test_predict)
pre3 = metrics.precision_score(y_test, y_test_predict)
rec3 = metrics.recall_score(y_test, y_test_predict)
f1_3 = metrics.f1_score(y_test, y_test_predict)
```
## Comparing the results
```
table = pd.DataFrame({
'Metric': ['Accuracy', 'Precision', 'Recall', 'F1 Score'],
'Logistic Regression with Linear Features': [acc1, pre1, rec1, f1_1],
'Logistic Regression with Features of Degree=2' : [acc2, pre2, rec2, f1_2],
'Logistic Regression with Features of Degree=3' : [acc3, pre3, rec3, f1_3]})
table
```
### ----------------------------------------------------------------------------------------------------------------------------------------------------------
# Model evaluation after appliying PCA
```
from sklearn.decomposition import PCA
pca = PCA(n_components=0.90)
x_pca = pca.fit_transform(X)
x_pca.shape
```
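Passing a float to `n_components` tells scikit-learn to keep the smallest number of components whose cumulative explained variance reaches that fraction. A quick sketch on synthetic correlated data (illustrative only, not this project's features):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 10 correlated features built from a random linear mix
X_demo = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))

pca_demo = PCA(n_components=0.90).fit(X_demo)
cum_var = np.cumsum(pca_demo.explained_variance_ratio_)
print(pca_demo.n_components_, round(cum_var[-1], 3))  # components kept, variance captured
```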
### Observation: To capture 90% of the variance we need only 24 features out of 56
```
X_train, X_test, y_train, y_test = train_test_split(x_pca, y, test_size=0.30, random_state=42)
poly_features_1 = PolynomialFeatures(degree=1)
poly_features_2 = PolynomialFeatures(degree=2)
poly_features_3 = PolynomialFeatures(degree=3)
X_train_poly1 = poly_features_1.fit_transform(X_train)
X_train_poly2 = poly_features_2.fit_transform(X_train)
X_train_poly3 = poly_features_3.fit_transform(X_train)
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train_poly1, y_train)
y_test_predict = logisticRegr.predict(poly_features_1.fit_transform(X_test))
acc1 = metrics.accuracy_score(y_test, y_test_predict)
pre1 = metrics.precision_score(y_test, y_test_predict)
rec1 = metrics.recall_score(y_test, y_test_predict)
f1_1 = metrics.f1_score(y_test, y_test_predict)
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train_poly2, y_train)
y_test_predict = logisticRegr.predict(poly_features_2.fit_transform(X_test))
acc2 = metrics.accuracy_score(y_test, y_test_predict)
pre2 = metrics.precision_score(y_test, y_test_predict)
rec2 = metrics.recall_score(y_test, y_test_predict)
f1_2 = metrics.f1_score(y_test, y_test_predict)
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train_poly3, y_train)
y_test_predict = logisticRegr.predict(poly_features_3.fit_transform(X_test))
acc3 = metrics.accuracy_score(y_test, y_test_predict)
pre3 = metrics.precision_score(y_test, y_test_predict)
rec3 = metrics.recall_score(y_test, y_test_predict)
f1_3 = metrics.f1_score(y_test, y_test_predict)
table = pd.DataFrame({
'Metric': ['Accuracy', 'Precision', 'Recall', 'F1 Score'],
'Logistic Regression with Linear Features': [acc1, pre1, rec1, f1_1],
'Logistic Regression with Features of Degree=2' : [acc2, pre2, rec2, f1_2],
'Logistic Regression with Features of Degree=3' : [acc3, pre3, rec3, f1_3]})
table
```
## Observation: With just 24 features instead of 56 we get essentially the same accuracy
### ----------------------------------------------------------------------------------------------------------------------------------------------------------
# Plotting the ROC Curve for Better Visualization
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
logisticRegr = LogisticRegression()
poly_features_2 = PolynomialFeatures(degree=2)
X_poly2 = poly_features_2.fit_transform(X_train)
logisticRegr.fit(X_poly2, y_train)
predict = logisticRegr.predict(poly_features_2.fit_transform(X_test))
#data['predict'] = predict
TP2 = np.sum([(c==1 and p==1) for c,p in zip(y_test,predict)])
TN2 = np.sum([(c==0 and p==0) for c,p in zip(y_test,predict)])
FP2 = np.sum([(c==0 and p==1) for c,p in zip(y_test,predict)])
FN2 = np.sum([(c==1 and p==0) for c,p in zip(y_test,predict)])
P2 = np.sum(y_test)
N2 = len(y_test) - P2
phat2 = logisticRegr.predict_proba(poly_features_2.fit_transform(X_test))[:,1]
logisticRegr = LogisticRegression()
poly_features_1 = PolynomialFeatures(degree=1)
X_poly1 = poly_features_1.fit_transform(X_train)
logisticRegr.fit(X_poly1, y_train)
predict = logisticRegr.predict(poly_features_1.fit_transform(X_test))
#data['predict'] = predict
TP1 = np.sum([(c==1 and p==1) for c,p in zip(y_test,predict)])
TN1 = np.sum([(c==0 and p==0) for c,p in zip(y_test,predict)])
FP1 = np.sum([(c==0 and p==1) for c,p in zip(y_test,predict)])
FN1 = np.sum([(c==1 and p==0) for c,p in zip(y_test,predict)])
P1 = np.sum(y_test)
N1 = len(y_test) - P1
phat1 = logisticRegr.predict_proba(poly_features_1.fit_transform(X_test))[:,1]
logisticRegr = LogisticRegression()
poly_features_3 = PolynomialFeatures(degree=3)
X_poly3 = poly_features_3.fit_transform(X_train)
logisticRegr.fit(X_poly3, y_train)
predict = logisticRegr.predict(poly_features_3.fit_transform(X_test))
#data['predict'] = predict
TP3 = np.sum([(c==1 and p==1) for c,p in zip(y_test,predict)])
TN3 = np.sum([(c==0 and p==0) for c,p in zip(y_test,predict)])
FP3 = np.sum([(c==0 and p==1) for c,p in zip(y_test,predict)])
FN3 = np.sum([(c==1 and p==0) for c,p in zip(y_test,predict)])
P3 = np.sum(y_test)
N3 = len(y_test) - P3
phat3 = logisticRegr.predict_proba(poly_features_3.fit_transform(X_test))[:,1]
from sklearn.metrics import roc_curve
fpr1, tpr1, thresholds1 = roc_curve(y_test, phat1)
plt.plot(fpr1, tpr1)
plt.plot(FP1/N1, TP1/P1, label = 'Degree=1', color='blue')
fpr2, tpr2, thresholds2 = roc_curve(y_test, phat2)
plt.plot(fpr2, tpr2)
plt.plot(FP2/N2, TP2/P2, label = 'Degree=2', color='darkorange')
fpr3, tpr3, thresholds3 = roc_curve(y_test, phat3)
plt.plot(fpr3, tpr3)
plt.plot(FP3/N3, TP3/P3, label = 'Degree=3', color='green')
x = np.linspace(0,1,num=50)
plt.plot(x,x,color='lightgrey',linestyle='--',marker='',lw=2,label='random guess')
plt.legend(fontsize = 14)
plt.xlabel('False positive rate', fontsize = 18)
plt.ylabel('True positive rate', fontsize = 18)
plt.xlim(0,1.1)
plt.ylim(0,1.1)
plt.show()
```
# [LEGALST-123] Lab 24: Morality and Sentiment Analysis
This lab covers morality and sentiment analysis using *Moral Foundations Theory* with dictionary-based analysis, connecting to the topic modeling and classification ideas from previous labs.
### Table of Contents
[The Data](#section data)<br>
[Goal and Question](#section goal)<br>
1 - [Text Pre-processing](#section 1)<br>
2 - [Polarity](#section 2)<br>
3 - [Moral Foundations Theory](#section 3)<br>
4 - [Non-negative matrix factorization](#section 4)<br>
**Dependencies:**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import json
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
import nltk
from nltk.stem.snowball import SnowballStemmer
import seaborn as sns
!pip install textblob
from textblob import TextBlob
```
----
## The Data<a id='section data'></a>
For this lab, we'll use the Old Bailey dataset, which you should all be familiar with by now. The dataset is rather large, so we will compare two year-long periods, one from before 1827 and one after. Read the question to better understand why we look at 1827.
## Goal and Question<a id='section goal'></a>
The goal of today's lab is to explore sentiment analysis with three different approaches – [polarity scoring](#section 2), [topic-specific dictionary methods](#section 3), and [topic modeling](#section 4).
We'll look at sentiment in the context of the following question:
**Did the way judges, prosecutors, and witnesses talk about moral culpability change after the Bloody Code was mostly repealed in 1827 (at the leading edge of a wave of legal reform in England)?**
*Note: this is a question that could encompass an entire research project. Today's lab uses a very small subset of data due to datahub memory limitations, and skips over many of the steps needed for truly robust conclusions.*
Something to think about: What are some things you would need to consider before answering this question?
----
## Section 1: Text Pre-processing<a id='section 1'></a>
### Before we start
This dataset we are about to look at is incredibly large, so to avoid crashing our datahub kernel, we only consider two years: 1822 and 1832. These two years were chosen as periods that were equally far from 1827 (when the Bloody Code was mostly repealed), while not being so far from each other that we'd expect to see major language usage change due only to time.
----
### Getting started
Let's get working with the data.
```
# contains Old Bailey trial data from 1822 and 1832
old_bailey = pd.read_csv('data/obc_1822_1832.csv', index_col='trial_id')
# select only the columns we need for this lab
old_bailey = old_bailey.loc[:, ['year', 'transcript']]
old_bailey.head()
```
Awesome! We now have data we can work with. Before we start anything, we must clean the text!
Just to review, we want to process our text by:<br>
1) Lowercasing the words<br>
2) Cleaning up punctuation<br>
3) Splitting into individual words<br>
4) Stemming the word tokens<br>
For the sake of time (and to get to the good stuff), we've provided the pre-processing code below. This a big data set, so the code will take up to a minute to run.
```
# pre-process the data
lower_cased = old_bailey['transcript'].str.lower()
punct_re = r'[^\w\s]'
lower_no_punc = lower_cased.str.replace(punct_re, ' ', regex=True)
tokens = lower_no_punc.str.split()
old_bailey['tokens'] = tokens
stemmer = SnowballStemmer('english')
stem_lists = []
for token_list in old_bailey['tokens']:
stem_lists.append([stemmer.stem(wd) for wd in token_list])
old_bailey['stemmed_tokens'] = stem_lists
old_bailey.head()
```
----
## Section 2: Polarity <a id='section 2'></a>
One way to measure the tone of a text is to look at the text **polarity**: a measure of how positive or negative it is perceived to be. For example, a sentence like "I love Berkeley!" would be considered positive, while a sentence like "Stanford is terrible!" would be negative. And, because polarity is represented as a scale, some words have stronger positive or negative sentiment than others- "I like data science" is positive, but not as positive as "I love data science."
We will use the [TextBlob](https://textblob.readthedocs.io/en/dev/quickstart.html#sentiment-analysis) tools to analyze the sentiment of Old Bailey. TextBlob provides access to many common text-processing operations, and includes a lexicon and rule-based sentiment analysis tool.
A TextBlob is created around string of text:
```
# creates a sentiment analyzer
blob = TextBlob("This is a super exciting, totally awesome test sentence.")
blob
```
We can access the sentiment by using `.sentiment`.
```
blob.sentiment
```
`sentiment` returns two values: the **polarity** and the **subjectivity**. The polarity ranges between -1 and 1 where -1 is a very negative text and 1 is a very positive text. Subjectivity ranges between 0 and 1 where 0 is a very objective text and 1 is a very subjective text (i.e. one that can be interpreted many different ways). You can get the polarity by using `.polarity`.
```
blob.sentiment.polarity
```
Polarity is calculated fairly simply: TextBlob accesses a dictionary of words that have been assigned polarity and subjectivity scores, looks up each word in the given text, and averages over the sentence. It also employs a few rules, such as changing the polarity of a word that comes after a negation.
```
happy = TextBlob('Happy')
print(happy.sentiment.polarity)
negation = TextBlob('Not')
print(negation.sentiment.polarity)
negated_happy = TextBlob('Not happy')
print(negated_happy.sentiment.polarity)
```
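A stripped-down sketch of that averaging rule (a toy lexicon invented for illustration; TextBlob's real lexicon and rules are far richer):

```python
# Toy polarity lexicon; TextBlob's actual dictionary is much larger
LEXICON = {'happy': 0.8, 'terrible': -1.0, 'awesome': 1.0}
NEGATIONS = {'not', 'never', 'no'}

def toy_polarity(text):
    """Average lexicon scores, flipping and dampening a word after a negation."""
    words = text.lower().split()
    scores = []
    for i, wd in enumerate(words):
        if wd in LEXICON:
            score = LEXICON[wd]
            if i > 0 and words[i - 1] in NEGATIONS:
                score *= -0.5  # negation flips and dampens the score
            scores.append(score)
    return sum(scores) / len(scores) if scores else 0.0

print(toy_polarity('happy'), toy_polarity('not happy'))  # 0.8 -0.4
```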
**QUESTION:** Try calculating the polarity scores of a few of your own sentences in the cell below.
```
# test the polarity scoring for different sentences
my_blob = ...
...
```
Next, we want to get the average polarity for each transcript.
**EXERCISE:** define a function that will take in a string of text and return the polarity of that text.
```
def get_polarity(text):
"""Return the polarity of TEXT"""
...
return ...
# SOLUTION
def get_polarity(text):
"""Return the polarity of TEXT"""
blob = TextBlob(text)
return blob.sentiment.polarity
```
**EXERCISE**: Using `.apply` and your `get_polarity` function, get the polarity of every transcript in the Old Bailey data.
```
polarities = ...
# SOLUTION
polarities = old_bailey['transcript'].apply(get_polarity)
# add the polarities as a column
old_bailey['polarity'] = polarities
old_bailey.head()
```
**QUESTION:**
- What was the most negative transcript/transcripts?
- What was the most positive transcript/transcripts?
```
# find the transcript with the highest polarity
most_pos = ...
most_pos
# SOLUTION
# find the transcript with the highest polarity
most_pos = old_bailey[old_bailey.polarity == old_bailey.polarity.max()]
print(most_pos.transcript)
# find the transcript with the lowest polarity
most_neg = ...
most_neg
# SOLUTION
# find the transcript with the lowest polarity
most_neg = old_bailey[old_bailey.polarity == old_bailey.polarity.min()]
print(most_neg.transcript)
```
**EXERCISE:** Let's take a look at violin plots to better compare how polarity is distributed for each of the two years, before and after 1827.
To show both years at once, it's easiest to use Seaborn's (abbreviated as `sns`) `violinplot` function. `y` is set to the name of the variable (a string) whose distributions we want to see, `x` is set to the name of the variable (also a string) that we want to compare distributions across, and `data` is set to the dataframe (not a string) with all the values.
```
# uncomment the next line and fill in the code to create the violin plots
#sns.violinplot(x=..., y=..., data=...)
#SOLUTION
sns.violinplot(x="year", y="polarity", data=old_bailey);
```
**QUESTION:** What does this plot show us?
What are some advantages to using polarity as a way to measure moral tone? What are some issues with this approach? Consider also how these answers might change for a different data set.
*Write your answer here.*
**SOLUTION**
The plot shows that the violin plots are about the same shape and have about the same median around 0.
Polarity is a nice metric because it's easy to calculate and interpret. It might run into issues because it will ignore words that aren't in its dictionary; this is especially an issue for this data, which has text from almost 200 years ago and may contain a significant number of more archaic words.
----
## Section 3: Moral Foundations Theory<a id='section 3'></a>
Another approach is to create specialized dictionaries containing specific words of interest to try to analyze sentiment from a particular angle (i.e. use a **dictionary method**). One set of researchers did just that from the perspective of [Moral Foundations Theory](http://moralfoundations.org/). We will now use it to see if we can understand more about the moral tone of Old Bailey transcripts than by using general polarity. You should be doing something like this for your homework. We will be using a provided moral foundations dictionary.
```
with open('data/haidt_dict.json') as json_data:
mft_dict = json.load(json_data)
```
Moral Foundations Theory posits that there are five (with an occasional sixth) innate, universal psychological foundations of morality, and that those foundations shape human cultures and institutions (including legal ones). The keys of the dictionary correspond to the five foundations.
```
#look at the keys of the dictionary provided
keys = mft_dict.keys()
list(keys)
```
And the values of the dictionary are lists of words associated with each foundation.
```
mft_dict[list(keys)[0]] #one example of the values provided for the first key
```
### Calculating Percentages
In this approach, we'll use the frequency of Moral Foundations-related words as a measure of how the transcripts talk about morality and see if there's a difference between pre- and post-1827 trends.
As a first step, we need to know the total number of words in each transcript.
**EXERCISE:** Add a column to `old_bailey` with the number of words corresponding to each transcript.
```
# create a new column called 'total_words'
old_bailey['total_words'] = ...
old_bailey.head()
# SOLUTION
old_bailey['total_words'] = old_bailey['tokens'].apply(len)
old_bailey.head()
```
Next, we need to calculate, for each transcript, the number of matches to entries in our dictionary for each foundation.
Run the next cell to add five new columns to `old_bailey`, one per foundation, that show the number of word matches. This cell will also likely take some time to run (no more than a minute). Note that by now, you have the skills to write all the code in the next cell: we're just giving it to you because it's long, fiddly, and writing nested for-loops is not the focus of this lab. Make sure you know what it does before you move on, though.
```
# Will take a bit of time to run due to the large size.
# do the following code for each foundation
for foundation in mft_dict.keys():
# create a new, empty column
num_match_words = np.zeros(len(old_bailey))
stems = mft_dict[foundation]
# do the following code for each foundation word
for stem in stems:
# find related word matches
wd_count = np.array([sum([wd == stem for wd in transcript])for transcript in old_bailey['stemmed_tokens']])
# add the number of matches to the total
num_match_words += wd_count
# create a new column for each foundation with the number of related words per transcript
old_bailey[foundation] = num_match_words
old_bailey.head()
```
**EXERCISE:** The columns for each foundation currently contain the number of words related to that foundation for each of the trials. Calculate the *percentage* of foundation words per trial by dividing the number of matched words by the number of total words and multiplying by 100.
```
# do this for each foundation column
for foundation in mft_dict.keys():
old_bailey[foundation] = old_bailey[foundation] # fill in your code to transform the values here
old_bailey.head()
#SOLUTION
for foundation in mft_dict.keys():
old_bailey[foundation] = (old_bailey[foundation] / old_bailey['total_words']) * 100
old_bailey.head()
```
Let's compare the average percentage of foundation words per transcript for the two dates, 1822, and 1832.
**EXERCISE**: Create a dataframe that only has columns for the five foundations plus the year. Then, use the pandas dataframe function `groupby` to group rows by the year, and call the `mean` function on the `groupby` output to get the averages for each foundation.
```
# the names of the columns we want to keep
mft_columns = ['authority/subversion', 'care/harm', 'fairness/cheating', 'loyalty/betrayal',
'sanctity/degradation', 'year']
# create a data frame with only the above columns included
mft_df = ...
# groups the rows of mft_df by year, then take the mean
foundation_avgs = ...
foundation_avgs
# SOLUTION
# the names of the columns we want to keep
mft_columns = ['authority/subversion', 'care/harm', 'fairness/cheating', 'loyalty/betrayal',
'sanctity/degradation', 'year']
# create a data frame with only the above columns included
mft_df = old_bailey.loc[:, mft_columns]
# groups the rows by year, then take the mean
foundation_avgs = mft_df.groupby('year').mean()
foundation_avgs
```
Next, create a bar graph. The simplest way is to call `.plot.barh()` on your dataframe of the averages.
Also try calling `.transpose()` on your averages dataframe, then making a bar graph of that. The transpose function flips the rows and columns and can make it easier to compare the percentages.
```
# create a bar graph
...
# SOLUTION
# create a bar graph
foundation_avgs.transpose().plot.barh()
```
**QUESTION:** What do you see from the bar graphs you created?
Why would this be a good approach to answering the question of how talk about morality changed between these two periods? What are some limitations of this approach (Hint: look at the values on the graphs you calculated, and remember: these are *percentages*, not proportions)?
*Write your answer here.*
*Potential answer: The proportion of care/harm words is greater before the repeal, and the proportion of authority/subversion words is greater after the repeal. Rarity of words means results may not be significant. And, since this dictionary was not designed specifically for this data set, it also may be missing relevant words.*
----
## Section 4: Non-negative matrix factorization<a id='section 4'></a>
In this section, you can get an idea of sentiment using topic modeling algorithms, something you touched on in the 4/10 lab earlier this week, to help look for patterns.
On Tuesday, you explored Latent Dirichlet Allocation (LDA) in gensim to look for topics in a corpus. Non-negative matrix factorization (NMF), not included in gensim, is another such way to look for topics in unstructured text data. The two methods differ in what kinds of math they use 'under the hood': LDA relies on probabilistic graphical modeling, while NMF uses linear algebra.
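Concretely, NMF approximates a non-negative matrix V (here, documents by terms) as the product of two smaller non-negative matrices: W (document-topic weights) and H (topic-term weights). A minimal sketch on a toy matrix (illustrative only, not this lab's data):

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy "document-term" matrix: row 3 is row 1 doubled, so V has rank 2
V = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 4.0]])

model = NMF(n_components=2, init='nndsvd', random_state=1, max_iter=500)
W = model.fit_transform(V)  # document-topic weights, shape (3, 2)
H = model.components_       # topic-term weights, shape (2, 3)
print(np.round(W @ H, 2))   # low-rank reconstruction close to V
```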
We want to generate the topics found for 1822 and 1832 trials, look for topics related to tone or morality, and see if there's a difference between the two.
Run the cell below to make two lists: one list of the trial transcripts for each year.
```
# trial transcripts for 1822
transcripts_1822 = old_bailey[old_bailey['year'] == 1822]['transcript']
# trial transcripts for 1832
transcripts_1832 = old_bailey[old_bailey['year'] == 1832]['transcript']
```
We'll start by looking at 1822. The following cell creates the tf-idf vectorizer, fits the text data, and assigns the list of feature names (i.e. the words in the documents) to `tfidf_feature_names_1822`.
Check out the [documentation for TfidfVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) if you need a refresher on what it does.
```
# create the vectorizer
tfidf_vectorizer_1822 = TfidfVectorizer(max_df=0.95, min_df=2, max_features=1000, stop_words='english')
# fit the data
tfidf_1822 = tfidf_vectorizer_1822.fit_transform(transcripts_1822)
# get the feature names
tfidf_feature_names_1822 = tfidf_vectorizer_1822.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2
```
**EXERCISE:** Create the TfidfVectorizer, fit_transform the data, and get the feature names for 1832.
```
# create the vectorizer
tfidf_vectorizer_1832 = ...
# fit the data
tfidf_1832 = ...
# get the feature names
tfidf_feature_names_1832 = ...
# SOLUTION
tfidf_vectorizer_1832 = TfidfVectorizer(max_df=0.95, min_df=2, max_features=1000, stop_words='english')
tfidf_1832 = tfidf_vectorizer_1832.fit_transform(transcripts_1832)
tfidf_feature_names_1832 = tfidf_vectorizer_1832.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2
```
As mentioned previously, these algorithms cannot automatically determine the number of topics, so this value must be set before running the algorithm. Initialising NMF with ‘nndsvd’ rather than random initialisation reduces the time it takes NMF to converge. `random_state` gives the seed for the random number generator to use: this lets us reproduce our results in the future.
```
num_topics = 20
# Run NMF for 1822
nmf_1822 = NMF(n_components=num_topics, random_state=1, init='nndsvd').fit(tfidf_1822)
```
**EXERCISE:** Run NMF using `num_topics` for the number of components on the data from 1832.
```
# Run NMF for 1832
nmf_1832 = ...
# SOLUTION
nmf_1832 = NMF(n_components=num_topics, random_state=1, init='nndsvd').fit(tfidf_1832)
```
We've provided the function below to display the topics found by NMF.
```
def display_topics(model, feature_names, num_top_words):
"""Displays NUM_TOP_WORDS topics for MODEL """
for topic_idx, topic in enumerate(model.components_):
print("Topic %d:" % (topic_idx))
print(" ".join([feature_names[i]
for i in topic.argsort()[:-num_top_words - 1:-1]]))
# the number of words to display per topic
num_top_words = 10
# display the topics for 1822
display_topics(nmf_1822, tfidf_feature_names_1822, num_top_words)
# display the topics for 1832
display_topics(nmf_1832, tfidf_feature_names_1832, num_top_words)
```
As in LDA, it often takes some hyperparameter tuning before you get a coherent set of topics. Go back and tune the parameters for tfidf and NMF to see if you can get topics that show something about the moral sentiment of the transcripts. `num_topics` usually makes the most dramatic difference.
Once you've compared several sets of topics, answer the next question:
**QUESTION:** What did the best set of topics you found tell you about the tone of the documents for the two periods? Why might this approach be a good way to study sentiment? What are some issues with this approach?
*Write your answer here.*
*Potential answer: NMF can be good for uncovering patterns in unstructured data without imposing structure like in dictionary methods. But, unstructured data means you might have to dig a lot to find relevant topics.*
----
Fantastic! Now you know how to approach sentiment analysis several ways using general sentiment analysis with `VADER`, Moral Foundations Theory, and Non-negative matrix factorization.
Ultimately, there were limitations to all of these methods. In your homework, you'll explore how to generate your own dictionary to try to overcome some of these limitations.
----
## Bibliography
- Moral Foundations Theory background and dictionary: *Jesse Graham and Jonathan Haidt http://moralfoundations.org/*
- Moral Foundations analysis code adapted from *https://github.com/ds-modules/XRHETOR-R1A/blob/master/02-Moral-Foundations-Analysis/02-Moral-Foundations-Analysis.ipynb*
- NMF code and explanation adapted from *Aneesha Bakharia. 'Topic Modeling With Scikit-Learn.' https://medium.com/mlreview/topic-modeling-with-scikit-learn-e80d33668730*
----
Notebook developed by: Gibson Chu, Keeley Takimoto
Data Science Modules: http://data.berkeley.edu/education/modules
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
import scipy.signal as sp
import numpy.random as rnd
import ssid
%matplotlib inline
# Just a helper for defining plants
def generalizedPlant(A,B,C,D,Cov,dt):
CovChol = la.cholesky(Cov,lower=True)
NumStates = len(A)
B1 = CovChol[:NumStates,:]
B2 = B
Bbig = np.hstack((B1,B2))
D1 = CovChol[NumStates:,:]
D2 = D
Dbig = np.hstack((D1,D2))
P = (A,Bbig,C,Dbig,dt)
return P
# Define the plant. Here it is a coupled spring-mass-damper system with forces and position measurements.
dt = .1
k1 = 1.
k2 = 2.
c1 = 2.
c2 = 2.
m1 = 1.
m2 = 1.
Kmat = np.array([[k1+k2,-k2],
[-k2,k2]])
Cmat = np.diag([c1,c2])
Mmat = np.diag([m1,m2])
Ac = np.zeros((4,4))
Ac[:2,2:] = np.eye(2)
Ac[2:,:2] = -la.solve(Mmat,Kmat,assume_a='pos')  # 'sym_pos=True' was removed in SciPy 1.11
Ac[2:,2:] = -la.solve(Mmat,Cmat,assume_a='pos')
A = np.eye(4) + dt * Ac
Bc = np.zeros((4,2))
Bc[2:] = la.inv(Mmat)
B = dt * Bc
C = np.zeros((2,4))
C[:,:2] = np.eye(2)
NumInputs = B.shape[1]
NumOutputs = C.shape[0]
D = np.zeros((NumOutputs,NumInputs))
NumStates = len(A)
NumOutputs,NumInputs = D.shape
Q = .1 * np.eye(NumStates) * dt
S = np.zeros((NumStates,NumOutputs))
R = .1 * np.eye(NumOutputs) / dt
CovTop = np.hstack((Q,S))
CovBot = np.hstack((S.T,R))
Cov = np.vstack((CovTop,CovBot))
P = generalizedPlant(A,B,C,D,Cov,dt)
# Generate an input to the plant.
# Here we are using two independent Ornstein-Uhlenbeck processes.
theta = .001
AU = (1- dt * theta) * np.eye(NumInputs)
BU = 10 * np.eye(NumInputs) * np.sqrt(dt)
CU = np.eye(NumInputs)
DU = np.zeros((NumInputs,NumInputs)) / np.sqrt(dt)
# Sampling rate of 1
USys = (AU,BU,CU,DU,dt)
NumURows = 10
NumUCols = 2000
NumU = 2 * NumURows + NumUCols - 1
Time = np.arange(NumU)
uu = rnd.randn(NumU,NumInputs)
# u is the actual input
tu,u,xu = sp.dlsim(USys,uu)
# Simulate the plant with the generated input as well as noise
w = rnd.randn(NumU,NumStates+NumOutputs)
bigU = np.hstack((w,u))
tout,y,xout = sp.dlsim(P,bigU)
plt.plot(tout,u)
plt.plot(tout,y)
# Identify matrices using the Standard N4SID algorithm
AID,BID,CID,DID,CovID,S = ssid.N4SID(u.T,y.T,NumURows,NumUCols,4)
# For consistency with the Subspace ID literature,
# we transpose the signals so that each input and output instance is a column vector
# This is opposite from the lsim convention
# Check the singular values
plt.plot(S / S.sum())
# Check how the computed impulse response compares.
plt.figure()
NhSteps = 200
tout,hTrue = sp.dimpulse((A,B,C,D,dt),t=dt * np.arange(NhSteps))
tout,hID = sp.dimpulse((AID,BID,CID,DID,dt),t=dt*np.arange(NhSteps))
for i in range(NumOutputs):
for j in range(NumInputs):
plt.figure()
plt.plot(tout,hTrue[j][:,i])
plt.plot(tout,hID[j][:,i])
"""
Identify matrices using the N4SID algorithm with a stability constraint, as in:
Lacy and Bernstein, "Subspace Identification with Guaranteed Stability Using Constrained Optimization", 2002
"""
AID,BID,CID,DID,CovID,S = ssid.N4SID(u.T,y.T,NumURows,NumUCols,4,require_stable=True)
# The singular values will be exactly the same so will not be plotted
# Check how the computed impulse response compares.
plt.figure()
NhSteps = 200
tout,hTrue = sp.dimpulse((A,B,C,D,dt),t=dt * np.arange(NhSteps))
tout,hID = sp.dimpulse((AID,BID,CID,DID,dt),t=dt*np.arange(NhSteps))
for i in range(NumOutputs):
for j in range(NumInputs):
plt.figure()
plt.plot(tout,hTrue[j][:,i])
plt.plot(tout,hID[j][:,i])
```
# TSG003 - Show BDC Spark sessions
## Steps
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
# Hints for tool retry (on transient fault), known errors and install guide
#
retry_hints = {'python': [ ], 'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], }
error_hints = {'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'], ], 'azdata': [['Please run \'azdata login\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\'t open lib \'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ['NameError: name \'azdata_login_secret_name\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \'No credentials were supplied\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \'accept the license terms to use this product\'', '../repair/tsg126-accept-license-terms.ipynb'], ], 'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'], ], }
install_hint = {'azdata': [ 'SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb' ], 'kubectl': [ 'SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb' ], }
print('Common functions defined successfully.')
```
### Get the Kubernetes namespace for the big data cluster
Get the namespace of the Big Data Cluster using the kubectl command line
interface.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA_NAMESPACE, before starting Azure
Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
```
### Get the controller username and password
Get the controller username and password from the Kubernetes Secret
Store and place in the required AZDATA_USERNAME and AZDATA_PASSWORD
environment variables.
```
# Place controller secret in AZDATA_USERNAME/AZDATA_PASSWORD environment variables
import os, base64
os.environ["AZDATA_USERNAME"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.username}}', return_output=True, base64_decode=True)
os.environ["AZDATA_PASSWORD"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.password}}', return_output=True, base64_decode=True)
print(f"Controller username '{os.environ['AZDATA_USERNAME']}' and password stored in environment variables")
```
### Run azdata to list spark sessions
```
run('azdata bdc spark session list')
print("Notebook execution is complete.")
```
"With whom do users initiate?" Mlogit Modeling
===
Multiple notes in other places about this...
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import re
import pandas as pd
import numpy as np
from collections import Counter, defaultdict
import sqlite3
from tqdm import tqdm
import random
import pickle
from datetime import datetime
import bisect
import matplotlib.pyplot as plt
import matplotlib.dates as md
import matplotlib
import pylab as pl
from IPython.core.display import display, HTML
import networkx as nx
import sys
sys.path.append("/home/srivbane/levon003/repos/qual-health-journeys/annotation_data")
import journal as journal_utils
working_dir = "/home/lana/shared/caringbridge/data/projects/sna-social-support/author_initiations"
assert os.path.exists(working_dir)
git_root_dir = !git rev-parse --show-toplevel
git_root_dir = git_root_dir[0]
figures_dir = os.path.join(git_root_dir, 'figures')
figures_dir
start_date = datetime.fromisoformat('2005-01-01')
start_timestamp = int(start_date.timestamp() * 1000)
end_date = datetime.fromisoformat('2016-06-01')
end_timestamp = int(end_date.timestamp() * 1000)
subset_start_date = datetime.fromisoformat('2014-01-01')
subset_start_timestamp = int(subset_start_date.timestamp() * 1000)
```
### Read in the data
```
# load the list of valid users
data_selection_working_dir = "/home/lana/shared/caringbridge/data/projects/sna-social-support/data_selection"
valid_user_ids = set()
with open(os.path.join(data_selection_working_dir, "valid_user_ids.txt"), 'r') as infile:
for line in infile:
user_id = line.strip()
if user_id == "":
continue
else:
valid_user_ids.add(int(user_id))
len(valid_user_ids)
# load the list of valid sites
data_selection_working_dir = "/home/lana/shared/caringbridge/data/projects/sna-social-support/data_selection"
valid_site_ids = set()
with open(os.path.join(data_selection_working_dir, "valid_site_ids.txt"), 'r') as infile:
for line in infile:
site_id = line.strip()
if site_id == "":
continue
else:
valid_site_ids.add(int(site_id))
len(valid_site_ids)
# read the journal metadata with author type info added
s = datetime.now()
author_type_dir = "/home/lana/shared/caringbridge/data/projects/sna-social-support/author_type"
journal_metadata_filepath = os.path.join(author_type_dir, "journal_metadata_with_author_type.df")
journal_df = pd.read_feather(journal_metadata_filepath)
print(datetime.now() - s)
len(journal_df)
# as a quick fix for invalid dates in journals, when created_at is 0 we use the updated_at instead
# note that only 41 updates have this issue
invalid_created_at = journal_df.created_at <= 0
journal_df.loc[invalid_created_at, 'created_at'] = journal_df.loc[invalid_created_at, 'updated_at']
health_cond_filepath = os.path.join("/home/lana/shared/caringbridge/data/projects/sna-social-support/user_metadata", "assigned_health_conditions.feather")
user_health_conds_df = pd.read_feather(health_cond_filepath)
len(user_health_conds_df)
# read the user author type dataframe
author_type_dir = "/home/lana/shared/caringbridge/data/projects/sna-social-support/author_type"
user_patient_proportions_filepath = os.path.join(author_type_dir, 'user_patient_proportions.df')
user_df = pd.read_feather(user_patient_proportions_filepath)
len(user_df)
# read the user->user interactions dataframe
metadata_dir = "/home/lana/shared/caringbridge/data/projects/sna-social-support/user_metadata"
u2u_df = pd.read_feather(os.path.join(metadata_dir,"u2u_df.feather"))
len(u2u_df)
# read the site-level metadata
site_metadata_working_dir = "/home/lana/shared/caringbridge/data/derived/site_metadata"
site_metadata_filepath = os.path.join(site_metadata_working_dir, "site_metadata.feather")
site_metadata_df = pd.read_feather(site_metadata_filepath)
len(site_metadata_df)
# currently not necessary, since we use the processed user->user interactions...
# read in the interactions dataframe
#metadata_dir = "/home/lana/shared/caringbridge/data/projects/sna-social-support/user_metadata"
#author_to_site = os.path.join(metadata_dir, "interaction_metadata.h5")
#ints_df = pd.read_hdf(author_to_site)
#len(ints_df)
```
## Compute and merge the features
```
user_df = user_df[user_df.user_id.isin(valid_user_ids)]
len(user_df)
user_df['is_multisite_author'] = user_df.num_sites > 1
np.sum(user_df.is_multisite_author) / len(user_df)
is_mixedsite_author_dict = {}
site_author_sets = journal_df[journal_df.user_id.isin(valid_user_ids)].groupby('site_id').agg({'user_id': lambda user_ids: set(user_ids)})
for site_id, user_ids in zip(site_author_sets.index, site_author_sets.user_id):
if len(user_ids) > 1:
for user_id in user_ids:
is_mixedsite_author_dict[user_id] = True
is_mixedsite_author = [user_id in is_mixedsite_author_dict for user_id in user_df.user_id]
user_df['is_mixedsite_author'] = is_mixedsite_author
# 21.8% of authors have written updates on a site on which another valid author has written an update
np.sum(is_mixedsite_author), np.sum(is_mixedsite_author) / len(is_mixedsite_author)
# merge in the health condition data
user_health_cond_dict = {user_id: assigned_health_cond for user_id, assigned_health_cond in zip(user_health_conds_df.user_id, user_health_conds_df.assigned_health_cond)}
health_condition = [user_health_cond_dict[user_id] for user_id in user_df.user_id]
user_df['health_condition'] = health_condition
# number of journal updates, first update, last update
user_updates_df = journal_df[journal_df.user_id.isin(valid_user_ids)].groupby('user_id').agg({
'journal_oid': lambda group: len(group),
'created_at': lambda created_ats: (np.min(created_ats), np.max(created_ats))
}).reset_index() # note that columns are not renamed appropriately, but are reused immediately
user_update_count_dict = {
user_id: count for user_id, count
in zip(user_updates_df.user_id, user_updates_df.journal_oid)}
user_first_update_dict = {
user_id: created_at[0] for user_id, created_at
in zip(user_updates_df.user_id, user_updates_df.created_at)}
user_last_update_dict = {
user_id: created_at[1] for user_id, created_at
in zip(user_updates_df.user_id, user_updates_df.created_at)}
update_count = [user_update_count_dict[user_id] for user_id in user_df.user_id]
first_update = [user_first_update_dict[user_id] for user_id in user_df.user_id]
last_update = [user_last_update_dict[user_id] for user_id in user_df.user_id]
user_df['update_count'] = update_count
user_df['first_update'] = first_update
user_df['last_update'] = last_update
user_df['author_tenure'] = user_df.last_update - user_df.first_update
assert np.all(user_df.author_tenure > 0)
# posting frequency (updates per month, across all sites)
tenure_in_months = user_df.author_tenure / (1000 * 60 * 60 * 24 * 30)
user_df['update_frequency'] = user_df.update_count / tenure_in_months
# is_interacted_with
# computed from the user->user interaction data
interacted_with_user_ids = set(u2u_df.to_user_id)
is_interacted_with = [user_id in interacted_with_user_ids for user_id in user_df.user_id]
user_df['is_interacted_with'] = is_interacted_with
np.sum(is_interacted_with), np.sum(is_interacted_with) / len(is_interacted_with)
# is this user an initiator at any point
initiating_user_ids = set(u2u_df.from_user_id)
is_initiator = [user_id in initiating_user_ids for user_id in user_df.user_id]
user_df['is_initiator'] = is_initiator
np.sum(is_initiator), np.sum(is_initiator) / len(is_initiator)
```
#### Compute the dictionary for user->(created_at)
```
user_updates_dict = journal_df.sort_values(by='created_at', ascending=True).groupby('user_id').agg({
'created_at': lambda created_at: created_at.tolist()
}).created_at.to_dict()
```
#### Compute the visits of the most-visited site authored by a user
```
# construct user->site dictionary
# contains all sites that authors have updated at least one journal update on
user_site_dict = defaultdict(set)
for row in tqdm(journal_df.itertuples(), total=len(journal_df)):
user_site_dict[row.user_id].add(row.site_id)
# construct site->visits dictionary
site_visits_dict = {site_id: visits for site_id, visits in zip(site_metadata_df.site_id, site_metadata_df.visits)}
# construct user->visits dictionary
# pools across multiple sites by taking the site with the maximum number of visits
user_visits_dict = {user_id: max(site_visits_dict[site_id] for site_id in user_site_dict[user_id] if site_id in site_visits_dict)
for user_id in user_df.user_id}
```
### Filter the u2u links
```
valid_u2u_df = u2u_df[(u2u_df.from_user_id.isin(valid_user_ids))&(u2u_df.to_user_id.isin(valid_user_ids))]
len(valid_u2u_df), len(valid_u2u_df) / len(u2u_df)
inits_df = valid_u2u_df.sort_values(by='created_at', ascending=True).drop_duplicates(subset=['from_user_id', 'to_user_id'], keep='first')
len(inits_df), len(inits_df) / len(u2u_df)
model_start_date = datetime.fromisoformat('2014-01-01')
model_start_timestamp = int(model_start_date.timestamp() * 1000)
model_end_date = datetime.fromisoformat('2016-01-01')
model_end_timestamp = int(model_end_date.timestamp() * 1000)
```
### Implementation of high-level graph code
```
class WccGraph:
def __init__(self, node_uids):
self.node_uids = node_uids
self.node_dict = {} # maps node_uid to component_uid
self.component_dict = {} # maps component_uid to a set of node_uids
for component_uid, node_uid in enumerate(self.node_uids):
self.node_dict[node_uid] = component_uid
self.component_dict[component_uid] = set((node_uid,))
self.edge_count = 0
def add_edge(self, from_node_uid, to_node_uid):
self.edge_count += 1
from_component_uid = self.node_dict[from_node_uid]
to_component_uid = self.node_dict[to_node_uid]
if from_component_uid == to_component_uid:
# these nodes are already weakly connected
is_intra_component_edge = True
from_component_size, to_component_size = 0, 0
else: # two different components are being merged with this edge
is_intra_component_edge = False
from_component_nodes = self.component_dict[from_component_uid]
to_component_nodes = self.component_dict[to_component_uid]
from_component_size = len(from_component_nodes)
to_component_size = len(to_component_nodes)
if from_component_size >= to_component_size:
# merge To component into From component, deleting the To component
from_component_nodes.update(to_component_nodes)
del self.component_dict[to_component_uid]
for node_uid in to_component_nodes:
# update the merged in component ids
self.node_dict[node_uid] = from_component_uid
else:
# merge From component into To component, deleting the From component
to_component_nodes.update(from_component_nodes)
del self.component_dict[from_component_uid]
for node_uid in from_component_nodes:
# update the merged in component ids
self.node_dict[node_uid] = to_component_uid
return is_intra_component_edge, from_component_size, to_component_size
def are_weakly_connected(self, user_id1, user_id2):
# two nodes are weakly connected if they exist in the same WCC
return self.node_dict[user_id1] == self.node_dict[user_id2]
def are_connected(G, source, target):
nodes = []
visited = set()
visited.add(source)
nodes.extend(G[source])
are_connected = False
while len(nodes) != 0:
node = nodes.pop(0)
if node in visited:
continue
visited.add(node)
if node == target:
are_connected = True
break
nodes.extend(G[node])
return are_connected
def are_strongly_connected(G, user_id1, user_id2):
# we assume that user_id1 and user_id2 are known to be weakly connected
# thus, the two are strongly connected if we can find a path from one to the other and back
if len(G[user_id1]) == 0 or len(G[user_id2]) == 0:
# if there are zero outbound edges from one of the nodes, they can't be strongly connected
return False
return are_connected(G, user_id1, user_id2) and are_connected(G, user_id2, user_id1)
def compute_is_friend_of_friend(G, user_id1, user_id2):
if len(G[user_id1]) == 0 or len(G[user_id2]) == 0:
        # if there are zero outbound edges from one of the nodes, they can't be friend-of-friend connected
return False
return are_fof_connected(G, user_id1, user_id2) and are_fof_connected(G, user_id2, user_id1)
def are_fof_connected(G, source, target):
# must be a direct connection from either source -> target, or from source -> neighbor -> target
if target in G[source]:
return True
for neighbor in G[source]:
if target in G[neighbor]:
return True
return False
```
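As a self-contained check, the friend-of-friend helper (repeated here verbatim) behaves as expected on a toy adjacency-dict graph:

```python
def are_fof_connected(G, source, target):
    # direct edge source -> target, or a two-hop path source -> neighbor -> target
    if target in G[source]:
        return True
    for neighbor in G[source]:
        if target in G[neighbor]:
            return True
    return False

G = {'a': {'b'}, 'b': {'c'}, 'c': set()}
assert are_fof_connected(G, 'a', 'b')      # direct edge
assert are_fof_connected(G, 'a', 'c')      # two-hop path a -> b -> c
assert not are_fof_connected(G, 'c', 'a')  # no outbound edges from c
```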
### Build the initial graph subset
```
inits_subset = inits_df[inits_df.created_at < model_start_timestamp]
len(inits_subset)
s = datetime.now()
base_graph = nx.DiGraph()
nodes = set(inits_subset.from_user_id) | set(inits_subset.to_user_id)
edges = [tuple(row) for row in inits_subset[["from_user_id", "to_user_id"]].values]
base_graph.add_nodes_from(nodes)
base_graph.add_edges_from(edges)
print(f"{datetime.now() - s}")
# this second graph tracks only weakly connected component info
s = datetime.now()
user_set = set(inits_df.from_user_id) | set(inits_df.to_user_id)
wcc_graph = WccGraph(user_set)
for from_user_id, to_user_id in inits_subset[["from_user_id", "to_user_id"]].values:
wcc_graph.add_edge(from_user_id, to_user_id)
print(f"{datetime.now() - s}")
G = base_graph.copy() # okay to edit this one
n = 20000
s = 24
# sample n initiations
# using s negative samples
# valid candidate users are ALL valid authors who have posted their first update by this time
inits_subset = inits_df[(inits_df.created_at >= model_start_timestamp)&(inits_df.created_at <= model_end_timestamp)]
inits_subset = inits_subset.sample(n=n).sort_values(by='created_at', ascending=True)
inits_subset.head()
user_df['time_to_first_update'] = user_df.first_update - model_start_timestamp
# if first update is positive, it is still in the future
# if first update is <= 0, then it should already be an eligible node
# however, it might not be in the network, since the base network only contains connected nodes
active_user_ids = user_df.loc[user_df.time_to_first_update <= 0, 'user_id']
len(active_user_ids) / len(user_df)
# create data structures storing all of the edges that do not yet but will exist in the model
# these will be added incrementally as computation continues
model_subset = inits_df[(inits_df.created_at >= model_start_timestamp)&(inits_df.created_at <= model_end_timestamp)]
all_edges = [(created_at, tuple(row))
for created_at, row
in zip(model_subset.created_at, model_subset[["from_user_id", "to_user_id"]].values)]
edge_df = pd.DataFrame(all_edges, columns=['created_at', 'edge'])
edge_df['time_to_existence'] = edge_df.created_at - model_start_timestamp
# if time_to_existence <= 0, it should exist in the network
assert np.all(edge_df.time_to_existence > 0)
len(edge_df)
prev_timestep = model_start_timestamp
active_user_ids = user_df.loc[user_df.time_to_first_update <= 0, 'user_id']
sampled_initiations = []
for from_user_id, to_user_id, created_at in tqdm(zip(inits_subset.from_user_id, inits_subset.to_user_id, inits_subset.created_at), total=len(inits_subset)):
curr_timestep = created_at
elapsed_time = curr_timestep - prev_timestep
if elapsed_time > 0: # if 2+ sampled initiations occur at the same time, elapsed_time == 0
# update the active users set
user_df.time_to_first_update -= elapsed_time
active_user_ids = user_df.loc[user_df.time_to_first_update <= 0, 'user_id']
# update the graph with all initiations between previous timestep and now
edge_df.time_to_existence -= elapsed_time
new_edge_mask = edge_df.time_to_existence < 0 # edges that exist AT zero happen at the current timestep, including the edge from_user_id, to_user_id
new_edges = edge_df[new_edge_mask]
edge_df = edge_df[~new_edge_mask] # TODO Use loc for assignment?
#assert np.all(edge_df[edge_df.time_to_existence==0].created_at == created_at)
G.add_edges_from(new_edges.edge)
# also add edges to the WCC graph
for from_user_id, to_user_id in new_edges.edge:
wcc_graph.add_edge(from_user_id, to_user_id)
# candidate users are all active users...
candidate_user_ids = set(active_user_ids)
# ... minus the true initiation target...
candidate_user_ids.discard(to_user_id)
# ... minus users already initiated to by this user
if from_user_id in G:
candidate_user_ids -= set(G[from_user_id].keys())
# we only sample s of the candidate users
negative_sampled_users = list(random.sample(candidate_user_ids, s))
# now, extract ids for the target user and all of the negative sampled users
indegree_list = []
outdegree_list = []
is_reciprocal_list = []
is_weakly_connected_list = []
is_friend_of_friend_list = []
#is_strongly_connected_list = []
for user_id in [to_user_id] + negative_sampled_users:
is_friend_of_friend = False
if user_id in G:
indegree = G.in_degree(user_id)
outdegree = G.out_degree(user_id)
is_reciprocal = from_user_id in G[user_id]
is_weakly_connected = wcc_graph.are_weakly_connected(from_user_id, user_id)
if is_weakly_connected:
is_friend_of_friend = compute_is_friend_of_friend(G, from_user_id, user_id)
#is_strongly_connected = are_strongly_connected(G, from_user_id, user_id)
else:
indegree = 0
outdegree = 0
is_reciprocal = False
is_weakly_connected = False
indegree_list.append(indegree)
outdegree_list.append(outdegree)
is_reciprocal_list.append(is_reciprocal)
is_weakly_connected_list.append(is_weakly_connected)
        #is_strongly_connected_list.append(is_strongly_connected)
is_friend_of_friend_list.append(is_friend_of_friend)
d = {
'initiator_user_id': from_user_id,
'target_user_id': to_user_id,
'negative_user_ids': negative_sampled_users,
'created_at': created_at,
'indegree_list': indegree_list,
'outdegree_list': outdegree_list,
'is_reciprocal_list': is_reciprocal_list,
'is_weakly_connected_list': is_weakly_connected_list,
'is_friend_of_friend_list': is_friend_of_friend_list
}
sampled_initiations.append(d)
prev_timestep = curr_timestep
sampled_inits_df = pd.DataFrame(sampled_initiations)
len(sampled_inits_df)
# save the sampled initiations dataframe with graph features
# so that the expensive graph feature computation can be saved
sampled_inits_df_filename = "sampled_inits_df.pickle"
sampled_inits_df_filepath = os.path.join(working_dir, sampled_inits_df_filename)
sampled_inits_df.to_pickle(sampled_inits_df_filepath)
print("Finished.")
# read the sampled initiations dataframe with graph features
sampled_inits_df_filename = "sampled_inits_df.pickle"
sampled_inits_df_filepath = os.path.join(working_dir, sampled_inits_df_filename)
sampled_inits_df = pd.read_pickle(sampled_inits_df_filepath)
len(sampled_inits_df)
sampled_inits_df.head()
# dictionaries for computing user-level features
author_type_dict = {row.user_id: row.user_author_type for row in user_df.itertuples()}
health_condition_dict = {row.user_id: row.health_condition for row in user_df.itertuples()}
is_multisite_author_dict = {row.user_id: row.is_multisite_author for row in user_df.itertuples()}
is_mixedsite_author_dict = {row.user_id: row.is_mixedsite_author for row in user_df.itertuples()}
update_count_dict = {row.user_id: row.update_count for row in user_df.itertuples()}
update_frequency_dict = {row.user_id: row.update_frequency for row in user_df.itertuples()}
# compute days_since_most_recent_update
# given a target user_id and a created_at timestamp
def get_most_recent_update(user_id, created_at):
update_times = user_updates_dict[user_id]
# update_times is a sorted list of created_at times for all updates by the given user_id
ind = bisect.bisect_right(update_times, created_at)
most_recent_update = update_times[ind-1]
return most_recent_update
def compute_days_since_most_recent_update(user_id, created_at):
most_recent_update = get_most_recent_update(user_id, created_at)
ms_since_most_recent_update = created_at - most_recent_update
days_since_most_recent_update = ms_since_most_recent_update / (1000 * 60 * 60 * 24)
return days_since_most_recent_update
def compute_days_since_first_update(user_id, created_at):
update_times = user_updates_dict[user_id]
ind = bisect.bisect_right(update_times, created_at)
most_recent_update = update_times[ind-1]
first_update = update_times[0]
ms_since_first_update = most_recent_update - first_update
days_since_first_update = ms_since_first_update / (1000 * 60 * 60 * 24)
return days_since_first_update
sampled_initiations_filename = "author_initiation_choices_train_20000.csv"
sampled_initiations_filepath = os.path.join(working_dir, sampled_initiations_filename)
with open(sampled_initiations_filepath, 'w') as outfile:
header = """
choice_id,
initiator_user_id,
candidate_user_id,
is_target,
target_outdegree,
target_indegree,
target_has_indegree,
is_reciprocal,
is_weakly_connected,
is_friend_of_friend,
is_author_type_shared,
target_author_type,
initiator_author_type,
target_health_condition,
is_health_condition_shared,
target_is_multisite_author,
target_is_mixedsite_author,
target_update_count,
target_update_frequency,
target_days_since_most_recent_update,
target_days_since_first_update,
target_site_visits
"""
header = re.sub(r'\s+', '', header).strip() + "\n"
format_str = "iiiiiiiiiiiccciiiidddi"
outfile.write(header)
for i, row in tqdm(enumerate(sampled_inits_df.itertuples()), total=len(sampled_inits_df)):
choice_id = i
initiator_user_id = row.initiator_user_id
initiator_author_type = author_type_dict[initiator_user_id]
initiator_health_condition = health_condition_dict[initiator_user_id]
for i, user_id in enumerate([row.target_user_id] + row.negative_user_ids):
is_target = int(i == 0)
candidate_user_id = user_id
target_outdegree = row.outdegree_list[i]
target_indegree = row.indegree_list[i]
target_has_indegree = int(target_indegree > 0)
is_reciprocal = int(row.is_reciprocal_list[i])
is_weakly_connected = int(row.is_weakly_connected_list[i])
is_friend_of_friend = int(row.is_friend_of_friend_list[i])
# Include the user-level features for the candidates
target_author_type = author_type_dict[candidate_user_id]
is_author_type_shared = int(initiator_author_type == target_author_type)
target_health_condition = health_condition_dict[candidate_user_id]
is_health_condition_shared = int(initiator_health_condition == target_health_condition)
target_is_multisite_author = int(is_multisite_author_dict[candidate_user_id])
target_is_mixedsite_author = int(is_mixedsite_author_dict[candidate_user_id])
target_update_count = update_count_dict[candidate_user_id]
target_update_frequency = update_frequency_dict[candidate_user_id]
target_days_since_most_recent_update = compute_days_since_most_recent_update(candidate_user_id, row.created_at)
target_days_since_first_update = compute_days_since_first_update(candidate_user_id, row.created_at)
target_site_visits = user_visits_dict[candidate_user_id]
line_vars = [
choice_id,
initiator_user_id,
candidate_user_id,
is_target,
target_outdegree,
target_indegree,
target_has_indegree,
is_reciprocal,
is_weakly_connected,
is_friend_of_friend,
is_author_type_shared,
target_author_type,
initiator_author_type,
target_health_condition,
is_health_condition_shared,
target_is_multisite_author,
target_is_mixedsite_author,
target_update_count,
target_update_frequency,
target_days_since_most_recent_update,
target_days_since_first_update,
target_site_visits
]
line = ",".join([str(v) for v in line_vars]) + "\n"
#line = f"{choice_id},{initiator_user_id},{candidate_user_id},{is_target},{target_outdegree},{target_indegree},{target_has_indegree},{is_reciprocal},{is_author_type_shared},{target_author_type},{initiator_author_type}\n"
outfile.write(line)
print(f"R column types format string: {format_str}")
sampled_initiations_filepath
# TODO create test set as well
```
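The `bisect_right` lookup inside `get_most_recent_update` above can be checked in isolation (a toy sketch with made-up timestamps):

```python
import bisect

# sorted created_at timestamps (ms) for one user's updates
update_times = [10, 20, 30]

# bisect_right returns the insertion point to the RIGHT of equal values,
# so update_times[ind - 1] is the most recent update at or before created_at
ind = bisect.bisect_right(update_times, 25)
assert update_times[ind - 1] == 20

ind = bisect.bisect_right(update_times, 20)
assert update_times[ind - 1] == 20  # an update exactly at created_at counts
```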
## <center> Solving Linear Systems Using `NumPy` </center>##
```
import numpy as np
```
### Linear Systems ###
An $m\times n$ [linear system of equations](https://en.wikipedia.org/wiki/System_of_linear_equations) is a collection of linear equations:
$$
\begin{eqnarray*}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &=& b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &=& b_2 \\
\vdots \hspace{0.5in} & & \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &=& b_m \\
\end{eqnarray*}
$$
In matrix notation, a linear system is $A\vec{x} = \vec{b}$, where
$$
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & & \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix},\ \ \ \
\vec{x} = \left[ \begin{array}{c}
x_1 \\ x_2 \\ \vdots \\ x_n
\end{array}
\right], \ \ \ \
\vec{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
$$
The corresponding $m\times (n+1)$ augmented matrix is
$$ [\mathrm{A}|\vec{b}]\ = \left[ \begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2\\
\vdots & & & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m\\
\end{array}
\right]
$$
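In NumPy, an augmented matrix can be assembled with `np.hstack` (a small sketch for the $2\times 2$ system used below):

```python
import numpy as np

A = np.array([[1, 1], [1, -1]])
b = np.array([[2], [0]])    # right-hand side as a column vector
Ab = np.hstack([A, b])      # the augmented matrix [A|b]
print(Ab)
assert Ab.shape == (2, 3)
```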
### Solving Linear Systems with `numpy.linalg.solve` or `numpy.linalg.inv` ###
If a coefficient matrix $A$ is square and nonsingular (i.e., $\mathrm{det}\,A\neq 0$), we can use the function [numpy.linalg.solve](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html#numpy.linalg.solve) or [numpy.linalg.inv](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html#numpy.linalg.inv). (For more `numpy.linalg` functions [see](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html#module-numpy.linalg) this.)
The functions return a solution of the system $A\vec{x}=\vec{b}$. For example,
```
A = np.array([[1,1],[1,-1]])
print(A)
print("")
print("det(A) = ", np.linalg.det(A))
```
Since $\mathrm{det}\,A\neq 0$, the linear system has a unique solution and we may use `numpy.linalg.solve` to find the solution.
```
b1 = np.array([2,0])
print(b1)
x1 = np.linalg.solve(A,b1)
print(x1)
```
Note that the output is returned as a 1D NumPy array when the right-hand-side vector is entered as a 1D NumPy array. If we input it as a 2D NumPy array (a column vector), then the output is a 2D NumPy array. For example:
```
A = np.array([[1,1],[1,-1]])
b2 = np.array([2,0]).reshape(2,1)
x2 = np.linalg.solve(A,b2)
print(x2)
```
If we want to use `numpy.linalg.inv`,
```
A_inv = np.linalg.inv(A)
print(A_inv)
```
And multiply $A^{-1}\vec{b}$ to find $\vec{x}$:
```
x = A_inv @ b1
print(x)
```
We get the same result.
**Inverse or Solve ?** It's not a good idea to use the inverse $A^{-1}$ to solve $A\vec{x} = \vec{b}$ if $A$ is large. It's too computationally expensive. Let's create a large random matrix $A$ and a vector $\vec{b}$, and compute the solution $\vec{x}$ in two ways:
```
N = 1000
B = np.random.rand(N,N)
b3 = np.random.rand(N,1)
```
Check the first entries of $B$ and $\vec{b}_3$:
```
print(B[:3,:3])
print(" ")
print(b3[:3,:])
```
Now, we compare the speed of `numpy.linalg.solve` with `numpy.linalg.inv`.
```
%%timeit
x = np.linalg.solve(B,b3)
%%timeit
x = np.linalg.inv(B) @ b3
```
Solving with `numpy.linalg.solve` is about three times faster. (The exact speedup may differ depending on your machine.)
**Remark:** If $A$ is not square, or is singular, then there are other `numpy.linalg` functions to be employed. For example, `np.linalg.lstsq`.
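For instance, `np.linalg.lstsq` returns a least-squares solution of an overdetermined system (a minimal sketch):

```python
import numpy as np

# three equations, two unknowns: fit c0 + c1*t to the points (0,0), (1,1), (2,2)
A = np.array([[1., 0.], [1., 1.], [1., 2.]])
b = np.array([0., 1., 2.])
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)  # least-squares solution; here the fit is exact
assert np.allclose(x, [0., 1.])
```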
### Gaussian Elimination###
The general procedure to solve a linear system of equation is called [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination). This method is for any dimensions of matrices, including non-square matrices. The idea is to perform [elementary row operations](https://en.wikipedia.org/wiki/Elementary_matrix#Elementary_row_operations) to reduce the system to its row echelon form and then solve. Elementary row operations are
1. Interchange rows $i$ and $j$ ($R_i\leftrightarrow R_j$).
2. Multiply row $i$ by a *nonzero* scalar $k$ ($kR_i\rightarrow R_i$).
3. Add a *nonzero* scalar $k$ times row $i$ to row $j$ ($kR_i+R_j\rightarrow R_j$), where $i\neq j$.
### Elementary Matrices###
Each of the elementary row operations is the result of matrix multiplication by an elementary matrix (on the left).
[1] For $R_i\leftrightarrow R_j$ in an $m\times n$ matrix $A$, we multiply $A$ by an $m\times m$ matrix $E$ where $E$ is equal to the identity matrix $I_m$ except $E_{ii}=E_{jj}=0,$ and $E_{ij}=E_{ji}=1$. (Equivalently, we can interchange the $i$-th row and $j$-th row of the identity matrix $I$ to get $E$.) For example, if $A$ is $3\times 4$ and we want to switch $R_2$ and $R_3$, then
$$
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix}
$$
Let's verify it:
```
A1 = np.array([[1,1,2, -1],[0, -1,3,1],[1,0,5,2]])
print("A is ")
print(A1)
print(" ")
E1 = np.array([[1,0,0],[0,0,1],[0,1,0]])
print("E1 is")
print(E1)
print(" ")
A1_new = E1 @ A1
print("A1_new is ")
print(A1_new)
```
[2] For $kR_i\rightarrow R_i$ in an $m\times n$ matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix $I_m$ except $E_{ii}=k$. For example, if $A$ is 4 by 2 and we want to multiply row 2 by -2 then
$$
\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & -2 & 0 &0\\ 0 & 0 & 1 &0 \\
0& 0 &0 &1 \end{bmatrix}
$$
Let's verify it:
```
A2 = np.array([[2, -1],[0, -1],[3,1],[1,0]])
print("A2 is ")
print(A2)
print(" ")
E2 = np.array([[1,0,0,0],[0,-2,0,0],[0,0,1,0],[0,0,0,1]])
print("E2 is")
print(E2)
print(" ")
A2_new = E2 @ A2
print("A2_new is ")
print(A2_new)
```
[3] For $kR_i+R_j\rightarrow R_j$ in an $m\times n$ matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix $I_m$ except $E_{ji}=k$. For example, if $A$ is 3 by 3 and we want to add 3 times $R_3$ to row $R_1$ (i.e., $3R_3+R_1\rightarrow R_1$), then
$$
\begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}
$$
Let's verify it:
```
A3 = np.array([[1,1,2],[-1,3,1],[0,5,2]])
print("A3 is ")
print(A3)
print(" ")
E3 = np.array([[1,0,3],[0,1,0],[0,0,1]])
print("E3 is")
print(E3)
print(" ")
A3_new = E3 @ A3
print("A3_new is ")
print(A3_new)
```
### Implementation ###
Let's write functions to implement the elementary row operations.
[1] First of all, let's write a function called `switch_rows` which takes 3 input parameters $A, i$ and $j$ and returns the matrix that results from interchanging rows $i$ and $j$ in the matrix $A$.
```
def switch_rows(A,i,j):
    "Switch rows i and j in matrix A."
m = A.shape[0]
E = np.eye(m)
E[i,i] = 0
E[j,j] = 0
E[i,j] = 1
E[j,i] = 1
return E @ A
A4 = np.array([[1,1,1],[1,-1,0]])
print(A4)
switch_rows(A4,0,1) # Note that indexing in Python starts from 0, not 1.
```
[2] Second, let's write a function called `scale_row` which takes 3 input parameters $A, k,$ and $i$ and returns the matrix that results from multiplying row $i$ by $k$ in the matrix $A$.
```
def scale_row(A,k,i):
"Multiply row i by k in matrix A"
m = A.shape[0]
E = np.eye(m)
E[i,i] = k
return E @ A
A5 = np.array([[3,1],[-2,7]])
print(A5)
scale_row(A5,3,1)
```
[3] Finally, let's write a function called `add_row` which takes input parameters $A, k, i$ and $j$ and returns the NumPy array resulting from adding $k$ times row $i$ to row $j$ in the matrix $A$. Note that we expect $i\neq j$, so we have to print a warning message.
```
def add_row(A,k,i,j):
"Add k times row i to row j in matrix A."
m = A.shape[0]
E = np.eye(m)
if i==j:
print("Warning: i=j. i and j must be different.")
return None
else:
E[j,i] = k
return E @ A
A6 = np.array([[1,1],[3,2]])
print(A6)
add_row(A6,2,0,0) # i=j here, which triggers the warning
```
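Putting the helpers together, we can row-reduce a small augmented system by hand (the compact `scale_row`/`add_row` definitions below mirror the ones above, so the block is self-contained):

```python
import numpy as np

def scale_row(A, k, i):
    "Multiply row i by k in matrix A."
    E = np.eye(A.shape[0]); E[i, i] = k
    return E @ A

def add_row(A, k, i, j):
    "Add k times row i to row j in matrix A (i != j)."
    E = np.eye(A.shape[0]); E[j, i] = k
    return E @ A

# augmented matrix for: x + y = 3, 2x - y = 0
M = np.array([[1., 1., 3.], [2., -1., 0.]])
M = add_row(M, -2, 0, 1)    # R2 <- -2*R1 + R2  -> [[1,1,3],[0,-3,-6]]
M = scale_row(M, -1/3, 1)   # R2 <- -R2/3       -> [[1,1,3],[0,1,2]]
M = add_row(M, -1, 1, 0)    # R1 <- -R2 + R1    -> [[1,0,1],[0,1,2]]
print(M)                    # reduced form: x = 1, y = 2
assert np.allclose(M, [[1., 0., 1.], [0., 1., 2.]])
```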
## Exercises ##
1. You want to make a certain kind of tropical punch, using bananas, oranges, and papayas. Suppose you don't know how many of each to put in the punch, but you know that there are seven pieces of fruit in the mix, and there are twice as many oranges as bananas. You also know that the seven pieces of fruit cost 15.25 Dhs, where bananas cost 1.5 Dhs each, oranges cost 2.25 Dhs, and papayas cost 3.75 Dhs each.
(a) Construct a system of equation describing the problem.
(b) Solve the system using `numpy.linalg.solve`.
(c) How many bananas, oranges and papayas would you need to make this punch?
2. Consider the *tridiagonal* linear system
$$ \begin{eqnarray*}
3x_1 - x_2 &=& 2\\
-x_1+3x_2-x_3 & = & 1\\
&\vdots & \\
-x_{n-2}+3x_{n-1}-x_{n} &=& 1\\
-x_{n-1}+3x_n &=& 2
\end{eqnarray*}
$$
(a) Compare the speed of `numpy.linalg.solve` with `numpy.linalg.inv` when solving the linear system for $n=10$.
(b) Compare the speed of `numpy.linalg.solve` with `numpy.linalg.inv` when solving the linear system for $n=100$.
3. Use the functions defined in this notebook (`switch_rows`, `scale_row`, and `add_row`) to find the general solution of the system
$$ \begin{eqnarray*}
3x_2 - 6x_3 + 6x_4 +4x_5 &=& -5\\
3x_1 - 7x_2 + 8x_3 - 5x_4 + 8x_5 &=& 9\\
3x_1 - 9x_2 + 12x_3 - 9x_4 +6x_5 &=& 15
\end{eqnarray*}
$$
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%203%20-%20NLP/Course%203%20-%20Week%202%20-%20Lesson%203.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# NOTE: PLEASE MAKE SURE YOU ARE RUNNING THIS IN A PYTHON3 ENVIRONMENT
import tensorflow as tf
print(tf.__version__)
# Double check TF 2.0x is installed. If you ran the above block, there was a
# 'reset all runtimes' button at the bottom that you needed to press
import tensorflow as tf
print(tf.__version__)
# If the import fails, run this
# !pip install -q tensorflow-datasets
import tensorflow_datasets as tfds
imdb, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
train_data, test_data = imdb['train'], imdb['test']
tokenizer = info.features['text'].encoder
print(tokenizer.subwords)
sample_string = 'TensorFlow, from basics to mastery'
tokenized_string = tokenizer.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer.decode([ts])))
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_data.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(train_dataset))
test_dataset = test_data.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(test_data))
embedding_dim = 64
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(6, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
num_epochs = 10
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
history = model.fit(train_dataset, epochs=num_epochs, validation_data=test_dataset)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, tokenizer.vocab_size):
word = tokenizer.decode([word_num])
embeddings = weights[word_num]
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs.tsv')
files.download('meta.tsv')
```
### ``Regularization`` - Keras
* **Regularization** is a set of techniques that can prevent overfitting in neural networks and thus improve the accuracy of a Deep Learning model when facing completely new data from the problem domain.
1. **Overfitting**
One of the most important aspects when training neural networks is avoiding overfitting. **Overfitting** refers to the phenomenon where a neural network models the training data very well but fails when it sees new data from the same problem domain. **Overfitting** is caused by noise in the training data that the neural network picks up during training and learns it as an underlying concept of the data.
2. What is **Regularization**?
Regularization refers to a set of different techniques that lower the complexity of a neural network model during training, and thus prevent the overfitting. There are `3` main techniques that can be used for regularization which are:
* **L1** regularization
* **L2** regularization
* **Dropout**
2.1 **L2 Regularization** -
The L2 regularization is the most common type of all regularization techniques and is also commonly known as weight decay or Ridge Regression. Performing L2 regularization encourages the weight values towards zero **(but not exactly zero)**.
The L2 regularization penalty is computed as:
````
loss = l2 * reduce_sum(square(x))
````
2.2 **L1 Regularization** -
In the case of L1 regularization (also known as Lasso regression), we simply use another regularization term Ω. This term is the sum of the absolute values of the weight parameters in a weight matrix. During L1 regularization, the weights are forced all the way towards zero. Performing L1 regularization encourages the weight values to be exactly zero.
The L1 regularization penalty is computed as:
````
loss = l1 * reduce_sum(abs(x))
````
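Both penalty formulas can be checked directly with NumPy (a sketch of the math on a toy weight vector, not the Keras implementation):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])   # a toy weight vector
l1, l2 = 0.01, 0.01

l1_penalty = l1 * np.sum(np.abs(w))     # l1 * reduce_sum(abs(x))
l2_penalty = l2 * np.sum(np.square(w))  # l2 * reduce_sum(square(x))

assert np.isclose(l1_penalty, 0.01 * 3.5)
assert np.isclose(l2_penalty, 0.01 * 5.25)
```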
2.3 **Dropout**
In a nutshell, dropout means that during training, each neuron of the neural network gets turned off with some probability P.
### Notes:
* Overfitting occurs in more complex neural network models (many layers, many neurons)
* Complexity of the neural network can be reduced by using L1 and L2 regularization as well as dropout.
* L1 regularization forces the weight parameters to become zero.
* L2 regularization forces the weight parameters towards zero (but never exactly zero).
* Smaller weight parameters make some neurons neglectable → neural network becomes less complex → less overfitting
* During dropout, some neurons get deactivated with a random probability P → Neural network becomes less complex → less overfitting
1. [Ref](https://towardsdatascience.com/regularization-in-deep-learning-l1-l2-and-dropout-377e75acc036)
2. [Ref](https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/)
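The mechanism can be sketched in NumPy (the inverted-dropout form, where surviving activations are scaled by $1/(1-P)$ so the expected activation is unchanged; this mirrors what Keras does internally):

```python
import numpy as np

def dropout(x, p, rng):
    # with probability p each activation is zeroed; survivors are
    # scaled by 1/(1-p) so the expected activation is unchanged
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones((4, 5))
out = dropout(x, 0.5, rng)
# every entry is either dropped (0) or scaled up (1 / 0.5 = 2)
assert set(np.unique(out)) <= {0.0, 2.0}
```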
### Applying a dropout layer.
The following snippet shows how we can apply a dropout layer as a way of reducing overfitting in keras.
```python
model = keras.Sequential([
...
keras.layers.Dense(100, activation=None),
  keras.layers.Dropout(0.5)
...
])
```
### Applying `l1` or `l2` regularization.
These regularizations are applied during layer creation. The following snippet shows how we can apply them.
* Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes.
* Regularization penalties are applied on a per-layer basis. The exact API will depend on the layer, but many layers (e.g. ``Dense``, ``Conv1D``, ``Conv2D`` and ``Conv3D``) have a unified API.
These layers expose 3 keyword arguments:
1. ``kernel_regularizer``: Regularizer to apply a penalty on the layer's kernel
2. ``bias_regularizer``: Regularizer to apply a penalty on the layer's bias
3. ``activity_regularizer``: Regularizer to apply a penalty on the layer's output
```python
from tensorflow.keras import regularizers
....
model = keras.Sequential([
...
keras.layers.Dense(100, activation=None,
kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
bias_regularizer=regularizers.l2(1e-4),
activity_regularizer=regularizers.l2(1e-5)
),
  keras.layers.Dropout(0.5)
...
])
```
* [Ref](https://keras.io/api/layers/regularizers/)
### Creating custom regularizers
```python
def my_regularizer(x):
return 1e-3 * tf.reduce_sum(tf.square(x))
```
Subclassing:
```python
class MyRegularizer(regularizers.Regularizer):
def __init__(self, strength):
self.strength = strength
def __call__(self, x):
return self.strength * tf.reduce_sum(tf.square(x))
## get_config
def get_config(self):
return {'strength': self.strength}
```
* [Ref](https://keras.io/api/layers/regularizers/)
### Practical Example `cifar10` dataset from `Keras`
### Imports
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets
from sklearn.metrics import confusion_matrix
import itertools
```
### Data Prep
```
(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()
X_train.shape
class_names = [
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]
class_names
```
> ``Normalize`` the features and convert the features and labels to tensorflow `tensors`
```
X_test_tensors = tf.convert_to_tensor(X_test/255., dtype=tf.float32)
X_train_tensors = tf.convert_to_tensor(X_train/255., dtype=tf.float32)
y_test_tensors = tf.one_hot(tf.squeeze(y_test), depth=10)
y_train_tensors = tf.one_hot(tf.squeeze(y_train), depth=10)
```
> Displaying some sample image examples with their corresponding labels.
```
def display_images(images_and_classes, labels, cols=5):
rows = 3
fig = plt.figure()
fig.set_size_inches(cols * 2, rows * 2)
for i, (image, label) in enumerate(zip(images_and_classes, labels)):
plt.subplot(rows, cols, i + 1)
plt.axis('off')
plt.imshow(image)
plt.title(class_names[label], fontsize=16, c="green")
display_images(X_test_tensors[:15], tf.squeeze(y_test))
```
> Applying the `Regularization` concept to the model.
```
model = keras.Sequential([
keras.layers.Input(shape=(32, 32, 3)),
keras.layers.Conv2D(64, 3, activation='relu',
kernel_regularizer =keras.regularizers.l2(0.01),
),
keras.layers.Conv2D(128, 3, activation='relu',
kernel_regularizer= keras.regularizers.l2(l2=0.01)
),
keras.layers.MaxPool2D(pool_size=(2, 2)),
keras.layers.Conv2D(256, 3, activation='relu',
kernel_regularizer = keras.regularizers.l2(0.01)
),
keras.layers.Conv2D(64, 3, activation='relu',
kernel_regularizer = keras.regularizers.l2(0.01)
),
keras.layers.MaxPool2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(64, activation='relu',
kernel_regularizer= keras.regularizers.l2(0.01)
),
keras.layers.Dense(10, activation='softmax'),
], name='my_model')
model.compile(
loss = "categorical_crossentropy",
metrics=['acc'],
optimizer = 'adam'
)
model.summary()
history = model.fit(X_train_tensors, y_train_tensors, batch_size=64, epochs=20, verbose=1,
validation_data=(X_test_tensors, y_test_tensors)
)
```
### Plotting the model training `history`
```
import pandas as pd
pd.DataFrame(history.history).plot(xlabel="epochs", title="Model History")
```
> As we can see, the `L2` regularization reduces overfitting; although our goal wasn't raw model accuracy, we managed to handle the overfitting problem over `20` epochs of training.
# 2.3 The Gaussian Distribution
The Gaussian distribution, also known as the normal distribution, is a widely used model for continuous variables. In one dimension it takes the form:
$$
\mathcal{N}\left(x\left|~\mu,\sigma^2\right.\right) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left\{-\frac{1}{2\sigma^2}(x-\mu)^2\right\}
$$
where $\mu$ is the mean and $\sigma^2$ is the variance.
The $D$-dimensional Gaussian distribution takes the form:
$$
\mathcal{N}\left(\mathbf x\left|~\mathbf{\mu, \Sigma}\right.\right) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf\Sigma|^{1/2}} \exp \left\{-\frac{1}{2}(\mathbf x - \mathbf \mu)^\top\mathbf\Sigma^{-1}(\mathbf x - \mathbf \mu)\right\}
$$
where the $D$-dimensional vector $\mathbf \mu$ is the mean, the $D\times D$ matrix $\mathbf\Sigma$ is the covariance, and $|\mathbf\Sigma|$ denotes its determinant.
As we saw earlier, the Gaussian is the maximum-entropy continuous distribution for a fixed mean and variance, which is one reason it is used so widely.
The central limit theorem also tells us that, for a set of samples $x_1, \dots, x_N$ drawn from some distribution, the distribution of their mean $(x_1+\dots+x_N)/N$ approaches a Gaussian as $N$ increases.
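As a quick numerical sketch of the central limit theorem (an added illustration, not from the original text), we can draw sample means from a uniform distribution, whose standard deviation is $\sqrt{1/12}$, and check that the spread of the means shrinks like $\sigma/\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def std_of_sample_means(n, trials=20000):
    # draw `trials` independent samples of size n from Uniform(0, 1)
    # and return the standard deviation of their sample means
    samples = rng.uniform(0.0, 1.0, size=(trials, n))
    return samples.mean(axis=1).std()

sigma = np.sqrt(1.0 / 12.0)  # standard deviation of Uniform(0, 1)
for n in [1, 4, 16, 64]:
    print(n, std_of_sample_means(n), sigma / np.sqrt(n))
```

The two printed columns track each other closely, and a histogram of the means for large $n$ would look increasingly Gaussian.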
### Verifying that the Gaussian is a probability distribution
Let us now verify that the multivariate Gaussian is a valid probability distribution.
Consider the quadratic form in $\mathbf x$ that appears in the exponent:
$$
\Delta^2 = (\mathbf x - \mathbf \mu)^\top\mathbf\Sigma^{-1}(\mathbf x - \mathbf \mu)
$$
Here $\Delta$ is called the Mahalanobis distance between $\mathbf \mu$ and $\mathbf x$; when $\mathbf \Sigma$ is the identity matrix it reduces to the Euclidean distance.
The density of the Gaussian is constant on surfaces where $\Delta^2$ is constant.
Without loss of generality, we only need to consider the case where $\mathbf\Sigma$ is symmetric.
Indeed, if $\mathbf\Sigma$ were not symmetric, consider its inverse $\mathbf\Lambda$, which can always be written as the sum of a symmetric and an antisymmetric matrix:
$$\mathbf\Lambda = \mathbf\Lambda^S + \mathbf\Lambda^A =\frac{\mathbf{\Lambda + \Lambda^\top}}{2} + \frac{\mathbf{\Lambda - \Lambda^\top}}{2}$$
The antisymmetric part has zeros on its diagonal, so its quadratic form vanishes. Therefore:
$$
(\mathbf x - \mathbf \mu)^\top\mathbf\Lambda(\mathbf x - \mathbf \mu) = (\mathbf x - \mathbf \mu)^\top\mathbf\Lambda^S(\mathbf x - \mathbf \mu)
$$
So $\mathbf\Lambda$ is equivalent to $\mathbf\Lambda^S$, and since the inverse of a symmetric matrix is also symmetric, it suffices to consider symmetric $\mathbf\Sigma$.
For a real symmetric matrix, we can find a set of orthonormal eigenvectors such that:
$$
\mathbf{\Sigma u}_i = \lambda_i \mathbf u_i, \qquad \mathbf u_i^\top\mathbf u_j = I_{ij}, \qquad i,j=1,\dots,D,
$$
where $I_{ij}$ denotes the element in row $i$, column $j$ of the identity matrix:
$$
I_{ij} = \left\{\begin{align}
1, &~i=j \\ 0, &~i \neq j
\end{align}\right.
$$
Thus $\mathbf\Sigma$ can be written as
$$
\mathbf\Sigma = \sum_{i=1}^D \lambda_i \mathbf u_i\mathbf u_i^\top = \mathbf{U\Lambda U}^\top
$$
where $\mathbf\Lambda =\text{diag}(\lambda_1, \dots,\lambda_D)$ is a diagonal matrix and $\mathbf U$ is the orthogonal matrix whose columns are $\mathbf u_1,\dots,\mathbf u_D$, satisfying $\mathbf U^\top\mathbf U = \mathbf U\mathbf U^\top = \mathbf I$.
By orthogonality:
$$
\mathbf\Sigma^{-1} = (\mathbf{U\Lambda U}^\top)^{-1} = (\mathbf U^\top)^{-1}\mathbf\Lambda^{-1} \mathbf U^{-1} = \mathbf U \mathbf\Lambda^{-1} \mathbf U^\top = \sum_{i=1}^D \frac{1}{\lambda_i} \mathbf u_i\mathbf u_i^\top
$$
Let $y_i=\mathbf u_i^\top (\mathbf{x-\mu})$; then
$$
\Delta^2 = \sum_{i=1}^D \frac{y_i^2}{\lambda_i}
$$
or, in vector form:
$$
\mathbf y = \mathbf{U}^\top\mathbf{(x-\mu)}
$$
The Mahalanobis distance therefore corresponds to translating $\mathbf{x}$ by $-\mathbf\mu$ and then rescaling along each direction $\mathbf u_i$ by a factor of $\lambda_i^{1/2}$.
For the Gaussian to be well defined, all the $\lambda_i$ must be strictly positive, i.e. $\mathbf\Sigma$ must be positive definite.
Now consider the form of the Gaussian in the new coordinate system. By the orthogonality of $\mathbf U$, we have
$$
\mathbf x = \mathbf{U} \mathbf{y} + \mu
$$
so the Jacobian matrix of $\mathbf x$ with respect to $\mathbf y$ is:
$$
J_{ij} = \frac{\partial x_i}{\partial y_j} = U_{ij}
$$
and the absolute value of its determinant is 1:
$$
|\mathbf J|^2 = \left|\mathbf U\right|^2 = \left|\mathbf U\right|\left|\mathbf U^\top\right| = \left|\mathbf U\mathbf U^\top\right| = |\mathbf I| = 1
$$
On the other hand, the determinant of $\mathbf\Sigma$ can be written as the product of its eigenvalues:
$$
|\mathbf\Sigma| = \prod_{j=1}^D \lambda_j
$$
So in the new coordinate system the distribution is:
$$
p(\mathbf y) = p(\mathbf x)|\mathbf J|=\prod_{j=1}^D \frac{1}{(2\pi\lambda_j)^{1/2}}\exp\left(-\frac{y_j^2}{2\lambda_j}\right)
$$
which is the product of $D$ independent univariate Gaussian distributions.
We have already shown that a univariate Gaussian integrates to 1, so:
$$
\int p(\mathbf y)\, d\mathbf y = \prod_{j=1}^D \int_{-\infty}^{\infty} \frac{1}{(2\pi\lambda_j)^{1/2}}\exp\left(-\frac{y_j^2}{2\lambda_j}\right) dy_j = 1
$$
The multivariate Gaussian is therefore normalized; non-negativity is obvious, so it is indeed a probability distribution.
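As a small numeric sanity check (added here as a sketch; it assumes `scipy` is available), we can verify the eigendecomposition $\mathbf\Sigma = \mathbf{U\Lambda U}^\top$ and confirm on a grid that a 2-D Gaussian density integrates to approximately 1:

```python
import numpy as np
from scipy.stats import multivariate_normal

# a positive-definite covariance matrix and its eigendecomposition
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
lam, U = np.linalg.eigh(Sigma)
print(np.allclose(U @ np.diag(lam) @ U.T, Sigma))  # True: Sigma = U diag(lam) U^T

# numerically integrate the density over a grid: should be close to 1
xs = np.linspace(-8, 8, 400)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
pdf = multivariate_normal(mean=[0, 0], cov=Sigma).pdf(np.dstack((X, Y)))
total = pdf.sum() * dx * dx
print(total)
```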
```
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots()
tt = np.linspace(-np.pi, np.pi, 100)
xx = 1 * np.sin(tt)
yy = 2 * np.cos(tt)
ax.plot(- xx * np.sin(np.pi/6) - yy * np.cos(np.pi/6),
xx * np.cos(np.pi/6) - yy * np.sin(np.pi/6), 'r')
ax.set_xlim([-3, 3])
ax.set_ylim([-2, 2])
ax.set_xlabel(r"$x_1$", fontsize="xx-large")
ax.set_ylabel(r"$x_2$", fontsize="xx-large")
ax.set_xticks([])
ax.set_yticks([])
ax.text(0, 0.3, r"$\mathbf{\mu}$", fontsize="xx-large")
ax.annotate(r"$y_1$",
fontsize="xx-large",
xy=(-2.5, -2.5 / np.sqrt(3)), xycoords='data',
xytext=(2.5, 2.5 / np.sqrt(3)), textcoords='data',
arrowprops=dict(arrowstyle="<-",
connectionstyle="arc3",
color="blue"))
ax.annotate(r"$y_2$",
fontsize="xx-large",
xy=(1, -np.sqrt(3)), xycoords='data',
xytext=(-1, np.sqrt(3)), textcoords='data',
arrowprops=dict(arrowstyle="<-",
connectionstyle="arc3",
color="blue"))
ax.annotate("",
xy=(0.55, -0.55 * np.sqrt(3)), xycoords='data',
xytext=(0.55 + np.sqrt(3), 1-0.55 * np.sqrt(3)), textcoords='data',
arrowprops=dict(arrowstyle="<->",
connectionstyle="arc3"),
)
ax.annotate("",
xy=(-1.05*np.sqrt(3), -1.05), xycoords='data',
xytext=(-1.05*np.sqrt(3)-0.5, -1.05+0.5*np.sqrt(3)), textcoords='data',
arrowprops=dict(arrowstyle="<->",
connectionstyle="arc3"),
)
ax.text(-2.7, -1, r"$\lambda_2^{1/2}$", fontsize="xx-large")
ax.text(1.4, -0.9, r"$\lambda_1^{1/2}$", fontsize="xx-large")
plt.show()
```
### Mean and covariance
To compute the first moment (the mean), let $\mathbf{z=x-\mu}$:
$$
\begin{aligned}
\mathbb E[\mathbf x] & = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf\Sigma|^{1/2}} \int \exp \left\{-\frac{1}{2}(\mathbf x - \mathbf \mu)^\top\mathbf\Sigma^{-1}(\mathbf x - \mathbf \mu)\right\} \mathbf x d\mathbf x\\
& = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf\Sigma|^{1/2}} \int \exp \left\{-\frac{1}{2}\mathbf z^\top\mathbf\Sigma^{-1}\mathbf z\right\} \mathbf{(z+\mu)} d\mathbf z
\end{aligned}
$$
The part of $\mathbf{z + \mu}$ involving $\mathbf z$ vanishes because the integrand is then an odd function of $\mathbf z$, while the $\mathbf \mu$ part is $\mathbf\mu$ times the integral of a normalized multivariate Gaussian. Therefore:
$$
\mathbb E[\mathbf x] = \mu
$$
For the second moment, again let $\mathbf{z=x-\mu}$:
$$
\begin{aligned}
\mathbb E[\mathbf x\mathbf x^\top] & = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf\Sigma|^{1/2}} \int \exp \left\{-\frac{1}{2}(\mathbf x - \mathbf \mu)^\top\mathbf\Sigma^{-1}(\mathbf x - \mathbf \mu)\right\} \mathbf x \mathbf x^\top d\mathbf x\\
& = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf\Sigma|^{1/2}} \int \exp \left\{-\frac{1}{2}\mathbf z^\top\mathbf\Sigma^{-1}\mathbf z\right\} \mathbf{(z+\mu)} \mathbf{(z+\mu)}^\top d\mathbf z
\end{aligned}
$$
Expanding the product, the integrals of the $\mathbf{\mu z}^\top$ and $\mathbf{z\mu}^\top$ terms vanish by the same odd-function argument, while the $\mathbf{\mu\mu}^\top$ term is a constant times a normalized distribution and thus contributes $\mathbf{\mu\mu}^\top$. It remains to evaluate the $\mathbf{zz}^\top$ term.
From the earlier derivation, we know that:
$$
\mathbf{z=x-\mu=Uy}=\sum_{j=1}^D y_j \mathbf u_j
$$
$$
\mathbf z^\top\mathbf\Sigma^{-1}\mathbf z = \sum_{k=1}^D \frac{y_k^2}{\lambda_k}
$$
where $y_j = \mathbf u_j^\top \mathbf z$.
Therefore:
$$
\begin{aligned}
& \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf\Sigma|^{1/2}} \int \exp \left\{-\frac{1}{2}\mathbf z^\top\mathbf\Sigma^{-1}\mathbf z\right\} \mathbf{z} \mathbf{z}^\top d\mathbf z \\
= & \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf\Sigma|^{1/2}} \sum_{i=1}^D \sum_{j=1}^D \int \exp \left\{-\sum_{k=1}^D \frac{y_k^2}{2\lambda_k}\right\} y_iy_j \mathbf u_i \mathbf u_j^\top d\mathbf y
\end{aligned}
$$
When $i \neq j$, the integrand is an odd function of $y_i$, so the integral vanishes; only the $i = j$ terms survive. For $i=j$, the integrals over the $y_k$ with $k \neq i$ each equal 1, and the remaining integral is the second moment of a univariate Gaussian with mean $0$ and variance $\lambda_i$, namely $0^2 + \lambda_i = \lambda_i$. Therefore:
$$
\mathbb E[\mathbf{zz}^\top] = \sum_{i=1}^D \mathbf{u}_i \mathbf u_i^\top \lambda_i = \mathbf \Sigma
$$
So the second moment is
$$
\mathbb E[\mathbf{xx}^\top] = \mathbf{\Sigma + \mu\mu}^\top
$$
and the covariance matrix is
$$
{\rm cov}[{\bf x}] = \mathbb E[(\bf x-\mathbb E[\bf x])(\bf x-\mathbb E[\bf x])^\top] = \mathbb E[\mathbf{xx}^\top] - \mathbb E[\bf x]\mathbb E[\bf x]^\top = \bf \Sigma
$$
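A quick Monte Carlo check of these results (an added sketch, not part of the original text): the sample mean and sample covariance of draws from $\mathcal N(\mathbf\mu, \mathbf\Sigma)$ should approach $\mathbf\mu$ and $\mathbf\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# draw many samples and compare empirical moments with mu and Sigma
X = rng.multivariate_normal(mu, Sigma, size=200000)
print(X.mean(axis=0))           # approximately mu
print(np.cov(X, rowvar=False))  # approximately Sigma
```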
### Limitations of the Gaussian distribution
Although the Gaussian is a widely used model, it has some limitations.
A $D$-dimensional Gaussian has $O(D^2)$ independent parameters ($D(D+1)/2$ for the covariance plus $D$ for the mean), so the number of parameters grows quadratically with $D$, and the cost of inverting the covariance matrix grows with it.
One way to address this is to restrict the form of the covariance matrix, for example to a diagonal matrix $\bf \Sigma={\rm diag}(\sigma_i^2)$, or, going further, to $\bf \Sigma = \sigma^2 \bf I$, the isotropic (`isotropic`) Gaussian.
Doing so, however, reduces the flexibility of the distribution.
The contours of a general Gaussian, a diagonal Gaussian, and an isotropic Gaussian are shown below:
```
from scipy.stats import multivariate_normal

def bivariate_normal(x, y, sigmax=1.0, sigmay=1.0, mux=0.0, muy=0.0, sigmaxy=0.0):
    # drop-in replacement for matplotlib.mlab.bivariate_normal, which was
    # removed from matplotlib; sigmax/sigmay are standard deviations,
    # sigmaxy is the covariance
    cov = [[sigmax ** 2, sigmaxy], [sigmaxy, sigmay ** 2]]
    return multivariate_normal(mean=[mux, muy], cov=cov).pdf(np.dstack((x, y)))

x, y = np.mgrid[-1:1:.01, -1:1:.01]
fig, ax = plt.subplots(1, 3, figsize=(10, 3))
# tilted ellipse: general covariance
z = bivariate_normal(x, y, sigmax=2, sigmay=1, sigmaxy=1.5)
ax[0].contour(x, y, z, colors="r",
              levels=np.linspace(0.1*z.min() + 0.9*z.max(), z.max(), 5))
ax[0].set_xticks([])
ax[0].set_yticks([])
# axis-aligned ellipse: diagonal covariance
z = bivariate_normal(x, y, sigmax=2, sigmay=1, sigmaxy=0)
ax[1].contour(x, y, z, colors="r",
              levels=np.linspace(0.2*z.min() + 0.8*z.max(), z.max(), 5))
ax[1].set_xticks([])
ax[1].set_yticks([])
# circle: isotropic covariance
z = bivariate_normal(x, y, sigmax=2, sigmay=2, sigmaxy=0)
ax[2].contour(x, y, z, colors="r",
              levels=np.linspace(0.25*z.min() + 0.75*z.max(), z.max(), 5))
ax[2].set_xticks([])
ax[2].set_yticks([])
plt.show()
```
Another problem with the Gaussian is that it is unimodal (`unimodal`), so it cannot represent multimodal (`multimodal`) distributions well. We will later see that this can be addressed by introducing latent variables.
## 2.3.1 Conditional Gaussian distributions
An important property of the Gaussian is that if the joint distribution of two sets of variables is Gaussian, then the conditional distribution of one set given the other is also Gaussian, as is the marginal distribution of each set.
### Setup
Consider a $D$-dimensional Gaussian $p(\mathbf x)= \cal N(\bf x~|~\bf{\mu,\Sigma})$, and split the variables into two groups ${\bf x}_a, {\bf x}_b$. Without loss of generality, let ${\bf x}_a$ be the first $M$ components of $\bf x$ and ${\bf x}_b$ the remaining $D-M$ components, i.e.
$$
{\bf x} = \begin{pmatrix}
{\bf x}_a \\
{\bf x}_b
\end{pmatrix}
$$
The corresponding mean and covariance matrix can be written as:
$$
\begin{align}
{\bf \mu} & = \begin{pmatrix}
{\bf \mu}_a \\
{\bf \mu}_b
\end{pmatrix} \\
{\bf \Sigma} & = \begin{pmatrix}
{\bf \Sigma}_{aa} & {\bf \Sigma}_{ab} \\
{\bf \Sigma}_{ba} & {\bf \Sigma}_{bb}
\end{pmatrix}
\end{align}
$$
Since ${\bf\Sigma = \Sigma}^\top$, we know that ${\bf \Sigma}_{aa}$ and ${\bf \Sigma}_{bb}$ are symmetric and that ${\bf \Sigma}_{ab}={\bf \Sigma}_{ba}^\top$.
For convenience, we work with the inverse of the covariance, known as the precision matrix (`precision matrix`):
$$
{\bf\Lambda \equiv \Sigma}^{-1}
$$
It can be partitioned as:
$$
{\bf \Lambda} = \begin{pmatrix}
{\bf \Lambda}_{aa} & {\bf \Lambda}_{ab} \\
{\bf \Lambda}_{ba} & {\bf \Lambda}_{bb}
\end{pmatrix}
$$
By symmetry, ${\bf \Lambda}_{aa}$ and ${\bf \Lambda}_{bb}$ are symmetric, and ${\bf \Lambda}_{ab}={\bf \Lambda}_{ba}^\top$.
### Expressing the conditional in terms of the precision matrix
Rather than computing directly from the definition of the conditional distribution, we take a different route and consider the quadratic form:
$$
\begin{align}
-\frac{1}{2}(\mathbf x - \mathbf \mu)^\top\mathbf\Sigma^{-1}(\mathbf x - \mathbf \mu) =
&
-\frac{1}{2}(\mathbf x_a - \mathbf \mu_a)^\top\mathbf\Lambda_{aa}(\mathbf x_a - \mathbf \mu_a) -
\frac{1}{2}(\mathbf x_a - \mathbf \mu_a)^\top\mathbf\Lambda_{ab}(\mathbf x_b - \mathbf \mu_b)
\\
& - \frac{1}{2}(\mathbf x_b - \mathbf \mu_b)^\top\mathbf\Lambda_{ba}(\mathbf x_a - \mathbf \mu_a) -
\frac{1}{2}(\mathbf x_b - \mathbf \mu_b)^\top\mathbf\Lambda_{bb}(\mathbf x_b - \mathbf \mu_b)
\end{align}
$$
We see that, viewed as a function of $\mathbf x_a$, this is still a quadratic form, so the conditional distribution $p(\mathbf x_a|\mathbf x_b)$ must be a Gaussian.
Since a Gaussian is completely determined by its mean and covariance, our task reduces to computing the mean and covariance of the conditional distribution.
To do so, we use the technique of completing the square. For a Gaussian we have:
$$
-\frac{1}{2}(\mathbf x - \mathbf \mu)^\top\mathbf\Sigma^{-1}(\mathbf x - \mathbf \mu) = -\frac{1}{2}\mathbf x^\top \mathbf\Sigma^{-1} \mathbf x + \mathbf x^\top \mathbf\Sigma^{-1} \mathbf \mu + {\rm const}
$$
where the constant collects all terms independent of $\mathbf x$, and we have used the symmetry of $\bf \Sigma$.
This tells us that $\mathbf\Sigma^{-1}$ is the matrix appearing in the quadratic term in $\mathbf x$, while the coefficient of the linear term in $\mathbf x$ is $\mathbf\Sigma^{-1} \mathbf \mu$.
Returning to the quadratic form above, the term quadratic in $\mathbf x_a$ is:
$$
-\frac{1}{2}\mathbf x_a^\top \mathbf\Lambda_{aa} \mathbf x_a
$$
from which we immediately obtain the covariance of the conditional distribution $p(\mathbf x_a|\mathbf x_b)$:
$$
\mathbf \Sigma_{a|b} = \mathbf\Lambda_{aa}^{-1}
$$
Next, consider the terms linear in $\mathbf x_a$ (using $\mathbf\Lambda_{ab} = \mathbf\Lambda_{ba}^\top$):
$$
\mathbf x_a^\top\{\mathbf \Lambda_{aa}\mathbf\mu_a - \mathbf \Lambda_{ab}(\mathbf x_b - \mathbf \mu_b)\}
$$
from which we obtain:
$$
\mathbf \Sigma_{a|b}^{-1}\mathbf \mu_{a|b} = \mathbf \Lambda_{aa}\mathbf\mu_a - \mathbf \Lambda_{ab}(\mathbf x_b - \mathbf \mu_b)
$$
and therefore:
$$
\begin{align}
\mathbf \mu_{a|b} & = \mathbf \Sigma_{a|b}\{\mathbf \Lambda_{aa}\mathbf\mu_a - \mathbf \Lambda_{ab}(\mathbf x_b - \mathbf \mu_b)\} \\
& = \mathbf \mu_a - \mathbf\Lambda_{aa}^{-1}\mathbf \Lambda_{ab}(\mathbf x_b - \mathbf \mu_b)
\end{align}
$$
### Expressing the conditional in terms of the covariance matrix
We can also express the conditional mean and covariance using the blocks of the covariance matrix. First, we use the Schur complement (`Schur complement`) to write the inverse of a partitioned matrix:
$$
\begin{pmatrix}
\bf A & \bf B \\
\bf C & \bf D \\
\end{pmatrix}^{-1}
= \begin{pmatrix}
\bf M & {\bf -MBD}^{-1} \\
-{\bf D}^{-1}{\bf CM} & {\bf D}^{-1}+{\bf CMBD}^{-1} \\
\end{pmatrix}
$$
where $\mathbf M =(\mathbf{A - BD}^{-1}\mathbf C)^{-1}$ is the Schur complement of the partitioned matrix.
Combining this with the definition:
$$
\begin{pmatrix}
{\bf \Sigma}_{aa} & {\bf \Sigma}_{ab} \\
{\bf \Sigma}_{ba} & {\bf \Sigma}_{bb}
\end{pmatrix}^{-1} = \begin{pmatrix}
{\bf \Lambda}_{aa} & {\bf \Lambda}_{ab} \\
{\bf \Lambda}_{ba} & {\bf \Lambda}_{bb}
\end{pmatrix}
$$
we obtain:
$$
\begin{align}
{\bf \Lambda}_{aa} & = (\mathbf{\Sigma}_{aa} - \mathbf\Sigma_{ab}\mathbf\Sigma_{bb}^{-1}\mathbf \Sigma_{ba})^{-1} \\
{\bf \Lambda}_{ab} & = -(\mathbf{\Sigma}_{aa} - \mathbf\Sigma_{ab}\mathbf\Sigma_{bb}^{-1}\mathbf \Sigma_{ba})^{-1}\mathbf\Sigma_{ab}\mathbf\Sigma_{bb}^{-1}
\end{align}
$$
Substituting into the expressions for the mean and covariance gives:
$$
\begin{align}
\mathbf \mu_{a|b} & = \mathbf \mu_a - \mathbf\Lambda_{aa}^{-1}\mathbf \Lambda_{ab}(\mathbf x_b - \mathbf \mu_b) \\
& = \mu_a + \mathbf\Sigma_{ab}\mathbf\Sigma_{bb}^{-1} (\mathbf x_b-\mathbf \mu_b) \\
\mathbf \Sigma_{a|b} & = \mathbf\Lambda_{aa}^{-1} \\
& = \mathbf{\Sigma}_{aa} - \mathbf\Sigma_{ab}\mathbf\Sigma_{bb}^{-1}\mathbf \Sigma_{ba} \\
\end{align}
$$
As we can see, the precision-matrix form is somewhat simpler.
Note that the mean of the conditional distribution $p(\mathbf x_a|\mathbf x_b)$ is a linear function of $\mathbf x_b$, while its covariance is independent of $\mathbf x_b$; this is an example of a linear-Gaussian model.
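The two parameterizations can be checked numerically (an added sketch, not from the original text): for a 2-D joint Gaussian with scalar blocks, the precision-matrix form and the Schur-complement form of the conditional mean and variance agree:

```python
import numpy as np

# joint Gaussian over (x_a, x_b), with 1-D blocks for simplicity
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
Lam = np.linalg.inv(Sigma)  # precision matrix
xb = 2.0                    # observed value of x_b

# precision-matrix form of p(x_a | x_b)
cov_p = 1.0 / Lam[0, 0]
mean_p = mu[0] - cov_p * Lam[0, 1] * (xb - mu[1])

# covariance-matrix (Schur complement) form
cov_c = Sigma[0, 0] - Sigma[0, 1] * Sigma[1, 0] / Sigma[1, 1]
mean_c = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (xb - mu[1])

print(mean_p, cov_p)  # 0.25 0.875
print(mean_c, cov_c)  # 0.25 0.875
```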
## 2.3.2 Marginal Gaussian distributions
Now consider the marginal distribution
$$
p(\mathbf x_a)=\int p(\mathbf x_a, \mathbf x_b) d\mathbf x_b
$$
which is also a Gaussian.
Since we integrate over $\mathbf x_b$, we collect the terms of the quadratic form that involve $\mathbf x_b$:
$$
-\frac{1}{2}\mathbf x_b^{\text T}\mathbf\Lambda_{bb}\mathbf x_b + \mathbf x_b^\top\mathbf m =
-\frac{1}{2} (\mathbf x_b-\mathbf\Lambda^{-1}_{bb}\mathbf m)^\top \mathbf\Lambda_{bb} (\mathbf x_b-\mathbf\Lambda^{-1}_{bb}\mathbf m) +
\frac{1}{2} \mathbf m^\top\mathbf \Lambda_{bb}^{-1}\mathbf m
$$
where
$$
\mathbf m = \mathbf\Lambda_{bb}\mathbf\mu_{b}-\mathbf\Lambda_{ba}(\mathbf x_a - \mathbf\mu_a)
$$
is independent of $\mathbf x_b$.
After completing the square, the terms involving $\mathbf x_b$ split into two parts: the quadratic form of a Gaussian, plus a term independent of $\mathbf x_b$.
For the Gaussian quadratic form, the integral
$$
\int \exp\left\{-\frac{1}{2} (\mathbf x_b-\mathbf\Lambda^{-1}_{bb}\mathbf m)^\top \mathbf\Lambda_{bb} (\mathbf x_b-\mathbf\Lambda^{-1}_{bb}\mathbf m)\right\} d\mathbf x_b
$$
is a constant, independent of both $\mathbf x_a$ and $\mathbf x_b$.
Considering the second part together with the remaining terms involving $\mathbf x_a$, we have:
$$
\begin{align}
& \frac{1}{2} \mathbf m^\top\mathbf \Lambda_{bb}^{-1}\mathbf m
-\frac{1}{2}\mathbf x_a^\top \mathbf\Lambda_{aa} \mathbf x_a + \mathbf x_a^\top\{\mathbf \Lambda_{aa}\mathbf\mu_a + \mathbf \Lambda_{ab}\mathbf \mu_b\} + \text{const} \\
= & -\frac{1}{2} \mathbf x_a^\top (\mathbf\Lambda_{aa} - \mathbf\Lambda_{ab}\mathbf\Lambda_{bb}^{-1}\mathbf\Lambda_{ba})\mathbf x_a + \mathbf x_a^\top (\mathbf\Lambda_{aa} - \mathbf\Lambda_{ab}\mathbf\Lambda_{bb}^{-1}\mathbf\Lambda_{ba}) \mathbf \mu_a + \text{const}
\end{align}
$$
From our earlier discussion, we can immediately read off the covariance matrix:
$$
\mathbf\Sigma_a = (\mathbf\Lambda_{aa} - \mathbf\Lambda_{ab}\mathbf\Lambda_{bb}^{-1}\mathbf\Lambda_{ba})^{-1}
$$
and the mean:
$$
\mathbf\Sigma_a (\mathbf\Lambda_{aa} - \mathbf\Lambda_{ab}\mathbf\Lambda_{bb}^{-1}\mathbf\Lambda_{ba}) \mathbf \mu_a = \mathbf\mu_a
$$
We again use the earlier `Schur` complement result:
$$
\begin{pmatrix}
\bf A & \bf B \\
\bf C & \bf D \\
\end{pmatrix}^{-1}
= \begin{pmatrix}
\bf M & {\bf -MBD}^{-1} \\
-{\bf D}^{-1}{\bf CM} & {\bf D}^{-1}+{\bf CMBD}^{-1} \\
\end{pmatrix}
$$
where $\mathbf M =(\mathbf{A - BD}^{-1}\mathbf C)^{-1}$ is the Schur complement of the partitioned matrix.
$$
\begin{pmatrix}
{\bf \Lambda}_{aa} & {\bf \Lambda}_{ab} \\
{\bf \Lambda}_{ba} & {\bf \Lambda}_{bb}
\end{pmatrix}^{-1} =
\begin{pmatrix}
{\bf \Sigma}_{aa} & {\bf \Sigma}_{ab} \\
{\bf \Sigma}_{ba} & {\bf \Sigma}_{bb}
\end{pmatrix}
$$
Expressing this result in terms of the covariance matrix, we obtain:
$$
\mathbf \Sigma_{aa} = (\mathbf\Lambda_{aa} - \mathbf\Lambda_{ab}\mathbf\Lambda_{bb}^{-1}\mathbf\Lambda_{ba})^{-1}
$$
Therefore:
$$
\begin{align}
\mathbb E[\mathbf x_a]& = \mathbf\mu_a\\
{\rm cov}[\mathbf x_a]&=\mathbf \Sigma_{aa}
\end{align}
$$
As we can see, the mean and covariance of the marginal distribution are simply the corresponding blocks of the original mean and covariance.
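This can again be verified numerically (an added sketch): starting from a generic positive-definite covariance, the Schur complement of the precision blocks recovers the $\mathbf\Sigma_{aa}$ block:

```python
import numpy as np

rng = np.random.default_rng(2)
D, M = 5, 2  # total dimension and dimension of x_a
A = rng.standard_normal((D, D))
Sigma = A @ A.T + D * np.eye(D)  # a generic positive-definite covariance
Lam = np.linalg.inv(Sigma)

# block partition of the precision matrix
Laa, Lab = Lam[:M, :M], Lam[:M, M:]
Lba, Lbb = Lam[M:, :M], Lam[M:, M:]

# marginal covariance from the precision blocks (Schur complement)
Sigma_a = np.linalg.inv(Laa - Lab @ np.linalg.inv(Lbb) @ Lba)
print(np.allclose(Sigma_a, Sigma[:M, :M]))  # True
```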
### Summary
Given $p(\mathbf x)= \cal N(\bf x~|~\bf{\mu,\Sigma})$ with $\mathbf\Lambda\equiv\mathbf\Sigma^{-1}$ and the partition
$$
\begin{align}
{\bf x} = \begin{pmatrix}
{\bf x}_a \\
{\bf x}_b
\end{pmatrix},~& {\bf \mu} = \begin{pmatrix}
{\bf \mu}_a \\
{\bf \mu}_b
\end{pmatrix} \\
{\bf \Sigma} = \begin{pmatrix}
{\bf \Sigma}_{aa} & {\bf \Sigma}_{ab} \\
{\bf \Sigma}_{ba} & {\bf \Sigma}_{bb}
\end{pmatrix},~& {\bf \Lambda} = \begin{pmatrix}
{\bf \Lambda}_{aa} & {\bf \Lambda}_{ab} \\
{\bf \Lambda}_{ba} & {\bf \Lambda}_{bb}
\end{pmatrix}
\end{align}
$$
the conditional distribution is:
$$
\begin{align}
p(\mathbf x_a|\mathbf x_b) & = \mathcal N (\mathbf x_a|\mathbf \mu_{a|b}, \mathbf \Lambda_{aa}^{-1}) \\
\mathbf \mu_{a|b} & = \mathbf \mu_a - \mathbf \Lambda_{aa}^{-1}\mathbf \Lambda_{ab} (\mathbf x_b-\mathbf \mu_b)
\end{align}
$$
and the marginal distribution is:
$$
p(\mathbf x_a)=\mathcal N (\mathbf x_a|\mathbf \mu_{a}, \mathbf \Sigma_{aa})
$$
```
from scipy.stats import norm, multivariate_normal

def bivariate_normal(x, y, sigmax=1.0, sigmay=1.0, mux=0.0, muy=0.0, sigmaxy=0.0):
    # drop-in replacement for matplotlib.mlab.bivariate_normal, which was
    # removed from matplotlib
    cov = [[sigmax ** 2, sigmaxy], [sigmaxy, sigmay ** 2]]
    return multivariate_normal(mean=[mux, muy], cov=cov).pdf(np.dstack((x, y)))

x, y = np.mgrid[0:1:.01, 0:1:.01]
fig, ax = plt.subplots(1, 2, figsize=(10, 4.5), dpi=200)
mu_a = 0.5
mu_b = 0.5
sigma_aa = 0.15
sigma_bb = 0.15
sigma_ab = 0.0215
# joint distribution
z = bivariate_normal(x, y,
                     mux=mu_a,
                     muy=mu_b,
                     sigmax=sigma_aa,
                     sigmay=sigma_bb,
                     sigmaxy=sigma_ab)
ax[0].contour(x, y, z, colors="g",
              levels=[0.99 * z.min() + 0.01 * z.max(),
                      0.9 * z.min() + 0.1 * z.max(),
                      0.5 * z.min() + 0.5 * z.max()])
ax[0].plot([0, 1], [0.7, 0.7], 'r')
ax[0].set_xlim(0, 1)
ax[0].set_ylim(0, 1)
ax[0].set_xticks([0, 0.5, 1])
ax[0].set_yticks([0, 0.5, 1])
ax[0].set_xlabel(r"$x_a$", fontsize="xx-large")
ax[0].set_ylabel(r"$x_b$", fontsize="xx-large")
ax[0].text(0.2, 0.8, r"$x_b=0.7$", fontsize="xx-large")
ax[0].text(0.6, 0.3, r"$p(x_a, x_b)$", fontsize="xx-large")
# marginal and conditional distributions
xx = np.linspace(0, 1, 100)
pa = norm.pdf(xx, loc=mu_a, scale=sigma_aa)
ax[1].plot(xx, pa, 'b')
pa_b = norm.pdf(xx,
                loc=mu_a + sigma_ab / sigma_bb ** 2 * (0.7 - mu_b),
                scale=np.sqrt(sigma_aa ** 2 - sigma_ab ** 2 / sigma_bb ** 2))
ax[1].set_xlabel(r"$x_a$", fontsize="xx-large")
ax[1].set_xlim(0, 1)
ax[1].set_ylim(0, 10)
ax[1].set_xticks([0, 0.5, 1])
ax[1].set_yticks([0, 5, 10])
ax[1].text(0.2, 7, r"$p(x_a|x_b=0.7)$", fontsize="xx-large")
ax[1].text(0.2, 3, r"$p(x_a)$", fontsize="xx-large")
ax[1].plot(xx, pa_b, 'r')
plt.show()
```
## 2.3.3 Bayes' theorem for Gaussian variables
We saw above that the mean of $p(\mathbf x_a|\mathbf x_b)$ is a linear function of $\mathbf x_b$.
Now suppose that $p(\mathbf x)$ is a known Gaussian and that $p(\bf y|x)$ is a conditional Gaussian whose mean is a linear function of $\bf x$ and whose covariance is independent of $\bf x$. This is the so-called linear-Gaussian model (`linear Gaussian model`):
$$
\begin{align}
p(\mathbf x)&=\mathcal N(\mathbf x~|~\mathbf \mu, \mathbf \Lambda^{-1}) \\
p(\mathbf y|\mathbf x)&=\mathcal N(\mathbf y~|~\mathbf{Ax+b}, \mathbf L^{-1}) \\
\end{align}
$$
where $\mathbf \mu, \mathbf A, \mathbf b$ parameterize the means and $\mathbf \Lambda, \mathbf L$ are precision matrices. If $\mathbf{x}$ and $\mathbf{y}$ have dimensions $M$ and $D$ respectively, then $\mathbf A$ is a $D\times M$ matrix.
Our goal is to find the distribution $p(\mathbf y)$.
Consider their joint distribution. Let $\mathbf z=\begin{pmatrix}\mathbf x\\ \mathbf y\end{pmatrix}$; then:
$$
\begin{align}
\ln p(\mathbf z)&=\ln p(\mathbf x) + \ln p(\mathbf{y|x}) \\
&=-\frac{1}{2}(\mathbf{x-\mu})^\top\mathbf\Lambda(\mathbf{x-\mu})
-\frac{1}{2}(\mathbf{y-Ax-b})^\top\mathbf L(\mathbf{y-Ax-b}) + \text{const}
\end{align}
$$
To obtain the precision matrix of $\mathbf z$, we collect the quadratic terms:
$$
\begin{align}
& -\frac{1}{2}\mathbf x^\top (\mathbf\Lambda + \mathbf A^{\text T}\mathbf L\mathbf A) \mathbf x -
\frac{1}{2}\mathbf y^\top \mathbf L\mathbf y +
\frac{1}{2}\mathbf y^\top \mathbf L\mathbf A\mathbf x+
\frac{1}{2}\mathbf x^\top \mathbf A^\top\mathbf L\mathbf y \\
= & -\frac{1}{2}
\begin{pmatrix}
\mathbf x\\
\mathbf y
\end{pmatrix}^\top
\begin{pmatrix}
\mathbf\Lambda + \mathbf A^\top\mathbf L\mathbf A & -\mathbf A^\top\mathbf L \\
-\mathbf L\mathbf A & \mathbf L
\end{pmatrix}
\begin{pmatrix}
\mathbf x\\
\mathbf y
\end{pmatrix} \\
= & -\frac{1}{2} \mathbf {z^\top Rz}
\end{align}
$$
so the precision matrix of $\mathbf z$ is:
$$
\mathbf{R} =
\begin{pmatrix}
\mathbf\Lambda + \mathbf A^\top \mathbf L\mathbf A & -\mathbf A^\top \mathbf L \\
-\mathbf L\mathbf A & \mathbf L
\end{pmatrix}
$$
and the covariance matrix (using the `Schur` complement result) is:
$$
\text{cov}[\mathbf z]=\mathbf{R}^{-1} = \begin{pmatrix}
\mathbf\Lambda + \mathbf A^\top\mathbf L\mathbf A & -\mathbf A^\top\mathbf L \\
-\mathbf L\mathbf A & \mathbf L
\end{pmatrix}^{-1}
= \begin{pmatrix}
\mathbf\Lambda^{-1} & \mathbf\Lambda^{-1}\mathbf A^\top\\
\mathbf A\mathbf\Lambda^{-1} & \mathbf L^{-1}+\mathbf A\mathbf \Lambda^{-1}\mathbf A^\top
\end{pmatrix}
$$
Considering the linear terms:
$$
\mathbf x^\text{T}\mathbf{\Lambda\mu}-\mathbf x^\top\mathbf A^\top\mathbf{Lb} + \mathbf y^\top\mathbf{Lb} =
\begin{pmatrix}
\mathbf x\\
\mathbf y
\end{pmatrix}^\top
\begin{pmatrix}
\mathbf{\Lambda\mu-A}^\top\mathbf{Lb}\\
\mathbf{Lb}
\end{pmatrix}
$$
we have
$$
\mathbb E[\mathbf z]=
\mathbf R^{-1}
\begin{pmatrix}
\mathbf{\Lambda\mu-A^\top Lb}\\
\mathbf{Lb}
\end{pmatrix}
= \begin{pmatrix}
\mathbf{\mu}\\
\mathbf{A\mu+b}
\end{pmatrix}
$$
From the results for marginal Gaussians, we then immediately obtain:
$$
\begin{align}
\mathbb E[\mathbf y] &= \mathbf{A\mu+b} \\
{\rm cov} [\mathbf y] &= \mathbf L^{-1}+\mathbf A\mathbf \Lambda^{-1}\mathbf A^\top
\end{align}
$$
Similarly, from the results for conditional Gaussians, we obtain:
$$
\begin{align}
\mathbb E[\mathbf{x|y}] &= (\mathbf\Lambda + \mathbf A^\top \mathbf L\mathbf A)^{-1}\left\{ \mathbf A^\top \mathbf{L(y-b)} + \mathbf{\Lambda\mu} \right\} \\
{\rm cov} [\mathbf{x|y}] &= (\mathbf\Lambda + \mathbf A^\top\mathbf L\mathbf A)^{-1}
\end{align}
$$
### Summary
Given:
$$
\begin{align}
p(\mathbf x)&=\mathcal N(\mathbf x~|~\mathbf \mu, \mathbf \Lambda^{-1}) \\
p(\mathbf y|\mathbf x)&=\mathcal N(\mathbf y~|~\mathbf{Ax+b}, \mathbf L^{-1}) \\
\end{align}
$$
we have
$$
\begin{align}
p(\mathbf y)&=\mathcal N(\mathbf y~|~\mathbf{A\mu+b}, \mathbf L^{-1}+\mathbf A\mathbf \Lambda^{-1}\mathbf A^\top) \\
p(\mathbf x|\mathbf y)&=\mathcal N(\mathbf x~|~\mathbf \Sigma\left\{ \mathbf A^\top \mathbf{L(y-b)} + \mathbf{\Lambda\mu} \right\}, \mathbf \Sigma) \\
\end{align}
$$
where $\mathbf \Sigma = (\mathbf\Lambda + \mathbf A^\top\mathbf L\mathbf A)^{-1}$.
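These closed-form results can be checked by simulation (an added sketch with arbitrarily chosen parameters): sampling $\mathbf x$ and then $\mathbf y|\mathbf x$ from the linear-Gaussian model should reproduce the stated marginal mean and covariance of $\mathbf y$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400000
mu = np.array([1.0, -1.0])
Lam_inv = np.array([[1.0, 0.3],
                    [0.3, 0.5]])   # covariance of x, i.e. Lambda^-1
A = np.array([[1.0, 0.0],
              [0.5, -1.0],
              [0.2, 0.3]])
b = np.array([0.5, 0.0, -0.5])
L_inv = 0.2 * np.eye(3)            # covariance of y given x, i.e. L^-1

x = rng.multivariate_normal(mu, Lam_inv, size=n)
y = x @ A.T + b + rng.multivariate_normal(np.zeros(3), L_inv, size=n)

# compare Monte Carlo estimates with the closed-form marginal of y
print(y.mean(axis=0))            # approx A mu + b
print(np.cov(y, rowvar=False))   # approx L^-1 + A Lam^-1 A^T
```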
## 2.3.4 Maximum likelihood for the Gaussian
For a set of independent, identically distributed Gaussian observations $\mathbf X=(\mathbf x_1, \dots,\mathbf x_N)^\top$,
the log-likelihood function of the Gaussian is:
$$
\ln p({\bf X|\mu,\Sigma}) = -\frac{ND}{2}\ln (2\pi)-\frac{N}{2}\ln|\mathbf\Sigma|-\frac{1}{2}\sum_{n=1}^N ({\bf x_n-\mu})^\top\mathbf\Sigma^{-1}({\bf x_n-\mu})
$$
Its sufficient statistics are
$$
\sum_{n=1}^N \mathbf x_n, \sum_{n=1}^N \mathbf x_n\mathbf x_n^\top
$$
Taking the partial derivative with respect to $\mathbf\mu$:
$$
\frac{\partial}{\partial \mathbf \mu} \ln p({\bf X|\mu,\Sigma}) = \sum_{n=1}^N \mathbf\Sigma^{-1} (\bf x_n-\mu)
$$
and setting it to zero (strictly speaking, one should verify that this is a maximum) gives:
$$
\mathbf \mu_{ML} = \frac{1}{N} \sum_{n=1}^N \mathbf x_n
$$
So the maximum-likelihood estimate of the mean is the sample mean.
The maximum-likelihood estimate of the covariance (obtained by taking partial derivatives; see Appendix C for derivatives with respect to matrices) is
$$
{\bf\Sigma}_{ML} = \frac{1}{N} \sum_{n=1}^N (\mathbf x_n-\mathbf \mu_{ML})(\mathbf x_n-\mathbf \mu_{ML})^\top
$$
On the other hand, taking expectations under the true distribution, the maximum-likelihood estimators satisfy
$$
\begin{align}
\mathbb E [\mu_{ML}] & = \mathbf \mu \\
\mathbb E [\Sigma_{ML}] & = \frac{N-1}{N} \bf \Sigma
\end{align}
$$
So the maximum-likelihood estimate of the covariance is not unbiased; to correct for this, one can use
$$
\mathbf{\tilde\Sigma}=\frac{1}{N-1} \sum_{n=1}^N (\mathbf x_n-\mathbf \mu_{ML})(\mathbf x_n-\mathbf \mu_{ML})^\top
$$
instead.
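The two estimators correspond to NumPy's `ddof` parameter (an added sketch):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
N = len(x)
mu_ml = x.mean()

# maximum-likelihood (biased) estimate: divide by N
var_ml = ((x - mu_ml) ** 2).sum() / N
# bias-corrected estimate: divide by N - 1
var_unbiased = ((x - mu_ml) ** 2).sum() / (N - 1)

print(var_ml, np.var(x))                # np.var uses ddof=0 (biased) by default
print(var_unbiased, np.var(x, ddof=1))  # ddof=1 gives the unbiased estimate
```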
## 2.3.5 Sequential estimation
Sequential methods process one data point at a time and then discard it.
They are important for problems where the dataset is too large to be processed all at once.
Consider how the maximum-likelihood estimate of the mean changes when going from $N-1$ observations to $N$:
$$
\begin{align}
\mu_{ML}^{(N)} & = \frac{1}{N} \sum_{n=1}^N \mathbf x_n \\
& = \frac{1}{N} \mathbf x_N + \frac{1}{N} \sum_{n=1}^{N-1} \mathbf x_n\\
& = \frac{1}{N} \mathbf x_N + \frac{N-1}{N} \mu_{ML}^{(N-1)} \\
& = \mu_{ML}^{(N-1)} + \frac{1}{N}(\mathbf x_N - \mu_{ML}^{(N-1)})
\end{align}
$$
This amounts to adjusting the previous estimate by a correction proportional to the error term $\mathbf x_N - \mu_{ML}^{(N-1)}$; as $N$ grows, the weight given to this correction shrinks.
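The update above can be sketched in a few lines (added illustration): the sequential estimate reproduces the batch sample mean exactly, up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.standard_normal(1000)

# sequential update: mu_N = mu_{N-1} + (x_N - mu_{N-1}) / N
mu = 0.0
for n, x in enumerate(data, start=1):
    mu += (x - mu) / n

print(mu, data.mean())
```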
### The Robbins-Monro algorithm
The sequential estimate above reproduces the exact maximum-likelihood solution, but in many cases an exact sequential update is not available. For such cases we introduce the `Robbins-Monro` algorithm.
Consider a joint distribution $p(z,\theta)$. The conditional mean of $z$ given $\theta$ defines a function of $\theta$:
$$
f(\theta)\equiv \mathbb E[z|\theta]=\int zp(z|\theta)dz
$$
Our goal is to find $\theta^\star$ such that $f(\theta^\star)=0$.
If we had a large set of observations of $z$ and $\theta$, we could model $f$ directly and solve the problem that way.
Suppose instead that we observe one value of $z$ at a time and update our estimate of $\theta^\star$ after each observation. We first assume that the conditional variance of $z$ is finite:
$$
\mathbb E[(z-f)^2|\theta]<\infty
$$
and, without loss of generality, that $f(\theta)$ is monotonically increasing near $\theta^\star$. The `Robbins-Monro` procedure then updates:
$$
\theta^{(N)} = \theta^{(N-1)} + a_{N-1} z(\theta^{(N-1)})
$$
where $z(\theta^{(N)})$ is the value of $z$ observed when $\theta$ takes the value $\theta^{(N)}$, and the coefficients $a_N$ satisfy:
$$
\begin{align}
\lim_{N\to\infty} a_N & = 0 \\
\sum_{N=1}^\infty a_N & = \infty \\
\sum_{N=1}^\infty a_N^2 & < \infty
\end{align}
$$
The first condition ensures that the successive corrections decrease; the second ensures that the algorithm does not converge short of the root; and the third ensures that the accumulated noise does not destroy convergence.
For the maximum-likelihood problem, we require
$$
\left.\frac{\partial}{\partial \theta} \left\{\frac{1}{N} \sum_{n=1}^N \ln p({x_n|\theta}) \right\}\right|_{\theta_{ML}} = 0
$$
Exchanging the sum and the derivative, and taking the limit, gives:
$$
\lim_{N\to\infty}\frac{1}{N} \sum_{n=1}^N \frac{\partial}{\partial \theta}\ln p({x_n|\theta}) = \mathbb E_x \left[ \frac{\partial}{\partial \theta}\ln p({x|\theta})\right]
$$
We see that maximizing the likelihood corresponds to finding the root of a function, so the `Robbins-Monro` procedure gives:
$$
\theta^{(N)} = \theta^{(N-1)} + a_{N-1} \frac{\partial}{\partial \theta^{(N-1)}}\ln p({x_N|\theta^{(N-1)}})
$$
If $\theta$ is the mean of a Gaussian, we have
$$
z = \frac{\partial}{\partial \mu_{ML}}\ln p({x|\mu_{ML}, \sigma^2}) = \frac{1}{\sigma^2} (x-\mu_{ML})
$$
so $z$ follows a Gaussian distribution with mean $\mu-\mu_{ML}$, and the corresponding choice of coefficients is $a_N = \frac{\sigma^2}{N}$.
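For the Gaussian mean, this choice of $a_N$ makes the Robbins-Monro update coincide with the sequential mean update (an added sketch with arbitrary parameter values):

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2 = 4.0
data = rng.normal(3.0, np.sqrt(sigma2), size=5000)

# Robbins-Monro: theta_N = theta_{N-1} + a_{N-1} * z(theta_{N-1}),
# with z = (x - theta) / sigma^2 and a_N = sigma^2 / N
theta = 0.0
for n, x in enumerate(data, start=1):
    z = (x - theta) / sigma2
    theta += (sigma2 / n) * z

print(theta, data.mean())  # theta reproduces the running sample mean
```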
## 2.3.6 Bayesian inference for the Gaussian
Maximum likelihood gives point estimates of the mean and covariance; we now consider Bayesian estimation by introducing prior distributions.
### Unknown mean, known variance
Consider a one-dimensional Gaussian variable $x$ with known variance $\sigma^2$ and $N$ data points $\mathbf X=\{x_1,\dots,x_N\}$. The likelihood function is
$$
p(\mathbf X|\mu)=\prod_{n=1}^N p(x_n|\mu) =\frac{1}{(2\pi\sigma^2)^\frac{N}{2}}
\exp\left\{-\frac{1}{2\sigma^2} \sum_{n=1}^N (x_n-\mu)^2\right\}
$$
Note that the likelihood function is not a probability distribution over $\mu$.
For conjugacy, we introduce a Gaussian prior $p(\mu)$:
$$
p(\mu)=\mathcal N(\mu|\mu_0, \sigma_0^2)
$$
The posterior distribution is then:
$$
p(\mu|\mathbf X) \propto p(\mathbf X|\mu)p(\mu)
$$
Completing the square, as above, gives:
$$
p(\mu|\mathbf X) = \mathcal N(\mu|\mu_N,\sigma_N^2)
$$
where
$$
\begin{align}
\mu_N & = \frac{\sigma^2}{N\sigma_0^2 + \sigma^2} \mu_0 + \frac{N\sigma_0^2}{N\sigma_0^2 + \sigma^2} \mu_{ML}\\
\frac{1}{\sigma_N^2} & = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}
\end{align}
$$
with
$$
\mu_{ML} = \frac{1}{N} \sum_{n=1}^N x_n
$$
When $N = 0$, the posterior is just the prior; as $N\to\infty$, the posterior parameters approach the maximum-likelihood result.
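A small numeric sketch of this conjugate update (added; parameter values are arbitrary): the posterior variance is always smaller than the prior variance, and the posterior mean interpolates between $\mu_0$ and $\mu_{ML}$:

```python
import numpy as np

rng = np.random.default_rng(6)
sigma2 = 1.0              # known variance of the data
mu0, sigma0_2 = 0.0, 1.0  # prior N(mu0, sigma0^2)
x = rng.normal(2.0, np.sqrt(sigma2), size=50)
N, mu_ml = len(x), x.mean()

# conjugate posterior parameters
mu_N = (sigma2 * mu0 + N * sigma0_2 * mu_ml) / (N * sigma0_2 + sigma2)
sigma_N2 = 1.0 / (1.0 / sigma0_2 + N / sigma2)

print(mu_N, sigma_N2)
```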
### Known mean, unknown variance
We work with the precision $\lambda\equiv 1~/~\sigma^2$. The likelihood function is:
$$
p(\mathbf X|\lambda)=\prod_{n=1}^N \mathcal N(x_n|\mu,\lambda^{-1}) \propto \lambda^{N/2} \exp\left\{-\frac{\lambda}{2} \sum_{n=1}^N(x_n-\mu)^2\right\}
$$
For conjugacy in $\lambda$, we introduce a `Gamma` prior:
$$
\mathrm{Gam}(\lambda~|~a, b) = \frac{1}{\Gamma(a)} b^a {\lambda^{a-1} \exp(-b\lambda)}, a > 0
$$
The `Gam` distribution for different parameter values is shown below:
```
from scipy.stats import gamma
x = np.linspace(0.01, 2, 100)
fig, axes = plt.subplots(1, 3, figsize=(10, 2))
A = [0.1, 1, 4]
B = [0.1, 1, 6]
for a, b, ax in zip(A, B, axes):
y = gamma.pdf(x, a = a, scale=1.0/b)
ax.plot(x, y, color="r")
ax.set_ylim(0, 2)
ax.set_xticks([0, 1, 2])
ax.set_yticks([0, 1, 2])
ax.set_xlabel(r"$\lambda$", fontsize="xx-large")
ax.text(1, 1, "$a={}$\n$b={}$".format(a, b), fontsize="xx-large")
plt.show()
```
Its mean and variance are:
$$
\begin{align}
\mathbb E[\lambda] & = \frac{a}{b}\\
{\rm var}[\lambda] & = \frac{a}{b^2}
\end{align}
$$
Given the prior ${\rm Gam}(\lambda~|~a_0, b_0)$, the posterior is:
$$
p(\lambda|\mathbf X)\propto\lambda^{a_0-1}\lambda^{N/2} \exp\left\{-b_0\lambda-\frac{\lambda}{2} \sum_{n=1}^N (x_n-\mu)^2\right\}
$$
which is again a `Gam` distribution, ${\rm Gam}(\lambda~|~a_N, b_N)$, with:
$$
\begin{align}
a_N & = a_0 + \frac N 2 \\
b_N & = b_0 + \frac{1}{2} \sum_{n=1}^N (x_n-\mu)^2 = b_0 + \frac{N}{2} \sigma^2_{ML}
\end{align}
$$
We see that $N$ data points contribute $\frac N 2$ to the parameter $a$, so the prior can be interpreted as $2a_0$ "effective" prior data points; likewise, $N$ data points contribute $\frac{N}{2} \sigma^2_{ML}$ to $b$, so those $2a_0$ effective points can be viewed as each contributing a variance of $2b_0/(2a_0)=b_0/a_0$ on average.
If we worked with the variance directly rather than the precision, the conjugate prior would be the inverse `Gamma` distribution; the precision is simply more convenient.
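The Gamma posterior update can be sketched numerically (added; parameter values are arbitrary): with enough data, the posterior mean $a_N/b_N$ approaches the true precision:

```python
import numpy as np

rng = np.random.default_rng(7)
mu = 0.0        # known mean
true_lam = 4.0  # true precision of the data
x = rng.normal(mu, 1.0 / np.sqrt(true_lam), size=2000)

a0, b0 = 1.0, 1.0  # Gam(a0, b0) prior on the precision
aN = a0 + len(x) / 2
bN = b0 + 0.5 * ((x - mu) ** 2).sum()

print(aN / bN)  # posterior mean of lambda, close to true_lam
```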
### Unknown mean and variance
In this case the likelihood function is:
$$
p(\mathbf X|\mu, \lambda)=\left(\frac{\lambda}{2\pi}\right)^\frac{N}{2}
\exp\left\{-\frac{\lambda}{2} \sum_{n=1}^N (x_n-\mu)^2\right\}
\propto \left[\lambda^{1/2} \exp\left(-\frac{\lambda\mu^2}{2}\right)\right]^N
\exp\left\{ \lambda\mu \sum_{n=1}^N x_n - \frac{\lambda}{2} \sum_{n=1}^N x_n^2 \right\}
$$
which has the form of an exponential-family (`exponential family`) function.
By conjugacy, consider a prior of the form:
$$
p(\mu,\lambda)\propto \left[\lambda^{1/2} \exp\left(-\frac{\lambda\mu^2}{2}\right)\right]^\beta
\exp\left\{ c\lambda\mu - d\lambda \right\}
= \exp\left\{-\frac{\beta\lambda}{2}(\mu-c/\beta)^2\right\} \lambda^{\beta/2}
\exp\left\{-\left(d-\frac{c^2}{2\beta}\right)\lambda\right\}
$$
Since $p(\mu,\lambda) = p(\mu|\lambda)p(\lambda)$, we can identify by inspection:
$$
p(\mu,\lambda) = \mathcal N(\mu|\mu_0, (\beta\lambda)^{-1}) \mathrm{Gam}(\lambda|a,b)
$$
where $\mu_0 = c/\beta$, $a=(1+\beta)/2$, $b=d-c^2/(2\beta)$. This distribution is called the Gaussian-gamma (`Gaussian-gamma`) or normal-gamma (`normal-gamma`) distribution; its contours are shown below:
```
from scipy.stats import gamma
mu, lam = np.mgrid[-2:2:.01, 0.01:2:.01]
a, b = 5, 6
mu_0 = 0
beta = 2
z = (beta*lam/2/np.pi) ** 0.5 * np.exp(- beta*lam/2 * (mu-mu_0)**2) \
    * gamma.pdf(lam, a=a, scale=1.0/b)
fig, ax = plt.subplots()
ax.contour(mu, lam, z, colors='r')
ax.set_xticks([-2, 0, 2])
ax.set_yticks([0, 1, 2])
ax.set_xlabel(r"$\mu$", fontsize="xx-large")
ax.set_ylabel(r"$\lambda$", fontsize="xx-large")
plt.show()
```
For a $D$-dimensional Gaussian $\mathcal N(\mathbf x|\mathbf\mu, \mathbf\Lambda^{-1})$, if the mean is unknown and the precision matrix known, the conjugate prior is:
$$\mathcal N(\mathbf \mu|\mathbf \mu_0, \mathbf\Lambda_0^{-1})$$
If the mean is known and the precision matrix unknown, the conjugate prior is the `Wishart` distribution:
$$
\mathcal W(\mathbf\Lambda|\mathbf W, \nu)=B|\mathbf\Lambda|^{(\nu-D-1)/2}
\exp\left(-\frac{1}{2}\mathrm{Tr}(\mathbf W^{-1}\mathbf\Lambda)\right)
$$
where $\nu$ is called the number of degrees of freedom (`degrees of freedom`) of the distribution, $\mathbf W$ is a $D\times D$ matrix, $\rm Tr$ denotes the matrix trace, and the normalization constant $B$ is
$$
B(\mathbf W, \nu) = |\mathbf W|^{-\nu/2}
\left(2^{\nu D/2} \pi^{D(D-1)/4} \prod_{i=1}^D
\Gamma\left(\frac{\nu+1-i}{2}\right)\right)^{-1}
$$
If both the mean and the precision matrix are unknown, the conjugate prior is the `normal-Wishart` (or `Gaussian-Wishart`) distribution:
$$
\mathcal N(\mathbf \mu|\mathbf \mu_0, (\beta\mathbf\Lambda)^{-1})
\mathcal W(\mathbf\Lambda|\mathbf W, \nu)
$$
## 2.3.7 Student's t distribution
We saw above that the conjugate prior for the precision is a Gamma distribution. Now consider a univariate Gaussian $\mathcal N(x|\mu,\tau^{-1})$ with a Gamma prior ${\rm Gam}(\tau|a,b)$ on the precision. Integrating out the precision gives the marginal distribution of $x$:
$$
\begin{align}
p(x|\mu, a,b) & = \int_{0}^{\infty} \mathcal N(x|\mu,\tau^{-1}) {\rm Gam}(\tau|a,b) d\tau \\
& = \int_{0}^{\infty}
\frac{b^a e^{-b\tau} \tau^{a-1}}{\Gamma(a)}
\left(\frac{\tau}{2\pi}\right)^{1/2}
\exp\left\{-\frac{\tau}{2}(x-\mu)^2\right\}
d\tau\\
& =
\frac{b^a}{\Gamma(a)} \left(\frac{1}{2\pi}\right)^{1/2}
\left[b+\frac{(x-\mu)^2}{2}\right]^{-a-1/2} \Gamma(a+1/2)
\end{align}
$$
(The integrand in $\tau$ can be arranged into the kernel of a ${\rm Gam}(a+1/2,\; b+\frac{(x-\mu)^2}{2})$ distribution.)
Defining the parameters $\nu=2a$ and $\lambda=a/b$, we have
$$
\text{St}(x|\mu,\lambda,\nu)= \frac{\Gamma(\nu/2+1/2)}{\Gamma(\nu/2)} \left(\frac{\lambda}{\pi\nu}\right)^{1/2}
\left[1+\frac{\lambda(x-\mu)^2}{\nu}\right]^{-\nu/2-1/2}
$$
This is Student's t distribution (`Student’s t-distribution`). The parameter $\lambda$ is called the precision and $\nu$ the number of degrees of freedom.
When $\nu = 1$, the Student's t distribution reduces to the Cauchy distribution; as $\nu\to\infty$, it becomes a Gaussian with mean $\mu$ and precision $\lambda$ (ignore all terms independent of $x$, take the limit in $\nu$, and use $\lim_{x\to\infty}(1+\frac{1}{x})^x = e$).
Its probability density function is shown below:
```
from scipy.stats import t, norm
xx = np.linspace(-5, 5, 100)
dfs = [0.1, 1]
fig, ax = plt.subplots()
for df in dfs:
ax.plot(xx, t.pdf(xx, df))
ax.plot(xx, norm.pdf(xx))
ax.set_xlim(-5, 5)
ax.set_xticks([-5, 0, 5])
ax.set_ylim(0, 0.5)
ax.legend([r"$\nu=0.1$", r"$\nu=1.0$", r"$\nu\to\infty$"], fontsize="xx-large")
ax.set_title(r"$\lambda=1,\ \mu=0$", fontsize="xx-large")
plt.show()
```
Student's t distribution can thus be seen as a mixture of infinitely many Gaussians.
As the figure shows, its tails are longer than those of a Gaussian; as a result, it is more robust than the Gaussian in the presence of outliers:
```
from scipy.stats import t, norm, uniform
tt = norm.rvs(size=30)
xx = np.linspace(-5, 10, 100)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].hist(tt,
density=True,
rwidth=0.5,
color="yellow",
bins = np.linspace(-5, 10, 30))
axes[0].set_xlim(-5, 10)
axes[0].set_xticks([-5, 0, 5, 10])
axes[1].set_ylim(0, 0.7)
l, s = norm.fit(tt)
axes[0].plot(xx, norm.pdf(xx, loc=l, scale=s), 'g')
d, l, s = t.fit(tt)
axes[0].plot(xx, t.pdf(xx, df=d, loc=l, scale=s), 'r')
axes[0].legend(["MLE for Gaussian", "MLE for t"])
tt1 = np.hstack([tt, uniform.rvs(size=4, loc=8, scale=2)])
axes[1].hist(tt1,
density=True,
rwidth=0.5,
color="yellow",
bins = np.linspace(-5, 10, 30))
axes[1].set_xlim(-5, 10)
axes[1].set_ylim(0, 0.7)
axes[1].set_xticks([-5, 0, 5, 10])
l, s = norm.fit(tt1)
axes[1].plot(xx, norm.pdf(xx, loc=l, scale=s), 'g')
d, l, s = t.fit(tt1)
axes[1].plot(xx, t.pdf(xx, df=d, loc=l, scale=s), 'r')
axes[1].legend(["MLE for Gaussian", "MLE for t"])
plt.show()
```
If instead we set $\nu=2a,\ \lambda=a/b,\ \eta=\tau b /a$, we have:
$$
\mathrm{St}(x|\mu,\lambda,\nu) = \int_{0}^{\infty} \mathcal N(x~|~\mu,(\eta\lambda)^{-1}) {\rm Gam}(\eta ~|~\nu/2,\nu/2) d\eta
$$
(This also confirms that the Student's t-distribution is properly normalised: it integrates to 1 over $x$.)
Since the degrees-of-freedom parameter enters only through the Gamma factor, the representation generalises directly to the multivariate case:
$$
\mathrm{St}(\mathbf x|\mathbf{\mu,\Lambda},\nu) = \int_{0}^{\infty} \mathcal N(\mathbf x~|~{\mathbf \mu,(\eta\mathbf\Lambda)^{-1}}) {\rm Gam}(\eta ~|~\nu/2,\nu/2) d\eta
$$
In analogy with the one-dimensional result, we have:
$$
\mathrm{St}(\mathbf x|\mathbf{\mu,\Lambda},\nu) = \frac{\Gamma(\nu/2+D/2)}{\Gamma(\nu/2)} \frac{|\mathbf\Lambda|^{1/2}}{(\pi\nu)^{D/2}}
\left[1+\frac{\Delta^2}{\nu}\right]^{-\nu/2-D/2}
$$
where $\Delta^2 \mathbf{= (x -\mu)^\top\Lambda(x - \mu)}$ is the squared Mahalanobis distance.
The mean, covariance and mode of the Student's t-distribution are:
$$
\begin{align}
\mathbb E[\mathbf x] & = \mathbf \mu & {\rm if} && \nu > 1\\
{\rm cov}[\mathbf x] & = \frac{\nu}{\nu-2} \mathbf\Lambda^{-1} & {\rm if} && \nu > 2 \\
{\rm mode}[\mathbf x] & = \mathbf \mu \\
\end{align}
$$
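The scale-mixture identity above can be verified numerically in one dimension. A sketch with illustrative parameter values (note SciPy parameterises the t-distribution by a scale $\sigma=\lambda^{-1/2}$ and the Gamma by scale $=1/\text{rate}$):

```python
import numpy as np
from scipy import integrate
from scipy.stats import gamma, norm, t

# Illustrative parameter values.
mu, lam, nu, x = 0.5, 2.0, 5.0, 1.3

def integrand(eta):
    # N(x | mu, (eta * lam)^{-1}) * Gam(eta | nu/2, nu/2)
    return (norm.pdf(x, loc=mu, scale=1.0 / np.sqrt(eta * lam))
            * gamma.pdf(eta, a=nu / 2, scale=2.0 / nu))

mixture_pdf, _ = integrate.quad(integrand, 0, np.inf)
# scipy's t uses scale sigma = lambda^{-1/2}
direct_pdf = t.pdf(x, df=nu, loc=mu, scale=1.0 / np.sqrt(lam))
print(mixture_pdf, direct_pdf)
```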
## 2.3.8 Periodic Variables and the von Mises Distribution
The Gaussian distribution does not handle periodic variables well.
One remedy is to exploit the periodicity of the polar angle $0\leq\theta<2\pi$ and represent a periodic variable by the angle $\theta$.
In polar coordinates we must deal with the following problem: the angles $1^\circ$ and $359^\circ$ are very close together, yet choosing $0^\circ$ as the origin gives a mean and standard deviation of $180^\circ$ and $179^\circ$, while choosing $180^\circ$ as the origin gives $0^\circ$ and $1^\circ$. We therefore need a treatment that avoids this dependence.
Consider finding the mean of a set of periodic observations $\mathcal D=\{\theta_1,\dots, \theta_N\}$. As just seen, the result of the naive calculation depends on the choice of coordinate origin.
To obtain a mean that is invariant to the origin, we regard the observations as points on the unit circle, i.e. we map each angle $\theta$ to a point $\mathbf x$:
```
xx = np.linspace(0, 2 * np.pi, 100)
_, ax = plt.subplots(figsize=(7, 7))
ax.plot(np.cos(xx), np.sin(xx), 'r')
ax.text(1, -0.2, "$x_1$", fontsize="xx-large")
ax.text(1, 0.2, "$x_2$", fontsize="xx-large")
ax.text(0.2, 1, "$x_3$", fontsize="xx-large")
ax.text(-0.4, 1, "$x_4$", fontsize="xx-large")
ax.axis("equal")
ax.set_xlim(-1.2, 1.2)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
ax.set_xticks([])
ax.set_yticks([])
ax.plot([0, 0.4], [0, 0.6])
ax.text(0.12, 0.3, r'$\bar r$', fontsize="xx-large")
ax.text(0.42, 0.55, r'$\bar x$', fontsize="xx-large")
ax.text(0.35, 0.2, r'$\bar{\theta}$', fontsize="xx-large")
tt = np.linspace(0.2, np.sqrt(0.2 ** 2 + 0.3 ** 2))
ax.fill_between([0, 0.2], [0, 0.3], color="c")
ax.fill_between(tt, np.sqrt(0.2 ** 2 + 0.3 ** 2 - tt ** 2 + 0.001), color="c")
plt.show()
```
Rather than averaging the angles $\theta$ directly, we average the points $\mathbf x$:
$$
\mathbf{\bar x} = \frac{1}{N} \sum_{n=1}^N \mathbf x_n
$$
and then read off the corresponding polar angle $\bar\theta$.
This construction guarantees that the mean $\mathbf{\bar x}$ does not change when the origin of the polar coordinates is moved. Note that in general $\mathbf{\bar x}$ lies inside the unit circle.
Writing the coordinates as $\mathbf x_n = (\cos\theta_n,\sin\theta_n)$ and $\mathbf{\bar x}=(\bar r\cos\bar\theta, \bar r\sin\bar\theta)$, we have:
$$
\bar r\cos\bar\theta = \frac{1}{N} \sum_{n=1}^N \cos\theta_n, \qquad \bar r\sin\bar\theta = \frac{1}{N} \sum_{n=1}^N \sin\theta_n
$$
Taking the ratio and inverting the tangent, we get:
$$
\bar\theta = \tan^{-1} \frac{\sum_n \sin\theta_n}{\sum_n \cos\theta_n}
$$
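The formula can be implemented directly; a small sketch showing that the result no longer depends on where the angular origin is placed (the two angles are chosen for illustration):

```python
import numpy as np

def circular_mean(theta):
    # Map angles to unit-circle points, average, and read off the angle.
    # arctan2 keeps the correct quadrant, unlike a bare inverse tangent.
    return np.arctan2(np.mean(np.sin(theta)), np.mean(np.cos(theta)))

# 1 deg and 359 deg straddle the origin: the naive average is 180 deg,
# while the circular mean correctly gives ~0 deg.
theta = np.deg2rad([1.0, 359.0])
naive_deg = np.rad2deg(np.mean(theta))
circ_deg = np.rad2deg(circular_mean(theta))
print(naive_deg, circ_deg)
```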
### The von Mises distribution
For periodic variables, consider a distribution $p(\theta)$ with period $2\pi$; it must satisfy three conditions:
$$
\begin{align}
p(\theta) & \geq 0 \\
\int_{0}^{2\pi} p(\theta)\, d\theta & =1 \\
p(\theta+2\pi) &= p(\theta)
\end{align}
$$
Following the unit-circle picture, suppose $\mathbf x=(x_1,x_2)$ follows a Gaussian $\mathcal N(\mathbf\mu, \sigma^2\mathbf I)$ with $\mathbf\mu=(\mu_1,\mu_2)$, so that:
$$
p(x_1,x_2) = \frac{1}{2\pi\sigma^2} \exp\left\{-\frac{(x_1-\mu_1)^2 + (x_2-\mu_2)^2}{2\sigma^2}\right\}
$$
Its contours are a family of circles:
```
xx = np.linspace(0, 2 * np.pi, 100)
_, ax = plt.subplots(figsize=(7, 7))
ax.plot(np.cos(xx), np.sin(xx), 'r')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
ax.set_xticks([])
ax.set_yticks([])
mu1, mu2 = 0.5, 0.8
for r in [0.25, 0.5, 0.75]:
ax.plot(r * np.cos(xx) + mu1, r * np.sin(xx) + mu2, 'b')
ax.axis('equal')
ax.set_xlim(-1.5, 2)
ax.text(1.35, 1, r"$p(\mathbf{x})$", fontsize="xx-large")
ax.text(-1,-1, r'$r=1$', fontsize="xx-large")
ax.text(1.8, -.2, "$x_1$", fontsize="xx-large")
ax.text(-.2, 1.8, "$x_2$", fontsize="xx-large")
plt.show()
```
Now change to polar coordinates:
$$
\begin{align}
x_1 = r\cos\theta, &&& x_2 = r\sin\theta \\
\mu_1 = r_0\cos\theta_0, &&& \mu_2 = r_0\sin\theta_0
\end{align}
$$
and restrict $\mathbf x$ to the unit circle, $r=1$. The exponent then becomes:
$$
-\frac{1}{2\sigma^2}\left\{(\cos\theta-r_0\cos\theta_0)^2 + (\sin\theta-r_0\sin\theta_0)^2\right\} =
-\frac{1}{2\sigma^2}\left\{1 + r_0^2 -2r_0\cos\theta\cos\theta_0 -2r_0\sin\theta\sin\theta_0\right\} =
\frac{r_0}{\sigma^2} \cos(\theta-\theta_0) + \mathrm{const}
$$
where the constant collects all terms independent of $\theta$.
Defining $m=\frac{r_0}{\sigma^2}$ yields the distribution:
$$
p(\theta|\theta_0,m) = \frac{1}{2\pi I_0(m)}\exp\{m\cos(\theta-\theta_0)\}
$$
This is the von Mises distribution, also called the `circular normal` distribution; $\theta_0$ is its mean and $m$ plays a role analogous to the precision of the Gaussian.
Its density in Cartesian and polar coordinates is shown below:
```
from scipy.stats import vonmises
tt = np.linspace(0, 2 * np.pi, 100)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].plot(tt, vonmises.pdf(tt, 5, loc=np.pi/4), 'r')
axes[0].plot(tt, vonmises.pdf(tt, 1, loc=3*np.pi/4), 'b')
axes[0].set_xlim(0, 2*np.pi)
axes[0].legend([r'$m=5,\ \theta_0=\pi/4$', r'$m=1,\ \theta_0=3\pi/4$'], fontsize="x-large", loc=0)
for ax in axes:
ax.set_xticks([])
ax.set_yticks([])
ax = fig.add_subplot(122, projection='polar')
ax.set_axis_off()
ax.plot(tt, vonmises.pdf(tt, 5, loc=np.pi/4), 'r')
ax.plot(tt, vonmises.pdf(tt, 1, loc=3*np.pi/4), 'b')
ax.legend([r'$m=5,\ \theta_0=\pi/4$', r'$m=1,\ \theta_0=3\pi/4$'], fontsize="x-large", loc=0)
ax.grid(False)
ax.plot([0, np.pi/4], [0, 1], 'k')
ax.plot([0, 0], [0, 1], 'k')
ax.plot([0, 3*np.pi/4], [0, 1], 'k')
ax.text(0.1, 0.8, r'$0$', fontsize="x-large")
ax.text(-0.2, 0.8, r'$2\pi$', fontsize="x-large")
ax.text(0.3*np.pi, 1, r'$\pi / 4$', fontsize="x-large")
ax.text(0.7*np.pi, 0.8, r'$3\pi / 4$', fontsize="x-large")
plt.show()
```
$I_0(m)$ is the normalising constant of the density: the zeroth-order modified Bessel function of the first kind,
$$
I_0(m) =\frac{1}{2\pi} \int_{0}^{2\pi} \exp\{m\cos\theta\} d\theta
$$
```
from scipy import special
_, ax = plt.subplots()
xx = np.r_[0:10:0.01]
ax.plot(xx, special.i0(xx), 'r')
ax.set_xlabel("$m$", fontsize='x-large')
ax.set_ylabel("$I_0(m)$", fontsize='x-large')
ax.set_xticks([0, 5, 10])
ax.set_yticks(np.r_[0:3001:1000])
plt.show()
```
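With $I_0$ available as `scipy.special.i0`, we can check numerically that the von Mises density integrates to one (the parameter values are illustrative):

```python
import numpy as np
from scipy import integrate, special

# Illustrative parameter values.
m, theta0 = 3.0, np.pi / 4

def von_mises_pdf(theta):
    # p(theta | theta0, m) = exp(m*cos(theta - theta0)) / (2*pi*I0(m))
    return np.exp(m * np.cos(theta - theta0)) / (2 * np.pi * special.i0(m))

total, _ = integrate.quad(von_mises_pdf, 0, 2 * np.pi)
print(total)
```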
Now consider maximum likelihood for the von Mises distribution:
$$
\ln p(\mathcal D|\theta_0,m) = -N\ln(2\pi)-N\ln I_0(m) + m\sum_{n=1}^N \cos(\theta_n-\theta_0)
$$
Setting the derivative with respect to $\theta_0$ to zero gives
$$
\sum_{n=1}^N \sin(\theta_n-\theta_0) = \cos\theta_0 \sum_{n=1}^N \sin\theta_n - \sin\theta_0 \sum_{n=1}^N \cos\theta_n = 0
$$
so that
$$
\theta_{ML} = \tan^{-1} \left\{\frac{\sum_{n} \sin \theta_n}{\sum_{n} \cos \theta_n}\right\}
$$
which agrees with the mean direction derived earlier.
Similarly, setting the derivative with respect to $m$ to zero and using $I_0'(m) = I_1(m)$ (where $I_1$ is the first-order modified Bessel function), we obtain:
$$
\frac{I_1(m)}{I_0(m)} = \frac{1}{N} \sum_{n=1}^N \cos(\theta_n-\theta_0)
$$
Define
$$
A(m) = \frac{I_1(m)}{I_0(m)}
$$
whose graph is shown below.
We then have
$$
A(m_{ML}) = \frac{1}{N} \cos\theta_0^{ML} \sum_{n=1}^N \cos\theta_n + \frac{1}{N} \sin\theta_0^{ML} \sum_{n=1}^N \sin\theta_n
$$
and $m_{ML}$ can be found numerically.
```
_, ax = plt.subplots()
xx = np.r_[0:10:0.01]
ax.plot(xx, special.i1(xx) / special.i0(xx), 'r')
ax.set_xlabel("$m$", fontsize='x-large')
ax.set_ylabel("$A(m)$", fontsize='x-large')
ax.set_xticks(np.r_[0:11:5])
ax.set_yticks(np.r_[0:1.1:0.5])
plt.show()
```
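Putting the two stationarity conditions together gives a complete numerical fitting procedure. A sketch on synthetic data (the true values $\theta_0=\pi/4$ and $m=5$ are chosen for illustration): $\theta_{ML}$ comes from the arctangent formula, and $m_{ML}$ from solving $A(m)=\frac{1}{N}\sum_n\cos(\theta_n-\theta_{ML})$ with a root finder.

```python
import numpy as np
from scipy import optimize, special
from scipy.stats import vonmises

# Synthetic data from a von Mises distribution with theta0 = pi/4, m = 5.
samples = vonmises.rvs(5.0, loc=np.pi / 4, size=5000, random_state=0)

# theta_ML from the sums of sines and cosines.
theta_ml = np.arctan2(np.sum(np.sin(samples)), np.sum(np.cos(samples)))

# m_ML solves A(m) = mean of cos(theta_n - theta_ML); solve numerically.
r_bar = np.mean(np.cos(samples - theta_ml))
m_ml = optimize.brentq(lambda m: special.i1(m) / special.i0(m) - r_bar,
                       1e-6, 100.0)
print(theta_ml, m_ml)
```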
## 2.3.9 Mixtures of Gaussians
As noted earlier, the Gaussian is unimodal and does not model multimodal data well.
```
from scipy.stats import multivariate_normal
sz = 100
tt1 = multivariate_normal.rvs(cov=np.array([[2,1], [1,2]]),
size=sz)
x1, y1 = tt1[:,0], tt1[:,1]
tt2 = multivariate_normal.rvs(mean = np.array([8, 8]),
cov=np.array([[1,0.5], [0.5,2]]),
size=sz)
x2, y2 = tt2[:,0], tt2[:,1]
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
m1 = np.mean(tt1, axis=0)
c1 = np.dot((tt1 - m1).T, (tt1 - m1)) / sz
m2 = np.mean(tt2, axis=0)
c2 = np.dot((tt2 - m2).T, (tt2 - m2)) / sz
tt = np.vstack([tt1, tt2])
m = np.mean(tt, axis=0)
c = np.dot((tt - m).T, (tt - m)) / sz / 2
x, y = np.mgrid[-5:12:0.1, -5:12:0.1]
pos = np.dstack([x, y])
# matplotlib's bivariate_normal helper has been removed; evaluate the
# densities on the grid with scipy's multivariate_normal instead.
z1 = multivariate_normal.pdf(pos, mean=m1, cov=c1)
z2 = multivariate_normal.pdf(pos, mean=m2, cov=c2)
z = multivariate_normal.pdf(pos, mean=m, cov=c)
axes[0].contour(x, y, z, 4, colors='b')
axes[1].contour(x, y, z1, 4, colors='b')
axes[1].contour(x, y, z2, 4, colors='b')
for ax in axes:
ax.scatter(x1, y1, color='lime')
ax.scatter(x2, y2, color='lime')
plt.show()
```
We can instead capture multimodality with a mixture of several Gaussians.
Even a simple linear combination of Gaussians can produce very complex densities. In the figure below, the red curve is the density of a mixture of three Gaussians and the blue curves are the three components scaled by their mixing proportions:
```
xx = np.linspace(-20, 20, 200)
y1 = norm.pdf(xx, loc=0, scale=4)
y2 = norm.pdf(xx, loc=10, scale=1.5)
y3 = norm.pdf(xx, loc=-8, scale=2)
fig, ax = plt.subplots()
pis = [0.4, 0.3, 0.3]  # mixing proportions, chosen for illustration
ax.plot(xx, pis[0] * y1, 'b')
ax.plot(xx, pis[1] * y2, 'b')
ax.plot(xx, pis[2] * y3, 'b')
ax.plot(xx, pis[0] * y1 + pis[1] * y2 + pis[2] * y3, 'r')
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel("$x$", fontsize="xx-large")
ax.set_ylabel("$p(x)$", fontsize="xx-large")
plt.show()
```
Consider a linear combination of $K$ Gaussians, known as a mixture of Gaussians (`mixture of Gaussians`):
$$
p(\mathbf x) = \sum_{k=1}^K \pi_k \mathcal N(\mathbf x|\mathbf \mu_k, \mathbf \Sigma_k)
$$
where each $\mathcal N(\mathbf x|\mathbf \mu_k, \mathbf \Sigma_k)$ is a Gaussian density, a component of the mixture (`component of the mixture`).
Normalisation, $\int p(\mathbf x) d\mathbf x = 1$, requires:
$$
\sum_{k=1}^K \pi_k = 1
$$
while non-negativity, $p(\mathbf x) \geq 0$, requires:
$$
\forall k, \pi_k \geq 0
$$
so altogether:
$$
0 \leq \pi_k \leq 1
$$
On the other hand, we can write:
$$
p(\mathbf x) = \sum_{k=1}^K p(k) p(\mathbf x|k)
$$
so $\pi_k = p(k)$ can be viewed as the prior probability of the $k$-th component, and $p(\mathbf x|k)$ as the conditional density of $\mathbf x$ given $k$.
The posterior probability of component $k$, called the responsibility, is then:
$$
\begin{align}
\gamma_k(\mathbf x) & = p(k|\mathbf x) \\
& = \frac{p(k) p(\mathbf x|k)}{\sum_{l=1}^K p(l) p(\mathbf x|l)} \\
& = \frac{\pi_k \mathcal N(\mathbf x|\mathbf \mu_k, \mathbf \Sigma_k) }{\sum_{l=1}^K \pi_l \mathcal N(\mathbf x|\mathbf \mu_l, \mathbf \Sigma_l) }
\end{align}
$$
The parameters are $\bf \pi, \mu, \Sigma$, where $\mathbf \pi = \{\pi_1, \dots, \pi_K\}, \mathbf \mu = \{\mathbf\mu_1,\dots,\mathbf\mu_K\}, \mathbf\Sigma=\{\mathbf\Sigma_1,\dots,\mathbf\Sigma_K\}$.
For data $\mathbf X=\{\mathbf x_1,\dots,\mathbf x_N\}$, the log-likelihood is
$$
\ln p(\mathbf{X|\pi, \mu, \Sigma}) = \sum_{n=1}^N \ln \left\{ \sum_{k=1}^K \pi_k \mathcal N(\mathbf x_n|\mathbf \mu_k, \mathbf \Sigma_k) \right\}
$$
The maximum likelihood solution is complicated and has no closed form; it can be obtained numerically or with the EM algorithm.
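The responsibilities $\gamma_k(\mathbf x)$ are easy to compute once the parameters are fixed. A one-dimensional sketch with two components (the parameter values are made up for illustration):

```python
import numpy as np
from scipy.stats import norm

# Made-up parameters for a two-component 1-D mixture.
pis = np.array([0.3, 0.7])
mus = np.array([-2.0, 3.0])
sigmas = np.array([1.0, 1.5])

def mixture_pdf(x):
    # p(x) = sum_k pi_k N(x | mu_k, sigma_k^2)
    return float(np.sum(pis * norm.pdf(x, loc=mus, scale=sigmas)))

def responsibilities(x):
    # gamma_k(x) = pi_k N(x|mu_k, sigma_k^2) / sum_l pi_l N(x|mu_l, sigma_l^2)
    weighted = pis * norm.pdf(x, loc=mus, scale=sigmas)
    return weighted / weighted.sum()

gamma = responsibilities(-2.0)
print(gamma, mixture_pdf(-2.0))
```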
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# **End to End Workflow with ML Pipeline Generator**
<table align="left">
<td>
<a href="https://colab.sandbox.google.com/github/GoogleCloudPlatform/ml-pipeline-generator-python/blob/master/examples/getting_started_notebook.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ml-pipeline-generator-python/blob/master/examples/getting_started_notebook.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
## **Overview**
ML Pipeline Generator simplifies model building, training and deployment by generating the required training and deployment modules for your model. Using this tool, users with locally running scripts and notebooks can get started with AI Platform and Kubeflow Pipelines in a few steps, and will have the boilerplate code needed to customize their deployments and pipelines further.
This demo shows you how to train and deploy Machine Learning models on a sample dataset. The demo is divided into two parts:
* Preparing an SVM classifier for training on Cloud AI platform
* Orchestrating the training of a Tensorflow model on Kubeflow Pipelines
### **Dataset**
This tutorial uses the [United States Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/census+income) provided by the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php) containing information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. The dataset consists of over 30k rows, where each row corresponds to a different person. For a given row, there are 14 features that the model conditions on to predict the income of the person. A few of the features are named above, and the exhaustive list can be found at the dataset link above.
## **Set up your local development environment**
**If you are using Colab or AI Platform Notebooks**, your environment already meets
all the requirements to run this notebook. If you are using **AI Platform Notebook**, make sure the machine configuration type is **1 vCPU, 3.75 GB RAM** or above. You can skip this step.
**Otherwise**, make sure your environment meets this notebook's requirements.
You need the following:
* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to [Setting up a Python development
environment](https://cloud.google.com/python/setup) and the [Jupyter
installation guide](https://jupyter.org/install) provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
2. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
3. [Install
virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)
and create a virtual environment that uses Python 3.
4. Activate that environment and run `pip install jupyter` in a shell to install
Jupyter.
5. Run `jupyter notebook` in a shell to launch Jupyter.
6. Open this notebook in the Jupyter Notebook Dashboard.
## **Set up your GCP project**
**If you do not have a GCP project then the following steps are required, regardless of your notebook environment.**
1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Create a GCP bucket](https://cloud.google.com/storage/docs/creating-buckets) so that we can store files.
## **PIP install packages and dependencies**
Install additional dependencies not installed in the notebook environment.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
# Use the latest major GA version of the framework.
! pip install --upgrade ml-pipeline-gen PyYAML
```
**Note:** If the above command throws any permission errors, try installing using `sudo`.
Restart the kernel to allow the newly installed package to be imported in Jupyter notebooks.
## **Authenticate your GCP account**
**If you are using AI Platform Notebooks**, your environment is already
authenticated. Skip this step.
Only if you are on a local Jupyter notebook or in a Colab environment, follow these steps:
1. [**Create a New Service Account**](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating_service_account_keys).
2. Add the following roles:
**Compute Engine > Compute Admin**, **ML Engine > ML Engine Admin** and **Storage > Storage Object Admin**.
3. Download a JSON file that contains your key; it will be stored in your
local environment.
```
# If you are on Colab, run this cell and upload your service account's
# json key.
import os
import sys
if 'google.colab' in sys.modules:
from google.colab import files
keyfile_upload = files.upload()
keyfile = list(keyfile_upload.keys())[0]
keyfile_path = os.path.abspath(keyfile)
%env GOOGLE_APPLICATION_CREDENTIALS $keyfile_path
! gcloud auth activate-service-account --key-file $keyfile_path
# If you are running this notebook locally, replace the string below
# with the path to your service account key and run this cell
# to authenticate your GCP account.
%env GOOGLE_APPLICATION_CREDENTIALS /path/to/service/account
! gcloud auth activate-service-account --key-file '/path/to/service/account'
```
## **Before You Begin**
The tool requires following Google Cloud APIs to be enabled:
* [Google Cloud Storage](https://cloud.google.com/storage)
* [Cloud AI Platform](https://cloud.google.com/ai-platform)
* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine)
Add your Project ID below, you can change the region below if you would like, but it is not a requirement.
```
PROJECT_ID = "[PROJECT-ID]" #@param {type:"string"}
COMPUTE_REGION = "us-central1" # Currently only supported region.
```
Also add your bucket name:
```
BUCKET_NAME = "[BUCKET-ID]" #@param {type:"string"}
!gcloud config set project {PROJECT_ID}
```
The tool requires following Google Cloud APIs to be enabled:
```
!gcloud services enable ml.googleapis.com \
compute.googleapis.com \
storage-component.googleapis.com
```
## **Create a model locally**
In this section we create a model locally, as many users already have. This illustrates the on-prem way of building models; in the next section we show how to train them on GCP so you can leverage the benefits of the cloud, such as easy distributed training, parallel hyperparameter tuning, and fast, up-to-date accelerators.
The next block of code shows how we preprocess the census data. A full walkthrough of the code is out of scope for this notebook. All that matters is that the function `load_data` returns four values, in order: the training features, the training labels, the evaluation features, and the evaluation labels (it also uploads the data to GCS). Run the hidden cell below.
```
#@title
# python3
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Preprocessing utilities for the Census Income dataset.
This example comes from the cloudml-samples keras demo.
github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tf-keras
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow.compat.v1 as tf
DATA_DIR = os.path.join(tempfile.gettempdir(), "census_data")
DATA_URL = ("https://storage.googleapis.com/cloud-samples-data/ai-platform"
+ "/census/data/")
TRAINING_FILE = "adult.data.csv"
EVAL_FILE = "adult.test.csv"
TRAINING_URL = os.path.join(DATA_URL, TRAINING_FILE)
EVAL_URL = os.path.join(DATA_URL, EVAL_FILE)
_CSV_COLUMNS = [
"age", "workclass", "fnlwgt", "education", "education_num",
"marital_status", "occupation", "relationship", "race", "gender",
"capital_gain", "capital_loss", "hours_per_week", "native_country",
"income_bracket",
]
_LABEL_COLUMN = "income_bracket"
UNUSED_COLUMNS = ["fnlwgt", "education", "gender"]
_CATEGORICAL_TYPES = {
"workclass": pd.api.types.CategoricalDtype(categories=[
"Federal-gov", "Local-gov", "Never-worked", "Private", "Self-emp-inc",
"Self-emp-not-inc", "State-gov", "Without-pay"
]),
"marital_status": pd.api.types.CategoricalDtype(categories=[
"Divorced", "Married-AF-spouse", "Married-civ-spouse",
"Married-spouse-absent", "Never-married", "Separated", "Widowed"
]),
"occupation": pd.api.types.CategoricalDtype([
"Adm-clerical", "Armed-Forces", "Craft-repair", "Exec-managerial",
"Farming-fishing", "Handlers-cleaners", "Machine-op-inspct",
"Other-service", "Priv-house-serv", "Prof-specialty", "Protective-serv",
"Sales", "Tech-support", "Transport-moving"
]),
"relationship": pd.api.types.CategoricalDtype(categories=[
"Husband", "Not-in-family", "Other-relative", "Own-child", "Unmarried",
"Wife"
]),
"race": pd.api.types.CategoricalDtype(categories=[
"Amer-Indian-Eskimo", "Asian-Pac-Islander", "Black", "Other", "White"
]),
"native_country": pd.api.types.CategoricalDtype(categories=[
"Cambodia", "Canada", "China", "Columbia", "Cuba", "Dominican-Republic",
"Ecuador", "El-Salvador", "England", "France", "Germany", "Greece",
"Guatemala", "Haiti", "Holand-Netherlands", "Honduras", "Hong",
"Hungary", "India", "Iran", "Ireland", "Italy", "Jamaica", "Japan",
"Laos", "Mexico", "Nicaragua", "Outlying-US(Guam-USVI-etc)", "Peru",
"Philippines", "Poland", "Portugal", "Puerto-Rico", "Scotland", "South",
"Taiwan", "Thailand", "Trinadad&Tobago", "United-States", "Vietnam",
"Yugoslavia"
]),
"income_bracket": pd.api.types.CategoricalDtype(categories=[
"<=50K", ">50K"
])
}
def _download_and_clean_file(filename, url):
"""Downloads data from url, and makes changes to match the CSV format.
The CSVs may use spaces after the comma delimters (non-standard) or include
rows which do not represent well-formed examples. This function strips out
some of these problems.
Args:
filename: filename to save url to
url: URL of resource to download
"""
temp_file, _ = urllib.request.urlretrieve(url)
with tf.io.gfile.GFile(temp_file, "r") as temp_file_object:
with tf.io.gfile.GFile(filename, "w") as file_object:
for line in temp_file_object:
line = line.strip()
line = line.replace(", ", ",")
if not line or "," not in line:
continue
if line[-1] == ".":
line = line[:-1]
line += "\n"
file_object.write(line)
tf.io.gfile.remove(temp_file)
def download(data_dir):
"""Downloads census data if it is not already present.
Args:
data_dir: directory where we will access/save the census data
Returns:
A tuple (training_file_path, eval_file_path) of paths to the local files.
"""
tf.io.gfile.makedirs(data_dir)
training_file_path = os.path.join(data_dir, TRAINING_FILE)
if not tf.io.gfile.exists(training_file_path):
_download_and_clean_file(training_file_path, TRAINING_URL)
eval_file_path = os.path.join(data_dir, EVAL_FILE)
if not tf.io.gfile.exists(eval_file_path):
_download_and_clean_file(eval_file_path, EVAL_URL)
return training_file_path, eval_file_path
def upload(train_df, eval_df, train_path, eval_path):
train_df.to_csv(os.path.join(os.path.dirname(train_path), TRAINING_FILE),
index=False, header=False)
eval_df.to_csv(os.path.join(os.path.dirname(eval_path), EVAL_FILE),
index=False, header=False)
def preprocess(dataframe):
"""Converts categorical features to numeric. Removes unused columns.
Args:
dataframe: Pandas dataframe with raw data
Returns:
Dataframe with preprocessed data
"""
dataframe = dataframe.drop(columns=UNUSED_COLUMNS)
# Convert integer valued (numeric) columns to floating point
numeric_columns = dataframe.select_dtypes(["int64"]).columns
dataframe[numeric_columns] = dataframe[numeric_columns].astype("float32")
# Convert categorical columns to numeric
cat_columns = dataframe.select_dtypes(["object"]).columns
dataframe[cat_columns] = dataframe[cat_columns].apply(
lambda x: x.astype(_CATEGORICAL_TYPES[x.name]))
dataframe[cat_columns] = dataframe[cat_columns].apply(
lambda x: x.cat.codes)
return dataframe
def standardize(dataframe):
"""Scales numerical columns using their means and standard deviation.
Args:
dataframe: Pandas dataframe
Returns:
Input dataframe with the numerical columns scaled to z-scores
"""
dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes)))
for column, dtype in dtypes:
if dtype == "float32":
dataframe[column] -= dataframe[column].mean()
dataframe[column] /= dataframe[column].std()
return dataframe
def load_data(train_path="", eval_path=""):
"""Loads data into preprocessed (train_x, train_y, eval_x, eval_y) dataframes.
Args:
train_path: Local or GCS path to upload train data to.
eval_path: Local or GCS path to upload eval data to.
Returns:
A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are
Pandas dataframes with features for training and train_y and eval_y are
numpy arrays with the corresponding labels.
"""
# Download Census dataset: Training and eval csv files.
training_file_path, eval_file_path = download(DATA_DIR)
train_df = pd.read_csv(
training_file_path, names=_CSV_COLUMNS, na_values="?")
eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values="?")
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
# Split train and eval data with labels. The pop method copies and removes
# the label column from the dataframe.
train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN)
eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN)
# Join train_x and eval_x to normalize on overall means and standard
# deviations. Then separate them again.
all_x = pd.concat([train_x, eval_x], keys=["train", "eval"])
all_x = standardize(all_x)
train_x, eval_x = all_x.xs("train"), all_x.xs("eval")
# Rejoin features and labels and upload to GCS.
if train_path and eval_path:
train_df = train_x.copy()
train_df[_LABEL_COLUMN] = train_y
eval_df = eval_x.copy()
eval_df[_LABEL_COLUMN] = eval_y
upload(train_df, eval_df, train_path, eval_path)
# Reshape label columns for use with tf.data.Dataset
train_y = np.asarray(train_y).astype("float32").reshape((-1, 1))
eval_y = np.asarray(eval_y).astype("float32").reshape((-1, 1))
return train_x, train_y, eval_x, eval_y
```
Now we train a scikit-learn SVM model on this data.
```
from sklearn import svm
train_x, train_y, eval_x, eval_y = load_data()
train_y, eval_y = [np.ravel(x) for x in [train_y, eval_y]]
classifier = svm.SVC(C=1)
classifier.fit(train_x, train_y)
score = classifier.score(eval_x, eval_y)
print('Accuracy is {}'.format(score))
```
Real pipelines usually involve more complexity, such as hyperparameter tuning. In the end, however, we have a single best model that we want to serve in production.
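As a sketch of what such a tuning step can look like, here is a cross-validated grid search over the SVM's `C` parameter. It runs on synthetic stand-in data (not the census set) so the cell is self-contained:

```python
import numpy as np
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the census features, so the cell is self-contained.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Cross-validated grid search over the SVM regularisation parameter C.
search = GridSearchCV(svm.SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```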
## Preparing an SVM classifier for training on Cloud AI platform
We now have a model we believe is good, and we want to move it onto GCP while adding features such as training and prediction entry points so that future runs are simple.
We can leverage the examples in the ML Pipeline Generator repository, which provide good templates to follow. First, clone the GitHub repo.
```
!git clone https://github.com/GoogleCloudPlatform/ml-pipeline-generator-python.git
```
Then we copy the sklearn example to the current directory and go into this folder.
```
!cp -r ml-pipeline-generator-python/examples/sklearn sklearn-demo
%cd sklearn-demo
```
We now modify the `config.yaml.example` file with our project ID, bucket ID and model name. Note that the training and evaluation data files should already be stored in your bucket, unless you decided to handle that upload in your preprocessing function (as in this lab).
```
%%writefile config.yaml
# Copyright 2020 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Config file for ML Pipeline Generator.
project_id: [PROJECT ID]
bucket_id: [BUCKET ID]
region: "us-central1"
scale_tier: "STANDARD_1"
runtime_version: "1.15"
python_version: "3.7"
package_name: "ml_pipeline_gen"
machine_type_pred: "mls1-c4-m2"
data:
schema:
- "age"
- "workclass"
- "education_num"
- "marital_status"
- "occupation"
- "relationship"
- "race"
- "capital_gain"
- "capital_loss"
- "hours_per_week"
- "native_country"
- "income_bracket"
train: "gs://[BUCKET ID]/[MODEL NAME]/data/adult.data.csv"
evaluation: "gs://[BUCKET ID]/[MODEL NAME]/data/adult.test.csv"
prediction:
input_data_paths:
- "gs://[BUCKET ID]/[MODEL NAME]/inputs/*"
input_format: "JSON"
output_format: "JSON"
model:
# Name must start with a letter and only contain letters, numbers, and
# underscores.
name: [MODEL NAME]
path: "model.sklearn_model"
target: "income_bracket"
model_params:
input_args:
C:
type: "float"
help: "Regularization parameter, must be positive."
default: 1.0
# Relative path.
hyperparam_config: "hptuning_config.yaml"
```
We now copy our earlier preprocessing code into the file `census_preprocess.py`. Run the hidden cell below.
```
#@title
%%writefile model/census_preprocess.py
# python3
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Preprocessing utilities for the Census Income dataset.
This example comes from the cloudml-samples keras demo.
github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tf-keras
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow.compat.v1 as tf
DATA_DIR = os.path.join(tempfile.gettempdir(), "census_data")
DATA_URL = ("https://storage.googleapis.com/cloud-samples-data/ai-platform"
+ "/census/data/")
TRAINING_FILE = "adult.data.csv"
EVAL_FILE = "adult.test.csv"
TRAINING_URL = os.path.join(DATA_URL, TRAINING_FILE)
EVAL_URL = os.path.join(DATA_URL, EVAL_FILE)
_CSV_COLUMNS = [
"age", "workclass", "fnlwgt", "education", "education_num",
"marital_status", "occupation", "relationship", "race", "gender",
"capital_gain", "capital_loss", "hours_per_week", "native_country",
"income_bracket",
]
_LABEL_COLUMN = "income_bracket"
UNUSED_COLUMNS = ["fnlwgt", "education", "gender"]
_CATEGORICAL_TYPES = {
"workclass": pd.api.types.CategoricalDtype(categories=[
"Federal-gov", "Local-gov", "Never-worked", "Private", "Self-emp-inc",
"Self-emp-not-inc", "State-gov", "Without-pay"
]),
"marital_status": pd.api.types.CategoricalDtype(categories=[
"Divorced", "Married-AF-spouse", "Married-civ-spouse",
"Married-spouse-absent", "Never-married", "Separated", "Widowed"
]),
"occupation": pd.api.types.CategoricalDtype([
"Adm-clerical", "Armed-Forces", "Craft-repair", "Exec-managerial",
"Farming-fishing", "Handlers-cleaners", "Machine-op-inspct",
"Other-service", "Priv-house-serv", "Prof-specialty", "Protective-serv",
"Sales", "Tech-support", "Transport-moving"
]),
"relationship": pd.api.types.CategoricalDtype(categories=[
"Husband", "Not-in-family", "Other-relative", "Own-child", "Unmarried",
"Wife"
]),
"race": pd.api.types.CategoricalDtype(categories=[
"Amer-Indian-Eskimo", "Asian-Pac-Islander", "Black", "Other", "White"
]),
"native_country": pd.api.types.CategoricalDtype(categories=[
"Cambodia", "Canada", "China", "Columbia", "Cuba", "Dominican-Republic",
"Ecuador", "El-Salvador", "England", "France", "Germany", "Greece",
"Guatemala", "Haiti", "Holand-Netherlands", "Honduras", "Hong",
"Hungary", "India", "Iran", "Ireland", "Italy", "Jamaica", "Japan",
"Laos", "Mexico", "Nicaragua", "Outlying-US(Guam-USVI-etc)", "Peru",
"Philippines", "Poland", "Portugal", "Puerto-Rico", "Scotland", "South",
"Taiwan", "Thailand", "Trinadad&Tobago", "United-States", "Vietnam",
"Yugoslavia"
]),
"income_bracket": pd.api.types.CategoricalDtype(categories=[
"<=50K", ">50K"
])
}
def _download_and_clean_file(filename, url):
"""Downloads data from url, and makes changes to match the CSV format.
The CSVs may use spaces after the comma delimiters (non-standard) or include
rows which do not represent well-formed examples. This function strips out
some of these problems.
Args:
filename: filename to save url to
url: URL of resource to download
"""
temp_file, _ = urllib.request.urlretrieve(url)
with tf.io.gfile.GFile(temp_file, "r") as temp_file_object:
with tf.io.gfile.GFile(filename, "w") as file_object:
for line in temp_file_object:
line = line.strip()
line = line.replace(", ", ",")
if not line or "," not in line:
continue
if line[-1] == ".":
line = line[:-1]
line += "\n"
file_object.write(line)
tf.io.gfile.remove(temp_file)
def download(data_dir):
"""Downloads census data if it is not already present.
Args:
data_dir: directory where we will access/save the census data
Returns:
A tuple (training_file_path, eval_file_path) of local paths to the train and eval CSV files.
"""
tf.io.gfile.makedirs(data_dir)
training_file_path = os.path.join(data_dir, TRAINING_FILE)
if not tf.io.gfile.exists(training_file_path):
_download_and_clean_file(training_file_path, TRAINING_URL)
eval_file_path = os.path.join(data_dir, EVAL_FILE)
if not tf.io.gfile.exists(eval_file_path):
_download_and_clean_file(eval_file_path, EVAL_URL)
return training_file_path, eval_file_path
def upload(train_df, eval_df, train_path, eval_path):
train_df.to_csv(os.path.join(os.path.dirname(train_path), TRAINING_FILE),
index=False, header=False)
eval_df.to_csv(os.path.join(os.path.dirname(eval_path), EVAL_FILE),
index=False, header=False)
def preprocess(dataframe):
"""Converts categorical features to numeric. Removes unused columns.
Args:
dataframe: Pandas dataframe with raw data
Returns:
Dataframe with preprocessed data
"""
dataframe = dataframe.drop(columns=UNUSED_COLUMNS)
# Convert integer valued (numeric) columns to floating point
numeric_columns = dataframe.select_dtypes(["int64"]).columns
dataframe[numeric_columns] = dataframe[numeric_columns].astype("float32")
# Convert categorical columns to numeric
cat_columns = dataframe.select_dtypes(["object"]).columns
dataframe[cat_columns] = dataframe[cat_columns].apply(
lambda x: x.astype(_CATEGORICAL_TYPES[x.name]))
dataframe[cat_columns] = dataframe[cat_columns].apply(
lambda x: x.cat.codes)
return dataframe
def standardize(dataframe):
"""Scales numerical columns using their means and standard deviation.
Args:
dataframe: Pandas dataframe
Returns:
Input dataframe with the numerical columns scaled to z-scores
"""
dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes)))
for column, dtype in dtypes:
if dtype == "float32":
dataframe[column] -= dataframe[column].mean()
dataframe[column] /= dataframe[column].std()
return dataframe
def load_data(train_path="", eval_path=""):
"""Loads data into preprocessed (train_x, train_y, eval_y, eval_y) dataframes.
Args:
train_path: Local or GCS path to uploaded train data to.
eval_path: Local or GCS path to uploaded eval data to.
Returns:
A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are
Pandas dataframes with features for training and train_y and eval_y are
numpy arrays with the corresponding labels.
"""
# Download Census dataset: Training and eval csv files.
training_file_path, eval_file_path = download(DATA_DIR)
train_df = pd.read_csv(
training_file_path, names=_CSV_COLUMNS, na_values="?")
eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values="?")
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
# Split train and eval data with labels. The pop method copies and removes
# the label column from the dataframe.
train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN)
eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN)
# Join train_x and eval_x to normalize on overall means and standard
# deviations. Then separate them again.
all_x = pd.concat([train_x, eval_x], keys=["train", "eval"])
all_x = standardize(all_x)
train_x, eval_x = all_x.xs("train"), all_x.xs("eval")
# Rejoin features and labels and upload to GCS.
if train_path and eval_path:
train_df = train_x.copy()
train_df[_LABEL_COLUMN] = train_y
eval_df = eval_x.copy()
eval_df[_LABEL_COLUMN] = eval_y
upload(train_df, eval_df, train_path, eval_path)
# Reshape label columns for use with tf.data.Dataset
train_y = np.asarray(train_y).astype("float32").reshape((-1, 1))
eval_y = np.asarray(eval_y).astype("float32").reshape((-1, 1))
return train_x, train_y, eval_x, eval_y
```
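The `standardize` helper above scales every float column to zero mean and unit standard deviation using `mean()` and `std()`. A quick illustration of the same transform on a toy dataframe (the column name here is made up for the example; note pandas' `std()` uses the sample standard deviation, ddof=1):

```python
import pandas as pd

# Toy single-column dataframe standing in for a numeric census feature
df = pd.DataFrame({"age": [20.0, 30.0, 40.0]})

# Same z-score computed column-wise by standardize()
z = (df["age"] - df["age"].mean()) / df["age"].std()
print(z.tolist())  # [-1.0, 0.0, 1.0]
```

Because train and eval are concatenated before this step in `load_data`, both splits are scaled with the same means and deviations.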
We perform a similar copy and paste into the `sklearn_model.py` file, adding a parameter `C` which we will use for hyperparameter tuning. You can add as many hyperparameters as you require.
```
%%writefile model/sklearn_model.py
# python3
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Train a simple SVM classifier."""
import argparse
import numpy as np
from sklearn import svm
from model.census_preprocess import load_data
def get_model(params):
"""Trains a classifier."""
classifier = svm.SVC(C=params.C)
return classifier
```
We now specify the hyperparameters for our training runs based on the [hyperparameter tuning yaml format](https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning) for CAIP.
```
%%writefile hptuning_config.yaml
# Copyright 2020 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
goal: MAXIMIZE
maxTrials: 2
maxParallelTrials: 2
hyperparameterMetricTag: score
enableTrialEarlyStopping: TRUE
params:
- parameterName: C
type: DOUBLE
minValue: .001
maxValue: 10
scaleType: UNIT_LOG_SCALE
```
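The `UNIT_LOG_SCALE` setting tells the tuning service to explore the range logarithmically, which suits a regularization parameter like `C` that spans several orders of magnitude. Roughly, a unit sample `u` in [0, 1] is mapped onto [minValue, maxValue] like this (a sketch of the idea, not the service's exact implementation):

```python
import math

def unit_log_scale(u, lo=0.001, hi=10.0):
    """Map u in [0, 1] onto [lo, hi] with even coverage per decade."""
    return math.exp(math.log(lo) + u * (math.log(hi) - math.log(lo)))

print(unit_log_scale(0.0))  # lower bound, ~0.001
print(unit_log_scale(0.5))  # geometric midpoint, ~0.1
print(unit_log_scale(1.0))  # upper bound, ~10.0
```

With a linear scale, half the trials would land above 5; on a log scale the tuner spends as much effort between 0.001 and 0.1 as between 0.1 and 10.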
## Run the Sklearn Model on CAIP
We only modified two yaml files and the `demo.py` file to specify training, hyperparameter tuning, and model prediction, then simply copied and pasted our existing code for preprocessing and building the model. We did not have to write any GCP-specific code; that is all handled by this solution. Now we can submit our jobs to the cloud with a few commands.
```
from ml_pipeline_gen.models import SklearnModel
from model.census_preprocess import load_data
```
Specify the path of your `config.yaml` file
```
config = "config.yaml"
```
Now we can create our model, generate all the necessary Cloud AI Platform files, upload the data files, and train the model in four simple commands. Note that our `load_data` function uploads the files automatically; you can also upload them manually to the buckets specified in your `config.yaml` file.
```
model = SklearnModel(config)
model.generate_files()
# this fn is from our preprocessing file and
# automatically uploads our data to GCS
load_data(model.data["train"], model.data["evaluation"])
job_id = model.train(tune=True)
```
After training, we would like to test our model's predictions. First, deploy the model (our code automatically returns a generated version), then request online predictions.
```
pred_input = [
[0.02599666, 6, 1.1365801, 4, 0, 1, 4, 0.14693314, -0.21713187,
-0.034039237, 38],
]
version = model.deploy(job_id=job_id)
preds = model.online_predict(pred_input, version=version)
print("Features: {}".format(pred_input))
print("Predictions: {}".format(preds))
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import random
import time
sns.set()
def get_vocab(file, lower = False):
with open(file, 'r') as fopen:
data = fopen.read()
if lower:
data = data.lower()
vocab = list(set(data))
return data, vocab
def embed_to_onehot(data, vocab):
onehot = np.zeros((len(data), len(vocab)), dtype = np.float32)
for i in range(len(data)):
onehot[i, vocab.index(data[i])] = 1.0
return onehot
text, text_vocab = get_vocab('consumer.h', lower = False)
onehot = embed_to_onehot(text, text_vocab)
learning_rate = 0.01
batch_size = 64
sequence_length = 12
epoch = 1000
num_layers = 2
size_layer = 128
possible_batch_id = range(len(text) - sequence_length - 1)
dimension = onehot.shape[1]
epsilon = 1e-8
U = np.random.randn(size_layer, dimension) / np.sqrt(size_layer)
U_g = np.zeros(U.shape)
W = np.random.randn(size_layer, size_layer) / np.sqrt(size_layer)
W_g = np.zeros(W.shape)
V = np.random.randn(dimension, size_layer) / np.sqrt(dimension)
V_g = np.zeros(V.shape)
def tanh(x, grad=False):
if grad:
output = np.tanh(x)
return (1.0 - np.square(output))
else:
return np.tanh(x)
def softmax(x):
exp_scores = np.exp(x - np.max(x))
return exp_scores / (np.sum(exp_scores, axis=1, keepdims=True) + 1e-8)
def derivative_softmax_cross_entropy(x, y):
delta = softmax(x)
    delta[range(x.shape[0]), y] -= 1
return delta
def forward_multiply_gate(w, x):
return np.dot(w, x)
def backward_multiply_gate(w, x, dz):
dW = np.dot(dz.T, x)
dx = np.dot(w.T, dz.T)
return dW, dx
def forward_add_gate(x1, x2):
return x1 + x2
def backward_add_gate(x1, x2, dz):
dx1 = dz * np.ones_like(x1)
dx2 = dz * np.ones_like(x2)
return dx1, dx2
def cross_entropy(Y_hat, Y, epsilon=1e-12):
Y_hat = np.clip(Y_hat, epsilon, 1. - epsilon)
N = Y_hat.shape[0]
return -np.sum(np.sum(Y * np.log(Y_hat+1e-9))) / N
def forward_recurrent(x, prev_state, U, W, V):
mul_u = forward_multiply_gate(x, U.T)
mul_w = forward_multiply_gate(prev_state, W.T)
add_previous_now = forward_add_gate(mul_u, mul_w)
current_state = tanh(add_previous_now)
mul_v = forward_multiply_gate(current_state, V.T)
return (mul_u, mul_w, add_previous_now, current_state, mul_v)
def backward_recurrent(x, prev_state, U, W, V, d_mul_v, saved_graph):
mul_u, mul_w, add_previous_now, current_state, mul_v = saved_graph
dV, dcurrent_state = backward_multiply_gate(V, current_state, d_mul_v)
dadd_previous_now = tanh(add_previous_now, True) * dcurrent_state.T
dmul_w, dmul_u = backward_add_gate(mul_w, mul_u, dadd_previous_now)
dW, dprev_state = backward_multiply_gate(W, prev_state, dmul_w)
dU, dx = backward_multiply_gate(U, x, dmul_u)
return (dprev_state, dU, dW, dV)
for i in range(epoch):
batch_x = np.zeros((batch_size, sequence_length, dimension))
batch_y = np.zeros((batch_size, sequence_length, dimension))
batch_id = random.sample(possible_batch_id, batch_size)
prev_s = np.zeros((batch_size, size_layer))
for n in range(sequence_length):
id1 = [k + n for k in batch_id]
id2 = [k + n + 1 for k in batch_id]
batch_x[:,n,:] = onehot[id1, :]
batch_y[:,n,:] = onehot[id2, :]
layers = []
out_logits = np.zeros((batch_size, sequence_length, dimension))
for n in range(sequence_length):
layers.append(forward_recurrent(batch_x[:,n,:], prev_s, U, W, V))
prev_s = layers[-1][3]
out_logits[:, n, :] = layers[-1][-1]
probs = softmax(out_logits.reshape((-1, dimension)))
y = np.argmax(batch_y.reshape((-1, dimension)),axis=1)
accuracy = np.mean(np.argmax(probs,axis=1) == y)
loss = cross_entropy(probs, batch_y.reshape((-1, dimension)))
delta = probs
delta[range(y.shape[0]), y] -= 1
delta = delta.reshape((batch_size, sequence_length, dimension))
dU = np.zeros(U.shape)
dV = np.zeros(V.shape)
dW = np.zeros(W.shape)
prev_state = np.zeros((batch_size, size_layer))
for n in range(sequence_length):
d_mul_v = delta[:, n, :]
dprev_s, dU_t, dW_t, dV_t = backward_recurrent(batch_x[:,n,:], prev_state, U, W, V, d_mul_v, layers[n])
prev_state = layers[n][3]
dV += dV_t
dU += dU_t
dW += dW_t
U_g += dU ** 2
U += -learning_rate * dU / np.sqrt(U_g + epsilon)
V_g += dV ** 2
V += -learning_rate * dV / np.sqrt(V_g + epsilon)
W_g += dW ** 2
W += -learning_rate * dW / np.sqrt(W_g + epsilon)
if (i+1) % 50 == 0:
print('epoch %d, loss %f, accuracy %f'%(i+1, loss, accuracy))
```
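The parameter updates at the end of the training loop above follow Adagrad: the squared gradient is accumulated per parameter (`U_g`, `V_g`, `W_g`) and used to scale down the learning rate for frequently updated weights. A minimal sketch of that update rule in isolation (names here are illustrative, not taken from the code above):

```python
import numpy as np

def adagrad_step(param, grad, accum, lr=0.01, eps=1e-8):
    """One Adagrad update: accumulate squared gradient, scale the step."""
    accum += grad ** 2
    param -= lr * grad / np.sqrt(accum + eps)
    return param, accum

# A toy parameter pushed toward zero by the gradient of w**2
w = np.array([1.0])
g_accum = np.zeros_like(w)
for _ in range(100):
    grad = 2 * w  # d(w**2)/dw
    w, g_accum = adagrad_step(w, grad, g_accum)
print(w)  # step sizes shrink as squared gradients accumulate
```

Because `accum` only grows, the effective learning rate decays monotonically, which is why Adagrad needs no hand-tuned schedule but can stall on long runs.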
| github_jupyter |
```
import os
import json
import clang.cindex
import clang.enumerations
import csv
import numpy as np
import os
import re
import warnings
warnings.filterwarnings('ignore')
# set the config
try:
clang.cindex.Config.set_library_path("/usr/lib/x86_64-linux-gnu")
clang.cindex.Config.set_library_file('/usr/lib/x86_64-linux-gnu/libclang-6.0.so.1')
except:
pass
split_dir = '../data/neurips_parsed/neurips_data/'
parsed = '../data/neurips_parsed/parsed_results/'
# split_dir = '../data/chrome_debian/raw_code/'
# parsed = '../data/chrome_debian/parsed/'
ggnn_json_data = json.load(open('../data/ggnn_input/devign_cfg_full_text_files.json'))
files = [d['file_name'] for d in ggnn_json_data]
print(len(files))
l_funcs = ['StrNCat', 'getaddrinfo', '_ui64toa', 'fclose', 'pthread_mutex_lock', 'gets_s', 'sleep',
'_ui64tot', 'freopen_s', '_ui64tow', 'send', 'lstrcat', 'HMAC_Update', '__fxstat', 'StrCatBuff',
'_mbscat', '_mbstok_s', '_cprintf_s', 'ldap_search_init_page', 'memmove_s', 'ctime_s', 'vswprintf',
'vswprintf_s', '_snwprintf', '_gmtime_s', '_tccpy', '*RC6*', '_mbslwr_s', 'random',
'__wcstof_internal', '_wcslwr_s', '_ctime32_s', 'wcsncat*', 'MD5_Init', '_ultoa',
'snprintf', 'memset', 'syslog', '_vsnprintf_s', 'HeapAlloc', 'pthread_mutex_destroy',
'ChangeWindowMessageFilter', '_ultot', 'crypt_r', '_strupr_s_l', 'LoadLibraryExA', '_strerror_s',
'LoadLibraryExW', 'wvsprintf', 'MoveFileEx', '_strdate_s', 'SHA1', 'sprintfW', 'StrCatNW',
'_scanf_s_l', 'pthread_attr_init', '_wtmpnam_s', 'snscanf', '_sprintf_s_l', 'dlopen',
'sprintfA', 'timed_mutex', 'OemToCharA', 'ldap_delete_ext', 'sethostid', 'popen', 'OemToCharW',
'_gettws', 'vfork', '_wcsnset_s_l', 'sendmsg', '_mbsncat', 'wvnsprintfA', 'HeapFree', '_wcserror_s',
'realloc', '_snprintf*', 'wcstok', '_strncat*', 'StrNCpy', '_wasctime_s', 'push*', '_lfind_s',
'CC_SHA512', 'ldap_compare_ext_s', 'wcscat_s', 'strdup', '_chsize_s', 'sprintf_s', 'CC_MD4_Init',
'wcsncpy', '_wfreopen_s', '_wcsupr_s', '_searchenv_s', 'ldap_modify_ext_s', '_wsplitpath',
'CC_SHA384_Final', 'MD2', 'RtlCopyMemory', 'lstrcatW', 'MD4', 'MD5', '_wcstok_s_l', '_vsnwprintf_s',
'ldap_modify_s', 'strerror', '_lsearch_s', '_mbsnbcat_s', '_wsplitpath_s', 'MD4_Update', '_mbccpy_s',
'_strncpy_s_l', '_snprintf_s', 'CC_SHA512_Init', 'fwscanf_s', '_snwprintf_s', 'CC_SHA1', 'swprintf',
'fprintf', 'EVP_DigestInit_ex', 'strlen', 'SHA1_Init', 'strncat', '_getws_s', 'CC_MD4_Final',
'wnsprintfW', 'lcong48', 'lrand48', 'write', 'HMAC_Init', '_wfopen_s', 'wmemchr', '_tmakepath',
'wnsprintfA', 'lstrcpynW', 'scanf_s', '_mbsncpy_s_l', '_localtime64_s', 'fstream.open', '_wmakepath',
'Connection.open', '_tccat', 'valloc', 'setgroups', 'unlink', 'fstream.put', 'wsprintfA', '*SHA1*',
'_wsearchenv_s', 'ualstrcpyA', 'CC_MD5_Update', 'strerror_s', 'HeapCreate', 'ualstrcpyW', '__xstat',
'_wmktemp_s', 'StrCatChainW', 'ldap_search_st', '_mbstowcs_s_l', 'ldap_modify_ext', '_mbsset_s',
'strncpy_s', 'move', 'execle', 'StrCat', 'xrealloc', 'wcsncpy_s', '_tcsncpy*', 'execlp',
'RIPEMD160_Final', 'ldap_search_s', 'EnterCriticalSection', '_wctomb_s_l', 'fwrite', '_gmtime64_s',
'sscanf_s', 'wcscat', '_strupr_s', 'wcrtomb_s', 'VirtualLock', 'ldap_add_ext_s', '_mbscpy',
'_localtime32_s', 'lstrcpy', '_wcsncpy*', 'CC_SHA1_Init', '_getts', '_wfopen', '__xstat64',
'strcoll', '_fwscanf_s_l', '_mbslwr_s_l', 'RegOpenKey', 'makepath', 'seed48', 'CC_SHA256',
'sendto', 'execv', 'CalculateDigest', 'memchr', '_mbscpy_s', '_strtime_s', 'ldap_search_ext_s',
'_chmod', 'flock', '__fxstat64', '_vsntprintf', 'CC_SHA256_Init', '_itoa_s', '__wcserror_s',
'_gcvt_s', 'fstream.write', 'sprintf', 'recursive_mutex', 'strrchr', 'gethostbyaddr', '_wcsupr_s_l',
'strcspn', 'MD5_Final', 'asprintf', '_wcstombs_s_l', '_tcstok', 'free', 'MD2_Final', 'asctime_s',
'_alloca', '_wputenv_s', '_wcsset_s', '_wcslwr_s_l', 'SHA1_Update', 'filebuf.sputc', 'filebuf.sputn',
'SQLConnect', 'ldap_compare', 'mbstowcs_s', 'HMAC_Final', 'pthread_condattr_init', '_ultow_s', 'rand',
'ofstream.put', 'CC_SHA224_Final', 'lstrcpynA', 'bcopy', 'system', 'CreateFile*', 'wcscpy_s',
'_mbsnbcpy*', 'open', '_vsnwprintf', 'strncpy', 'getopt_long', 'CC_SHA512_Final', '_vsprintf_s_l',
'scanf', 'mkdir', '_localtime_s', '_snprintf', '_mbccpy_s_l', 'memcmp', 'final', '_ultoa_s',
'lstrcpyW', 'LoadModule', '_swprintf_s_l', 'MD5_Update', '_mbsnset_s_l', '_wstrtime_s', '_strnset_s',
'lstrcpyA', '_mbsnbcpy_s', 'mlock', 'IsBadHugeWritePtr', 'copy', '_mbsnbcpy_s_l', 'wnsprintf',
'wcscpy', 'ShellExecute', 'CC_MD4', '_ultow', '_vsnwprintf_s_l', 'lstrcpyn', 'CC_SHA1_Final',
'vsnprintf', '_mbsnbset_s', '_i64tow', 'SHA256_Init', 'wvnsprintf', 'RegCreateKey', 'strtok_s',
'_wctime32_s', '_i64toa', 'CC_MD5_Final', 'wmemcpy', 'WinExec', 'CreateDirectory*',
'CC_SHA256_Update', '_vsnprintf_s_l', 'jrand48', 'wsprintf', 'ldap_rename_ext_s', 'filebuf.open',
'_wsystem', 'SHA256_Update', '_cwscanf_s', 'wsprintfW', '_sntscanf', '_splitpath', 'fscanf_s',
'strpbrk', 'wcstombs_s', 'wscanf', '_mbsnbcat_s_l', 'strcpynA', 'pthread_cond_init', 'wcsrtombs_s',
'_wsopen_s', 'CharToOemBuffA', 'RIPEMD160_Update', '_tscanf', 'HMAC', 'StrCCpy', 'Connection.connect',
'lstrcatn', '_mbstok', '_mbsncpy', 'CC_SHA384_Update', 'create_directories', 'pthread_mutex_unlock',
'CFile.Open', 'connect', '_vswprintf_s_l', '_snscanf_s_l', 'fputc', '_wscanf_s', '_snprintf_s_l',
'strtok', '_strtok_s_l', 'lstrcatA', 'snwscanf', 'pthread_mutex_init', 'fputs', 'CC_SHA384_Init',
'_putenv_s', 'CharToOemBuffW', 'pthread_mutex_trylock', '__wcstoul_internal', '_memccpy',
'_snwprintf_s_l', '_strncpy*', 'wmemset', 'MD4_Init', '*RC4*', 'strcpyW', '_ecvt_s', 'memcpy_s',
'erand48', 'IsBadHugeReadPtr', 'strcpyA', 'HeapReAlloc', 'memcpy', 'ldap_rename_ext', 'fopen_s',
'srandom', '_cgetws_s', '_makepath', 'SHA256_Final', 'remove', '_mbsupr_s', 'pthread_mutexattr_init',
'__wcstold_internal', 'StrCpy', 'ldap_delete', 'wmemmove_s', '_mkdir', 'strcat', '_cscanf_s_l',
'StrCAdd', 'swprintf_s', '_strnset_s_l', 'close', 'ldap_delete_ext_s', 'ldap_modrdn', 'strchr',
'_gmtime32_s', '_ftcscat', 'lstrcatnA', '_tcsncat', 'OemToChar', 'mutex', 'CharToOem', 'strcpy_s',
'lstrcatnW', '_wscanf_s_l', '__lxstat64', 'memalign', 'MD2_Init', 'StrCatBuffW', 'StrCpyN', 'CC_MD5',
'StrCpyA', 'StrCatBuffA', 'StrCpyW', 'tmpnam_r', '_vsnprintf', 'strcatA', 'StrCpyNW', '_mbsnbset_s_l',
'EVP_DigestInit', '_stscanf', 'CC_MD2', '_tcscat', 'StrCpyNA', 'xmalloc', '_tcslen', '*MD4*',
'vasprintf', 'strxfrm', 'chmod', 'ldap_add_ext', 'alloca', '_snscanf_s', 'IsBadWritePtr', 'swscanf_s',
'wmemcpy_s', '_itoa', '_ui64toa_s', 'EVP_DigestUpdate', '__wcstol_internal', '_itow', 'StrNCatW',
'strncat_s', 'ualstrcpy', 'execvp', '_mbccat', 'EVP_MD_CTX_init', 'assert', 'ofstream.write',
'ldap_add', '_sscanf_s_l', 'drand48', 'CharToOemW', 'swscanf', '_itow_s', 'RIPEMD160_Init',
'CopyMemory', 'initstate', 'getpwuid', 'vsprintf', '_fcvt_s', 'CharToOemA', 'setuid', 'malloc',
'StrCatNA', 'strcat_s', 'srand', 'getwd', '_controlfp_s', 'olestrcpy', '__wcstod_internal',
'_mbsnbcat', 'lstrncat', 'des_*', 'CC_SHA224_Init', 'set*', 'vsprintf_s', 'SHA1_Final', '_umask_s',
'gets', 'setstate', 'wvsprintfW', 'LoadLibraryEx', 'ofstream.open', 'calloc', '_mbstrlen',
'_cgets_s', '_sopen_s', 'IsBadStringPtr', 'wcsncat_s', 'add*', 'nrand48', 'create_directory',
'ldap_search_ext', '_i64toa_s', '_ltoa_s', '_cwscanf_s_l', 'wmemcmp', '__lxstat', 'lstrlen',
'pthread_condattr_destroy', '_ftcscpy', 'wcstok_s', '__xmknod', 'pthread_attr_destroy', 'sethostname',
'_fscanf_s_l', 'StrCatN', 'RegEnumKey', '_tcsncpy', 'strcatW', 'AfxLoadLibrary', 'setenv', 'tmpnam',
'_mbsncat_s_l', '_wstrdate_s', '_wctime64_s', '_i64tow_s', 'CC_MD4_Update', 'ldap_add_s', '_umask',
'CC_SHA1_Update', '_wcsset_s_l', '_mbsupr_s_l', 'strstr', '_tsplitpath', 'memmove', '_tcscpy',
'vsnprintf_s', 'strcmp', 'wvnsprintfW', 'tmpfile', 'ldap_modify', '_mbsncat*', 'mrand48', 'sizeof',
'StrCatA', '_ltow_s', '*desencrypt*', 'StrCatW', '_mbccpy', 'CC_MD2_Init', 'RIPEMD160', 'ldap_search',
'CC_SHA224', 'mbsrtowcs_s', 'update', 'ldap_delete_s', 'getnameinfo', '*RC5*', '_wcsncat_s_l',
'DriverManager.getConnection', 'socket', '_cscanf_s', 'ldap_modrdn_s', '_wopen', 'CC_SHA256_Final',
'_snwprintf*', 'MD2_Update', 'strcpy', '_strncat_s_l', 'CC_MD5_Init', 'mbscpy', 'wmemmove',
'LoadLibraryW', '_mbslen', '*alloc', '_mbsncat_s', 'LoadLibraryA', 'fopen', 'StrLen', 'delete',
'_splitpath_s', 'CreateFileTransacted*', 'MD4_Final', '_open', 'CC_SHA384', 'wcslen', 'wcsncat',
'_mktemp_s', 'pthread_mutexattr_destroy', '_snwscanf_s', '_strset_s', '_wcsncpy_s_l', 'CC_MD2_Final',
'_mbstok_s_l', 'wctomb_s', 'MySQL_Driver.connect', '_snwscanf_s_l', '*_des_*', 'LoadLibrary',
'_swscanf_s_l', 'ldap_compare_s', 'ldap_compare_ext', '_strlwr_s', 'GetEnvironmentVariable',
'cuserid', '_mbscat_s', 'strspn', '_mbsncpy_s', 'ldap_modrdn2', 'LeaveCriticalSection', 'CopyFile',
'getpwd', 'sscanf', 'creat', 'RegSetValue', 'ldap_modrdn2_s', 'CFile.Close', '*SHA_1*',
'pthread_cond_destroy', 'CC_SHA512_Update', '*RC2*', 'StrNCatA', '_mbsnbcpy', '_mbsnset_s',
'crypt', 'excel', '_vstprintf', 'xstrdup', 'wvsprintfA', 'getopt', 'mkstemp', '_wcsnset_s',
'_stprintf', '_sntprintf', 'tmpfile_s', 'OpenDocumentFile', '_mbsset_s_l', '_strset_s_l',
'_strlwr_s_l', 'ifstream.open', 'xcalloc', 'StrNCpyA', '_wctime_s', 'CC_SHA224_Update', '_ctime64_s',
'MoveFile', 'chown', 'StrNCpyW', 'IsBadReadPtr', '_ui64tow_s', 'IsBadCodePtr', 'getc',
'OracleCommand.ExecuteOracleScalar', 'AccessDataSource.Insert', 'IDbDataAdapter.FillSchema',
'IDbDataAdapter.Update', 'GetWindowText*', 'SendMessage', 'SqlCommand.ExecuteNonQuery', 'streambuf.sgetc',
'streambuf.sgetn', 'OracleCommand.ExecuteScalar', 'SqlDataSource.Update', '_Read_s', 'IDataAdapter.Fill',
'_wgetenv', '_RecordsetPtr.Open*', 'AccessDataSource.Delete', 'Recordset.Open*', 'filebuf.sbumpc', 'DDX_*',
'RegGetValue', 'fstream.read*', 'SqlCeCommand.ExecuteResultSet', 'SqlCommand.ExecuteXmlReader', 'main',
'streambuf.sputbackc', 'read', 'm_lpCmdLine', 'CRichEditCtrl.Get*', 'istream.putback',
'SqlCeCommand.ExecuteXmlReader', 'SqlCeCommand.BeginExecuteXmlReader', 'filebuf.sgetn',
'OdbcDataAdapter.Update', 'filebuf.sgetc', 'SQLPutData', 'recvfrom', 'OleDbDataAdapter.FillSchema',
'IDataAdapter.FillSchema', 'CRichEditCtrl.GetLine', 'DbDataAdapter.Update', 'SqlCommand.ExecuteReader',
'istream.get', 'ReceiveFrom', '_main', 'fgetc', 'DbDataAdapter.FillSchema', 'kbhit', 'UpdateCommand.Execute*',
'Statement.execute', 'fgets', 'SelectCommand.Execute*', 'getch', 'OdbcCommand.ExecuteNonQuery',
'CDaoQueryDef.Execute', 'fstream.getline', 'ifstream.getline', 'SqlDataAdapter.FillSchema',
'OleDbCommand.ExecuteReader', 'Statement.execute*', 'SqlCeCommand.BeginExecuteNonQuery',
'OdbcCommand.ExecuteScalar', 'SqlCeDataAdapter.Update', 'sendmessage', 'mysqlpp.DBDriver', 'fstream.peek',
'Receive', 'CDaoRecordset.Open', 'OdbcDataAdapter.FillSchema', '_wgetenv_s', 'OleDbDataAdapter.Update',
'readsome', 'SqlCommand.BeginExecuteXmlReader', 'recv', 'ifstream.peek', '_Main', '_tmain', '_Readsome_s',
'SqlCeCommand.ExecuteReader', 'OleDbCommand.ExecuteNonQuery', 'fstream.get', 'IDbCommand.ExecuteScalar',
'filebuf.sputbackc', 'IDataAdapter.Update', 'streambuf.sbumpc', 'InsertCommand.Execute*', 'RegQueryValue',
'IDbCommand.ExecuteReader', 'SqlPipe.ExecuteAndSend', 'Connection.Execute*', 'getdlgtext', 'ReceiveFromEx',
'SqlDataAdapter.Update', 'RegQueryValueEx', 'SQLExecute', 'pread', 'SqlCommand.BeginExecuteReader', 'AfxWinMain',
'getchar', 'istream.getline', 'SqlCeDataAdapter.Fill', 'OleDbDataReader.ExecuteReader', 'SqlDataSource.Insert',
'istream.peek', 'SendMessageCallback', 'ifstream.read*', 'SqlDataSource.Select', 'SqlCommand.ExecuteScalar',
'SqlDataAdapter.Fill', 'SqlCommand.BeginExecuteNonQuery', 'getche', 'SqlCeCommand.BeginExecuteReader', 'getenv',
'streambuf.snextc', 'Command.Execute*', '_CommandPtr.Execute*', 'SendNotifyMessage', 'OdbcDataAdapter.Fill',
'AccessDataSource.Update', 'fscanf', 'QSqlQuery.execBatch', 'DbDataAdapter.Fill', 'cin',
'DeleteCommand.Execute*', 'QSqlQuery.exec', 'PostMessage', 'ifstream.get', 'filebuf.snextc',
'IDbCommand.ExecuteNonQuery', 'Winmain', 'fread', 'getpass', 'GetDlgItemTextCCheckListBox.GetCheck',
'DISP_PROPERTY_EX', 'pread64', 'Socket.Receive*', 'SACommand.Execute*', 'SQLExecDirect',
'SqlCeDataAdapter.FillSchema', 'DISP_FUNCTION', 'OracleCommand.ExecuteNonQuery', 'CEdit.GetLine',
'OdbcCommand.ExecuteReader', 'CEdit.Get*', 'AccessDataSource.Select', 'OracleCommand.ExecuteReader',
'OCIStmtExecute', 'getenv_s', 'DB2Command.Execute*', 'OracleDataAdapter.FillSchema', 'OracleDataAdapter.Fill',
'CComboBox.Get*', 'SqlCeCommand.ExecuteNonQuery', 'OracleCommand.ExecuteOracleNonQuery', 'mysqlpp.Query',
'istream.read*', 'CListBox.GetText', 'SqlCeCommand.ExecuteScalar', 'ifstream.putback', 'readlink',
'CHtmlEditCtrl.GetDHtmlDocument', 'PostThreadMessage', 'CListCtrl.GetItemText', 'OracleDataAdapter.Update',
'OleDbCommand.ExecuteScalar', 'stdin', 'SqlDataSource.Delete', 'OleDbDataAdapter.Fill', 'fstream.putback',
'IDbDataAdapter.Fill', '_wspawnl', 'fwprintf', 'sem_wait', '_unlink', 'ldap_search_ext_sW', 'signal', 'PQclear',
'PQfinish', 'PQexec', 'PQresultStatus']
import os, sys, argparse
from graphviz import Digraph
def read_csv(csv_file_path):
data = []
with open(csv_file_path) as fp:
header = fp.readline()
header = header.strip()
h_parts = [hp.strip() for hp in header.split('\t')]
for line in fp:
line = line.strip()
instance = {}
lparts = line.split('\t')
for i, hp in enumerate(h_parts):
if i < len(lparts):
content = lparts[i].strip()
else:
content = ''
instance[hp] = content
data.append(instance)
return data
def read_code_file(file_path):
code_lines = {}
with open(file_path) as fp:
for ln, line in enumerate(fp):
assert isinstance(line, str)
line = line.strip()
if '//' in line:
line = line[:line.index('//')]
code_lines[ln + 1] = line
return code_lines
def extract_nodes_with_location_info(nodes):
# Will return an array identifying the indices of those nodes in nodes array,
# another array identifying the node_id of those nodes
# another array indicating the line numbers
# all 3 return arrays should have same length indicating 1-to-1 matching.
node_indices = []
node_ids = []
line_numbers = []
node_id_to_line_number = {}
for node_index, node in enumerate(nodes):
assert isinstance(node, dict)
if 'location' in node.keys():
location = node['location']
if location == '':
continue
line_num = int(location.split(':')[0])
node_id = node['key'].strip()
node_indices.append(node_index)
node_ids.append(node_id)
line_numbers.append(line_num)
node_id_to_line_number[node_id] = line_num
return node_indices, node_ids, line_numbers, node_id_to_line_number
def create_adjacency_list(line_numbers, node_id_to_line_numbers, edges, data_dependency_only=False):
adjacency_list = {}
for ln in set(line_numbers):
adjacency_list[ln] = [set(), set()]
for edge in edges:
edge_type = edge['type'].strip()
        if True:  # edge_type in ['IS_AST_PARENT', 'FLOWS_TO']
start_node_id = edge['start'].strip()
end_node_id = edge['end'].strip()
if start_node_id not in node_id_to_line_numbers.keys() or end_node_id not in node_id_to_line_numbers.keys():
continue
start_ln = node_id_to_line_numbers[start_node_id]
end_ln = node_id_to_line_numbers[end_node_id]
if not data_dependency_only:
if edge_type == 'CONTROLS': #Control Flow edges
adjacency_list[start_ln][0].add(end_ln)
if edge_type == 'REACHES': # Data Flow edges
adjacency_list[start_ln][1].add(end_ln)
return adjacency_list
def create_visual_graph(code, adjacency_list, file_name='test_graph', verbose=False):
graph = Digraph('Code Property Graph')
for ln in adjacency_list:
graph.node(str(ln), str(ln) + '\t' + code[ln], shape='box')
control_dependency, data_dependency = adjacency_list[ln]
for anode in control_dependency:
graph.edge(str(ln), str(anode), color='red')
for anode in data_dependency:
graph.edge(str(ln), str(anode), color='blue')
graph.render(file_name, view=verbose)
def create_forward_slice(adjacency_list, line_no):
sliced_lines = set()
sliced_lines.add(line_no)
stack = list()
stack.append(line_no)
while len(stack) != 0:
cur = stack.pop()
if cur not in sliced_lines:
sliced_lines.add(cur)
adjacents = adjacency_list[cur]
for node in adjacents:
if node not in sliced_lines:
stack.append(node)
sliced_lines = sorted(sliced_lines)
return sliced_lines
def combine_control_and_data_adjacents(adjacency_list):
cgraph = {}
for ln in adjacency_list:
cgraph[ln] = set()
cgraph[ln] = cgraph[ln].union(adjacency_list[ln][0])
cgraph[ln] = cgraph[ln].union(adjacency_list[ln][1])
return cgraph
def invert_graph(adjacency_list):
igraph = {}
for ln in adjacency_list.keys():
igraph[ln] = set()
for ln in adjacency_list:
adj = adjacency_list[ln]
for node in adj:
igraph[node].add(ln)
return igraph
def create_backward_slice(adjacency_list, line_no):
inverted_adjacency_list = invert_graph(adjacency_list)
return create_forward_slice(inverted_adjacency_list, line_no)
class Tokenizer:
# creates the object, does the inital parse
def __init__(self, path, tokenizer_type='original'):
self.index = clang.cindex.Index.create()
self.tu = self.index.parse(path)
self.path = self.extract_path(path)
self.symbol_table = {}
self.symbol_count = 1
self.tokenizer_type = tokenizer_type
# To output for split_functions, must have same path up to last two folders
def extract_path(self, path):
return "".join(path.split("/")[:-2])
def full_tokenize_cursor(self, cursor):
tokens = cursor.get_tokens()
result = []
for token in tokens:
if token.kind.name == "COMMENT":
continue
if token.kind.name == "LITERAL":
result += self.process_literal(token)
continue
if token.kind.name == "IDENTIFIER":
result += ["ID"]
continue
result += [token.spelling]
return result
def full_tokenize(self):
cursor = self.tu.cursor
return self.full_tokenize_cursor(cursor)
def process_literal(self, literal):
cursor_kind = clang.cindex.CursorKind
kind = literal.cursor.kind
if kind == cursor_kind.INTEGER_LITERAL:
return literal.spelling
if kind == cursor_kind.FLOATING_LITERAL:
return literal.spelling
if kind == cursor_kind.IMAGINARY_LITERAL:
return ["NUM"]
if kind == cursor_kind.STRING_LITERAL:
return ["STRING"]
sp = literal.spelling
if re.match('[0-9]+', sp) is not None:
return sp
return ["LITERAL"]
def split_functions(self, method_only):
results = []
cursor_kind = clang.cindex.CursorKind
cursor = self.tu.cursor
for c in cursor.get_children():
            filename = c.location.file.name if c.location.file is not None else "NONE"
extracted_path = self.extract_path(filename)
if (c.kind == cursor_kind.CXX_METHOD or (method_only == False and c.kind == cursor_kind.FUNCTION_DECL)) and extracted_path == self.path:
name = c.spelling
tokens = self.full_tokenize_cursor(c)
filename = filename.split("/")[-1]
results += [tokens]
return results
def tokenize(file_text):
try:
c_file = open('/tmp/test1.c', 'w')
c_file.write(file_text)
c_file.close()
tok = Tokenizer('/tmp/test1.c')
results = tok.split_functions(False)
return ' '.join(results[0])
except:
return None
import csv
def read_file(path):
with open(path) as f:
lines = f.readlines()
return ' '.join(lines)
def extract_line_number(idx, nodes):
while idx >= 0:
c_node = nodes[idx]
if 'location' in c_node.keys():
location = c_node['location']
if location.strip() != '':
try:
ln = int(location.split(':')[0])
return ln
except:
pass
idx -= 1
return -1
all_data = []
ggnn_json_data = json.load(open('../data/ggnn_input/devign_cfg_full_text_files.json'))
files = [d['file_name'] for d in ggnn_json_data]
print(len(files))
for i, file_name in enumerate(files):
label = file_name.strip()[:-2].split('_')[-1]
code_text = read_file(split_dir + file_name.strip())
nodes_file_path = parsed + file_name.strip() + '/nodes.csv'
edges_file_path = parsed + file_name.strip() + '/edges.csv'
nc = open(nodes_file_path)
nodes_file = csv.DictReader(nc, delimiter='\t')
nodes = [node for node in nodes_file]
call_lines = set()
array_lines = set()
ptr_lines = set()
arithmatic_lines = set()
if len(nodes) == 0:
continue
for node_idx, node in enumerate(nodes):
ntype = node['type'].strip()
if ntype == 'CallExpression':
function_name = nodes[node_idx + 1]['code']
if function_name is None or function_name.strip() == '':
continue
if function_name.strip() in l_funcs:
line_no = extract_line_number(node_idx, nodes)
if line_no > 0:
call_lines.add(line_no)
elif ntype == 'ArrayIndexing':
line_no = extract_line_number(node_idx, nodes)
if line_no > 0:
array_lines.add(line_no)
elif ntype == 'PtrMemberAccess':
line_no = extract_line_number(node_idx, nodes)
if line_no > 0:
ptr_lines.add(line_no)
elif node['operator'].strip() in ['+', '-', '*', '/']:
line_no = extract_line_number(node_idx, nodes)
if line_no > 0:
arithmatic_lines.add(line_no)
nodes = read_csv(nodes_file_path)
edges = read_csv(edges_file_path)
node_indices, node_ids, line_numbers, node_id_to_ln = extract_nodes_with_location_info(nodes)
adjacency_list = create_adjacency_list(line_numbers, node_id_to_ln, edges, False)
combined_graph = combine_control_and_data_adjacents(adjacency_list)
array_slices = []
array_slices_bdir = []
call_slices = []
call_slices_bdir = []
arith_slices = []
arith_slices_bdir = []
ptr_slices = []
ptr_slices_bdir = []
all_slices = []
all_keys = set()
_keys = set()
for slice_ln in call_lines:
forward_sliced_lines = create_forward_slice(combined_graph, slice_ln)
backward_sliced_lines = create_backward_slice(combined_graph, slice_ln)
all_slice_lines = forward_sliced_lines
all_slice_lines.extend(backward_sliced_lines)
all_slice_lines = sorted(list(set(all_slice_lines)))
key = ' '.join([str(i) for i in all_slice_lines])
if key not in _keys:
call_slices.append(backward_sliced_lines)
call_slices_bdir.append(all_slice_lines)
_keys.add(key)
if key not in all_keys:
all_slices.append(all_slice_lines)
all_keys.add(key)
_keys = set()
for slice_ln in array_lines:
forward_sliced_lines = create_forward_slice(combined_graph, slice_ln)
backward_sliced_lines = create_backward_slice(combined_graph, slice_ln)
all_slice_lines = forward_sliced_lines
all_slice_lines.extend(backward_sliced_lines)
all_slice_lines = sorted(list(set(all_slice_lines)))
key = ' '.join([str(i) for i in all_slice_lines])
if key not in _keys:
array_slices.append(backward_sliced_lines)
array_slices_bdir.append(all_slice_lines)
_keys.add(key)
if key not in all_keys:
all_slices.append(all_slice_lines)
all_keys.add(key)
_keys = set()
for slice_ln in arithmatic_lines:
forward_sliced_lines = create_forward_slice(combined_graph, slice_ln)
backward_sliced_lines = create_backward_slice(combined_graph, slice_ln)
all_slice_lines = forward_sliced_lines
all_slice_lines.extend(backward_sliced_lines)
all_slice_lines = sorted(list(set(all_slice_lines)))
key = ' '.join([str(i) for i in all_slice_lines])
if key not in _keys:
arith_slices.append(backward_sliced_lines)
arith_slices_bdir.append(all_slice_lines)
_keys.add(key)
if key not in all_keys:
all_slices.append(all_slice_lines)
all_keys.add(key)
_keys = set()
for slice_ln in ptr_lines:
forward_sliced_lines = create_forward_slice(combined_graph, slice_ln)
backward_sliced_lines = create_backward_slice(combined_graph, slice_ln)
all_slice_lines = forward_sliced_lines
all_slice_lines.extend(backward_sliced_lines)
all_slice_lines = sorted(list(set(all_slice_lines)))
key = ' '.join([str(i) for i in all_slice_lines])
if key not in _keys:
ptr_slices.append(backward_sliced_lines)
ptr_slices_bdir.append(all_slice_lines)
_keys.add(key)
if key not in all_keys:
all_slices.append(all_slice_lines)
all_keys.add(key)
t_code = tokenize(code_text)
if t_code is None:
continue
data_instance = {
'file_path': split_dir + file_name.strip(),
'code' : code_text,
'tokenized': t_code,
'call_slices_vd': call_slices,
'call_slices_sy': call_slices_bdir,
'array_slices_vd': array_slices,
'array_slices_sy': array_slices_bdir,
'arith_slices_vd': arith_slices,
'arith_slices_sy': arith_slices_bdir,
'ptr_slices_vd': ptr_slices,
'ptr_slices_sy': ptr_slices_bdir,
'label': int(label)
}
all_data.append(data_instance)
if i % 1000 == 0:
print(i, len(call_slices), len(call_slices_bdir),
len(array_slices), len(array_slices_bdir),
len(arith_slices), len(arith_slices_bdir), sep='\t')
output_file = open('../data/devign_full_data_with_slices', 'w')
json.dump(all_data, output_file)
output_file.close()
print(len(all_data))
```
| github_jupyter |
<!-- TODO: Self-organizing maps and hexagonal grids (Kohonen 1982; Huysmans et al. 2006a; Seret et al. 2012). A SOM -->
<!-- TODO: DBScan, TSNE, <https://speakerdeck.com/lmcinnes/umap-uniform-manifold-approximation-and-projection-for-dimension-reduction> -->
<!-- TODO: Good overview and connections to optimization <https://www.youtube.com/watch>?time_continue=1&v=9iol3Lk6kyU -->
<!-- TODO: - Generalized_low_rank_models_Udell -->
```
from IPython.display import Image
Image('../../Python_probability_statistics_machine_learning_2E.png',width=200)
%matplotlib inline
from matplotlib.pylab import subplots
import numpy as np
from sklearn.datasets import make_blobs
```
Clustering is the simplest member of a family of machine learning methods that
do not require supervision to learn from data. Unsupervised methods
have training sets that do not have a target variable. These unsupervised learning
methods rely upon a meaningful metric to group data into
clusters. This makes it an excellent exploratory data analysis
method because there are very few assumptions built into the method itself.
In this section, we focus on the popular K-means clustering method that is
available in Scikit-learn.
Let's manufacture some data to get going with `make_blobs` from Scikit-learn.
[Figure](#fig:clustering_001) shows some example clusters in two dimensions.
Clustering methods work by minimizing the following objective function,
```
from sklearn.datasets import make_blobs
fig,ax=subplots()
X, y = make_blobs(n_samples=300, centers=5,
random_state=0, cluster_std=0.5)
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='cool',alpha=.8);
_=ax.set_xlabel('x-coordinate',fontsize=16)
_=ax.set_ylabel('y-coordinate',fontsize=16)
_=ax.axis(xmax=4,ymin=-2)
ax.tick_params(labelsize='x-large')
ax.set_aspect(1/1.6)
fig.savefig('fig-machine_learning/clustering_001.png')
```
<!-- dom:FIGURE: [fig-machine_learning/clustering_001.png, width=500 frac=0.85] The four clusters are pretty easy to see in this example and we want clustering methods to determine the extent and number of such clusters automatically. <div id="fig:clustering_001"></div> -->
<!-- begin figure -->
<div id="fig:clustering_001"></div>
<p>The four clusters are pretty easy to see in this example and we want clustering methods to determine the extent and number of such clusters automatically.</p>
<img src="fig-machine_learning/clustering_001.png" width=300>
<!-- end figure -->
$$
J = \sum_k \sum_i \Vert \mathbf{x}_i-\mathbf{\mu}_k \Vert^2
$$
The *distortion* for the $k^{th}$ cluster is the summand,
$$
\sum_i \Vert \mathbf{x}_i - \mathbf{ \mu }_k \Vert^2
$$
Thus, clustering algorithms work to minimize this by adjusting the
centers of the individual clusters, $\mu_k$. Intuitively, each $\mu_k$ is the
*center of mass* of the points in the cloud. The Euclidean distance is
the typical metric used for this,
$$
\Vert \mathbf{ x } \Vert^2 = \sum x_i^2
$$
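For concreteness, $J$ can be evaluated directly for a toy assignment. A minimal pure-Python sketch, with made-up points and centers:

```python
# Sketch: compute the K-means objective J = sum_k sum_i ||x_i - mu_k||^2
# for a toy 2-D data set with a fixed (hypothetical) cluster assignment.

def sq_dist(x, mu):
    # squared Euclidean distance ||x - mu||^2
    return sum((a - b) ** 2 for a, b in zip(x, mu))

def distortion(clusters, centers):
    # clusters[k] is the list of points assigned to center k
    return sum(sq_dist(x, centers[k])
               for k, pts in clusters.items()
               for x in pts)

clusters = {0: [(0.0, 0.0), (1.0, 0.0)],   # points in cluster 0
            1: [(5.0, 5.0), (5.0, 6.0)]}   # points in cluster 1
centers = {0: (0.5, 0.0), 1: (5.0, 5.5)}   # center of mass of each cluster

print(distortion(clusters, centers))  # 0.25 + 0.25 + 0.25 + 0.25 = 1.0
```

Moving either center away from its cluster's mean can only increase this sum, which is why the update step of clustering algorithms recomputes each $\mu_k$ as a mean.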
There are many clever algorithms that can solve this problem for
the best $\mu_k$ cluster-centers. The K-means algorithm starts with a
user-specified number of $K$ clusters to optimize over. This is implemented in
Scikit-learn with the `KMeans` object that follows the usual fitting
conventions in Scikit-learn,
```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(X)
```
where we have chosen $K=4$. How do we choose the value of
$K$? This is the eternal question of generalization versus
approximation --- too many clusters provide great approximation but
bad generalization. One way to approach this problem is to compute the
mean distortion for increasingly larger values of $K$ until it no
longer makes sense. To do this, we want to take every data point and
compare it to the centers of all the clusters. Then, take the
smallest value of this across all clusters and average those. This
gives us an idea of the overall mean performance for the $K$ clusters.
The following code computes this explicitly.
**Programming Tip.**
The `cdist` function from Scipy computes all the pairwise
distances between the two input collections according to the
specified metric.
```
from scipy.spatial.distance import cdist
m_distortions=[]
for k in range(1,7):
kmeans = KMeans(n_clusters=k)
_=kmeans.fit(X)
tmp=cdist(X,kmeans.cluster_centers_,'euclidean')
m_distortions.append(sum(np.min(tmp,axis=1))/X.shape[0])
fig,ax=subplots()
fig.set_size_inches((8,5))
_=ax.plot(m_distortions,'-o',ms=10,color='gray')
_=ax.set_xlabel('K',fontsize=16)
_=ax.set_ylabel('Mean Distortion',fontsize=16)
ax.tick_params(labelsize='x-large')
# ax.set_aspect(1/1.6)
fig.savefig('fig-machine_learning/clustering_002.png')
```
<!-- dom:FIGURE: [fig-machine_learning/clustering_002.png, width=500 frac=0.75] The Mean Distortion shows that there is a diminishing value in using more clusters. <div id="fig:clustering_002"></div> -->
<!-- begin figure -->
<div id="fig:clustering_002"></div>
<p>The Mean Distortion shows that there is a diminishing value in using more clusters.</p>
<img src="fig-machine_learning/clustering_002.png" width=500>
<!-- end figure -->
Note that the code above uses `cluster_centers_`, which are
estimated by the K-means algorithm. The resulting [Figure](#fig:clustering_002) shows the point of diminishing returns for
adding additional clusters.
Another figure-of-merit is the silhouette coefficient, which measures
how compact and separated the individual clusters are. To compute the
silhouette coefficient, we need to compute the mean intra-cluster
distance for each sample ($a_i$) and the mean distance to the next
nearest cluster ($b_i$). Then, the silhouette coefficient for the
$i^{th}$ sample is
$$
\texttt{sc}_i = \frac{b_i-a_i}{\max(a_i,b_i)}
$$
The mean silhouette coefficient is just the mean of all these values
over all the samples. The best value is one and the worst is negative one,
with values near zero indicating overlapping clusters and negative values
showing that samples have been incorrectly assigned to the wrong cluster. This
figure-of-merit is implemented in Scikit-learn as in the following,
```
from sklearn.metrics import silhouette_score
def scatter_fit(X,y,ax):
_=kmeans.fit(X)
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='gray',marker='.')
_=ax.set_title('silhouette={:.3f}'.format(silhouette_score(X,kmeans.labels_)))
fig,axs = subplots(2,2,sharex=True,sharey=True)
np.random.seed(12)
ax=axs[0,0]
X,y=make_blobs(centers=[[0,0],[3,0]],n_samples=100)
scatter_fit(X,y,ax)
ax=axs[0,1]
X,y=make_blobs(centers=[[0,0],[10,0]],n_samples=100)
scatter_fit(X,y,ax)
ax=axs[1,0]
X,y=make_blobs(centers=[[0,0],[3,0]],n_samples=100,cluster_std=[.5,.5])
scatter_fit(X,y,ax)
ax=axs[1,1]
X,y=make_blobs(centers=[[0,0],[10,0]],n_samples=100,cluster_std=[.5,.5])
scatter_fit(X,y,ax)
fig.savefig('fig-machine_learning/clustering_003.png')
```
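As a hand-check of the formula, the silhouette coefficient of one sample can be computed directly for a tiny made-up 1-D configuration (pure Python, no Scikit-learn):

```python
# Silhouette coefficient by hand for two well-separated 1-D clusters.
A = [0.0, 1.0]    # the cluster containing the sample
B = [10.0, 11.0]  # the nearest other cluster

x = A[0]
a = sum(abs(x - p) for p in A if p != x) / (len(A) - 1)  # mean intra-cluster distance
b = sum(abs(x - p) for p in B) / len(B)                  # mean distance to next cluster
sc = (b - a) / max(a, b)
print(round(sc, 3))  # 0.905 -- close to 1, as expected for compact, separated clusters
```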
[Figure](#fig:clustering_003) shows how the silhouette coefficient varies
as the clusters become more dispersed and/or closer together.
<!-- dom:FIGURE: [fig-machine_learning/clustering_003.png, width=500 frac=0.85] The shows how the silhouette coefficient varies as the clusters move closer and become more compact. <div id="fig:clustering_003"></div> -->
<!-- begin figure -->
<div id="fig:clustering_003"></div>
<p>The shows how the silhouette coefficient varies as the clusters move closer and become more compact.</p>
<img src="fig-machine_learning/clustering_003.png" width=500>
<!-- end figure -->
K-means is easy to understand and to implement, but can be sensitive
to the initial choice of cluster-centers. The default initialization
method in Scikit-learn uses a very effective and clever randomization
to come up with the initial cluster-centers. Nonetheless, to see why
initialization can cause instability with K-means, consider the
following [Figure](#fig:clustering_004). In [Figure](#fig:clustering_004), there are two large clusters on the left and
a very sparse cluster on the far right. The large circles at the
centers are the cluster-centers that K-means found. Given $K=2$, how
should the cluster-centers be chosen? Intuitively, the first two
clusters should have their own cluster-center somewhere between them
and the sparse cluster on the right should have its own cluster-center [^kmeans].
Why isn't this happening?
[^kmeans]: Note that we are using the `init='random'` keyword argument for this
example in order to illustrate this.
```
X,y = make_blobs(centers=[[0,0],[5,0]],random_state=100,n_samples=200)
Xx,yx=make_blobs(centers=[[20,0]],random_state=100,n_samples=3)
X=np.vstack([X,Xx])
y=np.hstack([y,yx+2])
fig,axs=subplots(2,1,sharex=True,sharey=True)
ax=axs[0]
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='gray',marker='.',alpha=.3);
_=kmeans = KMeans(n_clusters=2,random_state=123,init='random')
_=kmeans.fit(X)
_=ax.set_aspect(1)
_=ax.plot(kmeans.cluster_centers_[:,0],kmeans.cluster_centers_[:,1],'o',color='gray',ms=15,alpha=.5)
X,y = make_blobs(centers=[[0,0],[5,0]],random_state=100,n_samples=200)
Xx,yx=make_blobs(centers=[[20,0]],random_state=100,n_samples=10)
X=np.vstack([X,Xx])
y=np.hstack([y,yx+2])
ax=axs[1]
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='gray',marker='.',alpha=.3);
kmeans = KMeans(n_clusters=2,random_state=123,init='random')
_=kmeans.fit(X)
_=ax.set_aspect(1)
_=ax.plot(kmeans.cluster_centers_[:,0],kmeans.cluster_centers_[:,1],'o',color='gray',ms=15,alpha=.8)
fig.savefig('fig-machine_learning/clustering_004.png')
```
<!-- dom:FIGURE: [fig-machine_learning/clustering_004.png, width=500 frac=0.85] The large circles indicate the cluster-centers found by the K-means algorithm. <div id="fig:clustering_004"></div> -->
<!-- begin figure -->
<div id="fig:clustering_004"></div>
<p>The large circles indicate the cluster-centers found by the K-means algorithm.</p>
<img src="fig-machine_learning/clustering_004.png" width=500>
<!-- end figure -->
The problem is that the objective function for K-means trades off the distance
to the far-off sparse cluster against its small size. If we keep increasing the
number of samples in the sparse cluster on the right, then K-means will move
the cluster centers out to meet them, as shown in [Figure](#fig:clustering_004). That is, if one of the initial cluster-centers was
right in the middle of the sparse cluster, then the algorithm would have
immediately captured it and then moved the next cluster-center to the middle of
the other two clusters (bottom panel of [Figure](#fig:clustering_004)).
Without some thoughtful initialization, this may not happen and the sparse
cluster would have been merged into the middle cluster (top panel of [Figure](#fig:clustering_004)). Furthermore, such problems are hard to visualize with
high-dimensional clusters. Nonetheless, K-means is generally very fast,
easy-to-interpret, and easy to understand. It is straightforward to parallelize
using the `n_jobs` keyword argument so that many initial cluster-centers can be
easily evaluated. Many extensions of K-means use different metrics beyond
Euclidean and incorporate adaptive weighting of features. This enables the
clusters to have ellipsoidal instead of spherical shapes.
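The initialization sensitivity described above can be reproduced in a few lines. Below is a bare-bones sketch of Lloyd's algorithm on hypothetical 1-D data (two tight clusters plus one far-off point); it is not Scikit-learn's implementation, and the data and starting centers are made up:

```python
def lloyd(data, centers, iters=20):
    centers = list(centers)
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        groups = {k: [] for k in range(len(centers))}
        for x in data:
            nearest = min(range(len(centers)), key=lambda k: (x - centers[k]) ** 2)
            groups[nearest].append(x)
        # update step: each center moves to the mean of its points
        centers = [sum(g) / len(g) if g else centers[k] for k, g in groups.items()]
    # final value of the K-means objective J
    return centers, sum(min((x - c) ** 2 for c in centers) for x in data)

data = [0, 0.5, 1, 5, 5.5, 6, 20]   # two tight clusters and one outlier
_, d1 = lloyd(data, [0, 20])        # one starting center sits on the outlier
_, d2 = lloyd(data, [0, 5])         # both starting centers on the left
print(d1 < d2)  # True: the two runs converge to different local minima
```

In this toy case the initialization that covers the outlier reaches a much lower distortion, mirroring the sparse-cluster discussion above; this is why a thoughtful default such as `k-means++` initialization matters.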
# Workflow 1, Module 3, Question 2
## What proteins produce agent [x]?
Let's run Q1 and use the endogenous output of that.
### Expand service
```
robokop_server = 'robokop.renci.org'
import requests
import pandas as pd
def expand(type1,identifier,type2,rebuild=None,csv=None,predicate=None):
url=f'http://{robokop_server}:80/api/simple/expand/{type1}/{identifier}/{type2}'
params = {'rebuild': rebuild,
'csv' : csv,
'predicate': predicate}
params = { k:v for k,v in params.items() if v is not None }
response = requests.get(url,params=params)
print( f'Return Status: {response.status_code}' )
if response.status_code == 200:
return response.json()
return []
def parse_answer(returnanswer):
nodes = [answer['nodes'][1] for answer in returnanswer['answers']]
edges = [answer['edges'][0] for answer in returnanswer['answers']]
answers = [ {"result_id": node["id"],
"result_name": node["name"] if 'name' in node else node['id'],
"type": edge["type"],
"source": edge['edge_source']}
for node,edge in zip(nodes,edges)]
return pd.DataFrame(answers)
```
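Since `expand` requires network access to the Robokop service, `parse_answer` can be exercised offline with a hand-built response of the same shape. The sketch below restates the function so it is self-contained; the identifiers and values in the mocked response are made up:

```python
import pandas as pd

# Re-stating parse_answer from above so this check is self-contained.
def parse_answer(returnanswer):
    nodes = [answer['nodes'][1] for answer in returnanswer['answers']]
    edges = [answer['edges'][0] for answer in returnanswer['answers']]
    answers = [{"result_id": node["id"],
                "result_name": node["name"] if 'name' in node else node['id'],
                "type": edge["type"],
                "source": edge['edge_source']}
               for node, edge in zip(nodes, edges)]
    return pd.DataFrame(answers)

# Hypothetical response: one answer whose second node has no 'name' field.
mock = {'answers': [
    {'nodes': [{'id': 'HGNC:1504'}, {'id': 'CHEBI:15377'}],
     'edges': [{'type': 'increases_synthesis_of', 'edge_source': 'kegg'}]},
]}
frame = parse_answer(mock)
print(frame['result_name'][0])  # falls back to the id: CHEBI:15377
```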
### Question 2
```
FANCD2 = 'HGNC:3585'
all_results = expand('gene',FANCD2,'chemical_substance',rebuild=True)
CASP3='HGNC:1504'
hmm = expand('gene',CASP3,'chemical_substance',rebuild=True)
caspframe = parse_answer(hmm)
caspframe
#caspframe['type'].value_counts()
preds=['increases_expression_of',
'increases_abundance_of',
'increases_activity_of',
'increases_stability_of',
'decreases_degradation_of',
'affects_metabolic_processing',
'affects_splicing_of',
'affects_transport_of',
'increases_response_to',
'affects_molecular_modification_of']
results = {}
for p in preds:
print(p)
results[p] = expand('gene',CASP3,'chemical_substance',predicate=p)
r = []
for p in preds:
if 'answers' in results[p]:
frame = parse_answer(results[p])
print(p,len(frame))
frame['predicate'] = p
r.append(frame)
all_results = pd.concat(r)
all_results
```
Are there any predicates indicating a production?
```
all_results['type'].value_counts()
all_results[all_results['type']=='positively_regulates__entity_to_entity']
```
We're finding enzymes involved in creating the endogenous chemicals. But why are our genes in EC format rather than translated to our usual HGNC identifiers? It's because these are non-human enzymes, and at the moment we're not synonymizing non-human genes. In fact, for the most part we don't accept non-human genes, but we're not checking for that in the KEGG client.
```
all_results[all_results['type']=='increases_synthesis_of']
```
So here's an example of a metabolite (`PUBCHEM:273429`) that is increased by the action of a gene `ABC1`
### Cystic Fibrosis
```
cf = 'MONDO:0009061'
result = expand('disease', cf, 'chemical_substance')  # assuming disease -> chemical_substance is the intended query here
result_frame = parse_answer(result)
endogenous = result_frame[ result_frame['type'] == 'related_to' ]
endogenous
identifiers = list(endogenous['result_id'])
genes = {}
for ident in identifiers:
print(ident)
genes[ident] = expand('chemical_substance',ident,'gene',rebuild=True)
results = []
for chemical in genes:
if 'answers' in genes[chemical]:
frame = parse_answer(genes[chemical])
frame['Chemical'] = chemical
results.append(frame)
all_results = pd.concat(results)
all_results['type'].value_counts()
all_results[all_results['type']=='positively_regulates__entity_to_entity']
all_results[all_results['type']=='increases_synthesis_of']
expand('chemical_substance',ident,'gene',predicate='increases_degradation_of')
```
## 1: Jeopardy Questions
```
import pandas as pd
jeopardy = pd.read_csv('jeopardy.csv')
jeopardy.head()
jeopardy.columns
jeopardy.columns = ['Show Number', 'Air Date', 'Round', 'Category', 'Value', 'Question', 'Answer']
```
## 2: Normalizing Text
```
import re
def norm(string):
string = string.lower()
string = re.sub("[^A-Za-z0-9\s]", "", string)
return(string)
jeopardy["clean_question"] = jeopardy["Question"].apply(norm)
jeopardy["clean_answer"] = jeopardy["Answer"].apply(norm)
```
## 3: Normalizing Columns
```
def norm_value(string):
string = re.sub("[^A-Za-z0-9\s]", "", string)
try:
string =int(string)
except Exception:
string = 0
return(string)
jeopardy["clean_value"] = jeopardy["Value"].apply(norm_value)
jeopardy["Air Date"] = pd.to_datetime(jeopardy["Air Date"])
jeopardy.head()
jeopardy.dtypes
```
## 4: Answers In Questions
```
# How often the answer is deducible from the question.
def count_matches(row):
split_answer = row['clean_answer'].split(' ')
split_question = row['clean_question'].split(' ')
match_count = 0
if 'the' in split_answer: split_answer.remove("the")
if len(split_answer)==0: return 0
for item in split_answer:
if item in split_question:
match_count +=1
return match_count / len(split_answer)
jeopardy["answer_in_question"] = jeopardy.apply(count_matches, axis=1)
jeopardy["answer_in_question"].mean()
```
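As a hand-check, the same matching logic can be run on made-up rows. The standalone helper below restates the per-row computation (note the membership test compares answer words against the question's words):

```python
# Sanity check of the answer-in-question ratio on made-up rows.
def answer_in_question(clean_answer, clean_question):
    split_answer = clean_answer.split(" ")
    split_question = clean_question.split(" ")
    if "the" in split_answer:
        split_answer.remove("the")
    if len(split_answer) == 0:
        return 0
    match_count = sum(1 for w in split_answer if w in split_question)
    return match_count / len(split_answer)

print(answer_in_question("the rolling stones",
                         "this band recorded paint it black in 1966"))  # 0.0
print(answer_in_question("the sistine chapel",
                         "who painted the ceiling of the sistine chapel"))  # 1.0
```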
### How often the answer is deducible from the question
The answer only appears in the question about 6% of the time. This isn't a huge number, and means that we probably can't just hope that hearing a question will enable us to figure out the answer. We'll probably have to study.
## 5: Recycled Questions
```
question_overlap = []
terms_used = set()
for i, row in jeopardy.iterrows():
split_question = row["clean_question"].split(" ")
split_question = [q for q in split_question if len(q) > 5]
match_count = 0
for word in split_question:
if word in terms_used:
match_count += 1
for word in split_question:
terms_used.add(word)
if len(split_question) > 0:
match_count /= len(split_question)
question_overlap.append(match_count)
jeopardy["question_overlap"] = question_overlap
jeopardy["question_overlap"].mean()
```
### How often new questions are repeats of older questions.
There is about 70% overlap between terms in new questions and terms in old questions. This only looks at a small set of questions, and it doesn't look at phrases, it looks at single terms. This makes it relatively insignificant, but it does mean that it's worth looking more into the recycling of questions.
## 6: Low Value Vs High Value Questions
```
def determine_value(row):
value = 0
if row["clean_value"] > 800:
value = 1
return value
jeopardy["high_value"] = jeopardy.apply(determine_value, axis=1)
def count_usage(term):
low_count = 0
high_count = 0
for i, row in jeopardy.iterrows():
if term in row["clean_question"].split(" "):
if row["high_value"] == 1:
high_count += 1
else:
low_count += 1
return high_count, low_count
comparison_terms = list(terms_used)[:5]
observed_expected = []
for term in comparison_terms:
observed_expected.append(count_usage(term))
observed_expected
```
## 7: Applying The Chi-Squared Test
```
from scipy.stats import chisquare
import numpy as np
high_value_count = jeopardy[jeopardy["high_value"] == 1].shape[0]
low_value_count = jeopardy[jeopardy["high_value"] == 0].shape[0]
chi_squared = []
for obs in observed_expected:
total = sum(obs)
total_prop = total / jeopardy.shape[0]
high_value_exp = total_prop * high_value_count
low_value_exp = total_prop * low_value_count
observed = np.array([obs[0], obs[1]])
expected = np.array([high_value_exp, low_value_exp])
chi_squared.append(chisquare(observed, expected))
chi_squared
```
### Chi-squared results
None of the terms had a significant difference in usage between high value and low value rows. Additionally, the frequencies were all lower than 5, so the chi-squared test isn't as valid. It would be better to run this test with only terms that have higher frequencies.
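The statistic itself is easy to verify by hand. A sketch for one hypothetical term with made-up counts:

```python
# Chi-squared statistic by hand for one hypothetical term.
high_value_count, low_value_count = 5, 15
obs_high, obs_low = 2, 2                     # observed usage of the term

total = obs_high + obs_low
total_prop = total / (high_value_count + low_value_count)
exp_high = total_prop * high_value_count     # 0.2 * 5  = 1.0
exp_low = total_prop * low_value_count       # 0.2 * 15 = 3.0

chi2 = (obs_high - exp_high) ** 2 / exp_high + (obs_low - exp_low) ** 2 / exp_low
print(chi2)  # 1.0 + 1/3 = 1.333...
```

Both expected counts here are below 5, which is exactly the validity concern noted above.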
# Text Data Preprocessing
How to prepare text data for deep learning.
Deep learning models cannot process text directly; the text must be converted to numbers first.
Raw text cannot be fed straight into a deep learning model.
Text data must be encoded as numbers before it can be used as the input or output of machine learning and deep learning models.
* Convenient methods you can use to prepare text data quickly:
* Bag of Words (BoW)
* Tokenizer API
# Bag of Words (BoW)
Before text or words can be used, they must go through a preprocessing step that converts them into numerical form. BoW is one way to represent text as numerical vectors:
1. Convert the whole document into unique tokens.
2. Count how often each word appears in a given document to build that document's feature vector.
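The two steps above can be sketched in plain Python (toy documents; no scikit-learn):

```python
# Bag-of-words by hand: build a vocabulary of unique tokens (step 1),
# then count each token's occurrences per document (step 2).
docs = ["the sun is shining", "the weather is sweet"]

vocab = sorted({w for d in docs for w in d.split()})
vectors = [[d.split().count(w) for w in vocab] for d in docs]

print(vocab)    # ['is', 'shining', 'sun', 'sweet', 'the', 'weather']
print(vectors)  # [[1, 1, 1, 0, 1, 0], [1, 0, 0, 1, 1, 1]]
```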
```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
```
### CountVectorizer
CountVectorizer accepts the following arguments for this task:
* stop_words : string {'english'}, list, or None (default)
* Stop-word list. If 'english', the built-in English stop-word list is used.
* analyzer : string {'word', 'char', 'char_wb'} or callable
Word n-grams, character n-grams, or character n-grams restricted to word boundaries
* token_pattern : string
Regular expression that defines a token
* tokenizer : callable or None (default)
Function used to generate tokens.
* ngram_range : (min_n, max_n) tuple
Range of n-gram sizes
* max_df : integer or float in [0.0, 1.0]; default 1.0
Maximum document frequency for a term to be included in the vocabulary
* min_df : integer or float in [0.0, 1.0]; default 1
Minimum document frequency for a term to be included in the vocabulary
Printing the contents of the learned vocabulary gives the following.
```
print(count.vocabulary_)
print(bag.toarray())
```
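For example, the effect of the default `token_pattern` (scikit-learn's documented default, `r"(?u)\b\w\w+\b"`, selects tokens of two or more word characters) can be seen directly with the `re` module:

```python
import re

# scikit-learn's default token_pattern keeps only tokens of 2+ word characters,
# so single-letter words like "a" are dropped.
pattern = r"(?u)\b\w\w+\b"
text = "a cat sat on a mat".lower()  # CountVectorizer lowercases before tokenizing
print(re.findall(pattern, text))     # ['cat', 'sat', 'on', 'mat']
```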
# Text Data Preprocessing for RNNs
Unlike BoW, text preprocessing for an RNN only needs the set of unique words; the word frequencies produced as a by-product are not required.
* Split the text into words with text_to_word_sequence.
* Tokenizer API
### Splitting words with text_to_word_sequence
When working with text, the first step is to split the text into words.
The words are called tokens, and the process of splitting text into tokens is called tokenization.
The `text_to_word_sequence()` function automatically performs the following:
* Splits words on whitespace (split=" ").
* Filters out punctuation (filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n').
* Converts the text to lowercase (lower=True).
```
from tensorflow.keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'The quick brown fox jumped over the lazy dog.'
# tokenize the document
result = text_to_word_sequence(text)
print(result)
```
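The same three operations can be approximated with the standard library alone (a sketch, not a drop-in replacement for the Keras function):

```python
# Approximate text_to_word_sequence: lowercase, replace the default filter
# characters with spaces, split on whitespace.
filters = '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n'

def simple_word_sequence(text):
    table = str.maketrans(filters, " " * len(filters))
    return text.lower().translate(table).split()

print(simple_word_sequence("The quick brown fox jumped over the lazy dog."))
# ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
```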
### Tokenizer API
Provides a more sophisticated, reusable API for preparing text, suitable for handling multiple text documents.
`Tokenizer` class reference: https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer
The Tokenizer provides four attributes you can use to query what it has learned from your documents:
* word_counts : dictionary of words and their counts.
* word_docs : dictionary of words and the number of documents each word appears in.
* word_index : dictionary of words and their uniquely assigned integers.
* document_count : integer count of the total number of documents used to fit the Tokenizer.
```
from tensorflow.keras.preprocessing.text import Tokenizer
# define 5 documents
docs = ['Well done!',
'Good work',
'Great effort',
'nice work',
'Excellent!']
# create the tokenizer
t = Tokenizer()
# fit the tokenizer on the documents
t.fit_on_texts(docs)
?Tokenizer()
# summarize what was learned
print(t.word_counts)
print(t.document_count)
print(t.word_index)
print(t.word_docs)
```
The `texts_to_matrix()` method
* The Tokenizer's `texts_to_matrix()` function creates one vector per input document.
* The length of each vector is the total size of the vocabulary.
* It provides a suite of standard bag-of-words text-encoding schemes:
 - 'binary': whether or not each word is present in the document. This is the default.
 - 'count': the count of each word in the document.
 - 'tfidf': the Term Frequency-Inverse Document Frequency (TF-IDF) score for each word in the document.
 - 'freq': the frequency of each word as a ratio of the words within each document.
```
# integer encode documents
encoded_docs = t.texts_to_matrix(docs, mode='count')
print(encoded_docs)
```
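The mode='count' encoding above can be reproduced by hand. Keras reserves index 0, so the matrix has one more column than there are vocabulary words; the sketch below mimics that convention with lowercased toy versions of the five documents:

```python
from collections import Counter

# Reproduce mode='count' by hand: one row per document, one column per
# vocabulary index, with column 0 left unused (as Keras reserves index 0).
docs = ['well done', 'good work', 'great effort', 'nice work', 'excellent']

# Rank words by frequency (ties broken by first appearance), like the Tokenizer.
counts = Counter(w for d in docs for w in d.split())
word_index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

matrix = [[0] * (len(word_index) + 1) for _ in docs]
for row, d in zip(matrix, docs):
    for w in d.split():
        row[word_index[w]] += 1

print(word_index['work'])  # 1 -- 'work' is the most frequent word
print(matrix[1])           # counts at word_index['work'] and word_index['good']
```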
The `texts_to_sequences()` method
* The Tokenizer's texts_to_sequences() converts the text of each document into a sequence of integers.
```
print(docs)
t.texts_to_sequences(docs)
```
# OUTDATED, the examples moved to the manual
## See https://empymod.readthedocs.io/en/stable/examples
----
# Comparison between full wavefield and diffusive approximation for a fullspace
Play around to see that the difference gets bigger for
- higher frequencies,
- higher eperm/mperm.
```
import numpy as np
import matplotlib.pyplot as plt
import empymod as epm
# Style adjustments
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.dpi'] = 100
```
### Define model
```
x = (np.arange(1051))*10-500
rx = np.repeat([x,],np.size(x),axis=0)
ry = rx.transpose()
zsrc= 150
zrec = 200
res = 1/3
freq = 0.5
ab = 11
aniso = np.sqrt(3/.3)
perm = 1
inp = {'src': [0, 0, zsrc],
'rec': [rx.ravel(), ry.ravel(), zrec],
'res': res,
'freqtime': freq,
'aniso': aniso,
'ab': ab,
'epermH': perm,
'epermV': perm,
'mpermH': perm,
'mpermV': perm,
'verb': 0}
```
### Calculation
```
# Diffusive fullspace approximation (solution='dfs')
hs = epm.analytical(**inp, solution='dfs')
hs = hs.reshape(np.shape(rx))
# Full-wavefield fullspace (default solution)
fs = epm.analytical(**inp)
fs = fs.reshape(np.shape(rx))
# Relative error (%)
amperr = np.abs((np.abs(fs) - np.abs(hs))/np.abs(fs))*100
phaerr = np.abs((np.angle(fs) - np.angle(hs))/np.angle(fs))*100
```
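The two error measures can be checked on a single pair of complex field values (made-up numbers, standard library only):

```python
import cmath

# Relative amplitude and phase errors (in percent) between a full-wavefield
# value fs and its diffusive approximation hs -- the sample values are made up.
fs = 3 + 4j   # |fs| = 5, phase = atan2(4, 3)
hs = 3 + 3j   # |hs| = 3*sqrt(2), phase = pi/4

amperr = abs((abs(fs) - abs(hs)) / abs(fs)) * 100
phaerr = abs((cmath.phase(fs) - cmath.phase(hs)) / cmath.phase(fs)) * 100
print(round(amperr, 2), round(phaerr, 2))
```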
### Plot
```
fig, axs = plt.subplots(figsize=(10, 4.2), nrows=1, ncols=2, sharex=True, sharey=True)
# Min and max, properties
vmin = 1e-10
vmax = 1e0
props = {'levels': np.logspace(np.log10(vmin), np.log10(vmax), 50),
'locator': plt.matplotlib.ticker.LogLocator(), 'cmap': 'Greys'}
# Plot amplitude error
plt.sca(axs[0])
plt.title(r'(a) Amplitude')
cf1 = plt.contourf(rx/1000, ry/1000, amperr.clip(vmin, vmax), **props)
plt.ylabel('Crossline offset (km)')
plt.xlabel('Inline offset (km)')
plt.xlim(min(x)/1000, max(x)/1000)
plt.ylim(min(x)/1000, max(x)/1000)
plt.axis('equal')
# Plot phase error
plt.sca(axs[1])
plt.title(r'(b) Phase')
cf2 = plt.contourf(rx/1000, ry/1000, phaerr.clip(vmin, vmax), **props)
plt.xlabel('Inline offset (km)')
plt.xlim(min(x)/1000, max(x)/1000)
plt.ylim(min(x)/1000, max(x)/1000)
plt.axis('equal')
# Title
plt.suptitle('Analytical fullspace solution\nDifference between full wavefield and diffusive approximation.',
y=1.1)
# Plot colorbar
cax, kw = plt.matplotlib.colorbar.make_axes([axs[0], axs[1]], location='right', fraction=.05, pad=0.05, aspect=20)
cb = plt.colorbar(cf2, cax=cax, ticks=10**(-(np.arange(13.)[::-1])+2), **kw)
cb.set_label(r'Relative Error $(\%)$')
# Show
plt.show()
epm.versions('HTML')
```
## 1--Spec with Ferry Downstream Task
## Wav Temporal Order Self-Supervised Learning from Birdsong Applied to Ferry Motor Classification.
Self-supervised model, extracted weights, loaded into a custom model.
Last updated: June 10
```
from __future__ import print_function
%matplotlib inline
import matplotlib
matplotlib.style.use('seaborn-white')
import matplotlib.pyplot as plt
from IPython.display import Markdown, display, SVG
import numpy as np
from time import time
np.random.seed(10)
import tensorflow as tf
import pandas as pd
import pyAudioAnalysis
from pyAudioAnalysis import audioBasicIO
from pyAudioAnalysis import ShortTermFeatures, MidTermFeatures
import resampy
import librosa
from librosa import cqt
import scipy.io.wavfile as wavfile
import plotly
import plotly.graph_objs as go
from scipy.signal import medfilt as mf
import os
import os.path
from os.path import isfile, join
from os import listdir
import sklearn
from sklearn import preprocessing
np.set_printoptions(suppress=True)
import argparse
from pydub import AudioSegment
import random
birds_home = os.path.join('/home/pattyry/birds/')
#cwd_audio = os.path.join("..","..","birds")
os.chdir(birds_home)
os.getcwd()
cqt_train = os.path.join('./cqt-sequences_train')
cqt_test = os.path.join('./cqt-sequences_test')
spec_cqt_train = os.path.join('./spec-aug-sequences_train')
spec_cqt_test = os.path.join('./spec-aug-sequences_test')
new_train_test = os.path.join('./new-train-test')
print(cqt_train)
nb_classes = 2
img_rows, img_cols = 70, 112
os.chdir(birds_home)
os.chdir(cqt_train)
os.getcwd()
train_x_cqt_1=pd.read_csv('a_train_cqt.csv', sep=',',header=None)
print(train_x_cqt_1.shape)
train_x_cqt_1 = np.asarray(train_x_cqt_1)
train_x_cqt_2=pd.read_csv('b_train_cqt.csv', sep=',',header=None)
print(train_x_cqt_2.shape)
train_x_cqt_2 = np.asarray(train_x_cqt_2)
train_x_cqt_3=pd.read_csv('c_train_cqt.csv', sep=',',header=None)
print(train_x_cqt_3.shape)
train_x_cqt_3 = np.asarray(train_x_cqt_3)
train_x_cqt_4=pd.read_csv('d_train_cqt.csv', sep=',',header=None)
print(train_x_cqt_4.shape)
train_x_cqt_4 = np.asarray(train_x_cqt_4)
train_y_cqt=pd.read_csv('labels_train_cqt.csv', sep=',',header=None)
print(train_y_cqt.shape)
train_y_cqt = np.asarray(train_y_cqt)
os.chdir(birds_home)
os.chdir(cqt_test)
os.getcwd()
test_x_cqt_1=pd.read_csv('a_test_cqt.csv', sep=',',header=None)
print(test_x_cqt_1.shape)
test_x_cqt_1 = np.asarray(test_x_cqt_1)
test_x_cqt_2=pd.read_csv('b_test_cqt.csv', sep=',',header=None)
print(test_x_cqt_2.shape)
test_x_cqt_2 = np.asarray(test_x_cqt_2)
test_x_cqt_3=pd.read_csv('c_test_cqt.csv', sep=',',header=None)
print(test_x_cqt_3.shape)
test_x_cqt_3 = np.asarray(test_x_cqt_3)
test_x_cqt_4=pd.read_csv('d_test_cqt.csv', sep=',',header=None)
print(test_x_cqt_4.shape)
test_x_cqt_4 = np.asarray(test_x_cqt_4)
test_y_cqt=pd.read_csv('labels_test_cqt.csv', sep=',',header=None)
print(test_y_cqt.shape)
test_y_cqt = np.asarray(test_y_cqt)
os.chdir(birds_home)
os.chdir(spec_cqt_train)
os.getcwd()
os.listdir()
# train_x_spec_1=pd.read_csv('a_train.csv', sep=',',header=None)
# print(train_x_spec_1.shape)
# train_x_spec_1 = np.asarray(train_x_spec_1)
# train_x_spec_2=pd.read_csv('b_train.csv', sep=',',header=None)
# print(train_x_spec_2.shape)
# train_x_spec_2 = np.asarray(train_x_spec_2)
# train_x_spec_3=pd.read_csv('c_train.csv', sep=',',header=None)
# print(train_x_spec_3.shape)
# train_x_spec_3 = np.asarray(train_x_spec_3)
# train_x_spec_4=pd.read_csv('d_train.csv', sep=',',header=None)
# print(train_x_spec_4.shape)
# train_x_spec_4 = np.asarray(train_x_spec_4)
# train_y_spec=pd.read_csv('labels_train.csv', sep=',',header=None)
# print(train_y_spec.shape)
# train_y_spec = np.asarray(train_y_spec)
os.chdir(birds_home)
os.chdir(spec_cqt_test)
os.getcwd()
os.listdir()
test_x_spec_1=pd.read_csv('a_test.csv', sep=',',header=None)
print(test_x_spec_1.shape)
test_x_spec_1 = np.asarray(test_x_spec_1)
test_x_spec_2=pd.read_csv('b_test.csv', sep=',',header=None)
print(test_x_spec_2.shape)
test_x_spec_2 = np.asarray(test_x_spec_2)
test_x_spec_3=pd.read_csv('c_test.csv', sep=',',header=None)
print(test_x_spec_3.shape)
test_x_spec_3 = np.asarray(test_x_spec_3)
test_x_spec_4=pd.read_csv('d_test.csv', sep=',',header=None)
print(test_x_spec_4.shape)
test_x_spec_4 = np.asarray(test_x_spec_4)
test_y_spec=pd.read_csv('labels_test.csv', sep=',',header=None)
print(test_y_spec.shape)
test_y_spec = np.asarray(test_y_spec)
# Build sets of Samples
# Positive Sequence
# a, b, c
# Negative Sequences:
# b, a, d
# d, a, b
# train_x_cqt_1, train_x_cqt_2, train_x_cqt_3, train_x_cqt_4
# test_x_cqt_1, test_x_cqt_2, test_x_cqt_3, test_x_cqt_4
# train_y_cqt
# test_y_cqt
# train_x_spec_1, train_x_spec_2, train_x_spec_3, train_x_spec_4
# test_x_spec_1, test_x_spec_2, test_x_spec_3, test_x_spec_4
# train_y_spec
# test_y_spec
samples_train = len(train_x_cqt_1)
samples_test = len(test_x_cqt_1)
train_labels_positive = np.full((samples_train, 1), 1)
print(train_labels_positive.shape)
test_labels_positive = np.full((samples_test, 1), 1)
print(test_labels_positive.shape)
train_labels_negative = np.full((samples_train, 1), 0)
print(train_labels_negative.shape)
test_labels_negative = np.full((samples_test, 1), 0)
print(test_labels_negative.shape)
positive_train_type1_cqt = np.concatenate((train_x_cqt_1, train_x_cqt_2, train_x_cqt_3, train_labels_positive), axis=1)
positive_test_type1_cqt = np.concatenate((test_x_cqt_1, test_x_cqt_2, test_x_cqt_3, test_labels_positive), axis=1)
negative_train_type2_cqt = np.concatenate((train_x_cqt_2, train_x_cqt_1, train_x_cqt_4, train_labels_negative), axis=1)
negative_test_type2_cqt = np.concatenate((test_x_cqt_2, test_x_cqt_1, test_x_cqt_4, test_labels_negative), axis=1)
negative_train_type3_cqt = np.concatenate((train_x_cqt_4, train_x_cqt_1, train_x_cqt_2, train_labels_negative), axis=1)
negative_test_type3_cqt = np.concatenate((test_x_cqt_4, test_x_cqt_1, test_x_cqt_2, test_labels_negative), axis=1)
# positive_train_type1_spec = np.concatenate((train_x_spec_1, train_x_spec_2, train_x_spec_3, train_labels_positive), axis=1)
# positive_test_type1_spec = np.concatenate((test_x_spec_1, test_x_spec_2, test_x_spec_3, test_labels_positive), axis=1)
# negative_train_type2_spec = np.concatenate((train_x_spec_2, train_x_spec_1, train_x_spec_4, train_labels_negative), axis=1)
# negative_test_type2_spec = np.concatenate((test_x_spec_2, test_x_spec_1, test_x_spec_4, test_labels_negative), axis=1)
# negative_train_type3_spec = np.concatenate((train_x_spec_4, train_x_spec_1, train_x_spec_2, train_labels_negative), axis=1)
# negative_test_type3_spec = np.concatenate((test_x_spec_4, test_x_spec_1, test_x_spec_2, test_labels_negative), axis=1)
train_cqt = np.append(positive_train_type1_cqt, negative_train_type2_cqt, axis=0)
train_cqt = np.append(train_cqt, negative_train_type3_cqt, axis=0)
print(train_cqt.shape)
test_cqt = np.append(positive_test_type1_cqt,negative_test_type2_cqt, axis=0)
test_cqt = np.append(test_cqt,negative_test_type3_cqt, axis=0)
print(test_cqt.shape)
# NOTE: the reassignment below overrides the three-way concatenation above,
# keeping only the type-1 positive and type-2 negative sequence sets.
train_cqt = np.append(positive_train_type1_cqt, negative_train_type2_cqt, axis=0)
test_cqt = np.append(positive_test_type1_cqt,negative_test_type2_cqt, axis=0)
# train_spec = np.append(positive_train_type1_spec, negative_train_type2_spec, axis=0)
# train_spec = np.append(train_spec, negative_train_type3_spec, axis=0)
# print(train_spec.shape)
# test_spec = np.append(positive_test_type1_spec,negative_test_type2_spec, axis=0)
# test_spec = np.append(test_spec,negative_test_type3_spec, axis=0)
# print(test_spec.shape)
# train_cqt, test_cqt
# train_spec, test_spec
os.chdir(birds_home)
#os.chdir(spec_cqt_test)
os.getcwd()
os.listdir()
np.random.shuffle(train_cqt)
np.random.shuffle(test_cqt)
# np.random.shuffle(train_spec)
# np.random.shuffle(test_spec)
unrolled_cols = train_cqt.shape[1]
print(unrolled_cols)
train_cqt_y = train_cqt[:,-1]
test_cqt_y = test_cqt[:,-1]
print(train_cqt_y.shape)
print(test_cqt_y.shape)
# train_spec_y = train_spec[:,-1]
# test_spec_y = test_spec[:,-1]
# print(train_spec_y.shape)
# print(test_spec_y.shape)
train_cqt_x = train_cqt[:,0:(unrolled_cols-1)]
print(train_cqt_x.shape)
# train_spec_x = train_spec[:,0:(unrolled_cols-1)]
# print(train_spec_x.shape)
test_cqt_x = test_cqt[:,0:(unrolled_cols-1)]
print(test_cqt_x.shape)
# test_spec_x = test_spec[:,0:(unrolled_cols-1)]
# print(test_spec_x.shape)
os.chdir(birds_home)
os.chdir(new_train_test)
print(os.getcwd())
os.listdir()
np.savetxt('train_cqt_x.csv', train_cqt_x, delimiter=',')
# np.savetxt('train_spec_x.csv', train_spec_x, delimiter=',')
np.savetxt('test_cqt_x.csv', test_cqt_x, delimiter=',')
# np.savetxt('test_spec_x.csv', test_spec_x, delimiter=',')
# np.savetxt('train_spec_y.csv', train_spec_y, delimiter=',')
# np.savetxt('test_spec_y.csv', test_spec_y, delimiter=',')
np.savetxt('train_cqt_y.csv', train_cqt_y, delimiter=',')
np.savetxt('test_cqt_y.csv', test_cqt_y, delimiter=',')
# train_x = np.append(train_cqt_x, train_spec_x, axis=0)
# test_x = np.append(test_cqt_x, test_spec_x, axis=0)
# train_y = np.append(train_cqt_y, train_spec_y, axis=0)
# test_y = np.append(test_cqt_y, test_spec_y, axis=0)
train_x = train_cqt_x
test_x = test_cqt_x
train_y = train_cqt_y
test_y = test_cqt_y
print(train_x.shape)
print(test_x.shape)
print(train_y.shape)
print(test_y.shape)
train_labelled = np.asarray(pd.concat([pd.DataFrame(train_x), pd.DataFrame(train_y)], axis=1))
test_labelled = np.asarray(pd.concat([pd.DataFrame(test_x), pd.DataFrame(test_y)], axis=1))
print(train_labelled.shape)
print(test_labelled.shape)
np.random.shuffle(train_labelled)
np.random.shuffle(test_labelled)
test_labelled[0,:]
num_train_samples = 1526
num_test_samples = 978
train_y_shuffled = np.asarray(train_labelled)[:,-1]
print(train_y_shuffled.shape)
test_y_shuffled = np.asarray(test_labelled)[:,-1]
print(test_y_shuffled.shape)
train_y_ = np.array(train_y_shuffled).reshape(num_train_samples)
test_y_ = np.array(test_y_shuffled).reshape(num_test_samples)
print(train_y_)
print(test_y_)
unrolled_cols
train_x_shuffled = np.asarray(train_labelled)[:,0:(unrolled_cols-1)]
print(train_x_shuffled.shape)
test_x_shuffled = np.asarray(test_labelled)[:,0:(unrolled_cols-1)]
print(test_x_shuffled.shape)
# train_x_shuffled, train_y_shuffled
# test_x_shuffled, test_y_shuffled
#7840
```
## Shape Data for Model Inputs
```
# Split 'x' series back into sequences
train_x_1 = train_x_shuffled[:,0:7840]
train_x_2 = train_x_shuffled[:,7840:15680]
train_x_3 = train_x_shuffled[:,15680:]
print(train_x_3.shape)
test_x_1 = test_x_shuffled[:,0:7840]
test_x_2 = test_x_shuffled[:,7840:15680]
test_x_3 = test_x_shuffled[:,15680:]
#Define Single input shape
num_test_samples = test_x_shuffled.shape[0]
num_train_samples = train_x_shuffled.shape[0]
num_timesteps = 70
num_features = 112
print(num_test_samples)
print(num_train_samples)
print(num_timesteps)
print(num_features)
train_x_1 = np.array(train_x_1).reshape(num_train_samples, num_timesteps, num_features, 1)
train_x_2 = np.array(train_x_2).reshape(num_train_samples, num_timesteps, num_features, 1)
train_x_3 = np.array(train_x_3).reshape(num_train_samples, num_timesteps, num_features, 1)
print(train_x_1.shape)
test_x_1 = np.array(test_x_1).reshape(num_test_samples, num_timesteps, num_features, 1)
test_x_2 = np.array(test_x_2).reshape(num_test_samples, num_timesteps, num_features, 1)
test_x_3 = np.array(test_x_3).reshape(num_test_samples, num_timesteps, num_features, 1)
print(test_x_1.shape)
print(train_x_1.shape)
print(test_x_2.shape)
print(train_x_2.shape)
print(test_x_3.shape)
print(train_x_3.shape)
train_x_1 = np.float32(train_x_1)
test_x_1 = np.float32(test_x_1)
train_x_2 = np.float32(train_x_2)
test_x_2 = np.float32(test_x_2)
train_x_3 = np.float32(train_x_3)
test_x_3 = np.float32(test_x_3)
cqt_input_shape = train_x_1[0].shape
print(cqt_input_shape)
print(train_x_1.shape)
type(train_x_1)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import pywt
from matplotlib.image import imread
import numpy as np
import os
from scipy.io import wavfile
from pathlib import Path
import pywt
#import soundfile
import random
import pickle
from glob import iglob
from sklearn.metrics import confusion_matrix, classification_report
import pprint
random.seed(42)
import librosa
import scipy.signal
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import MaxPooling2D, Flatten, Conv2D
from tensorflow.keras.utils import to_categorical
import tensorflow.keras
from tensorflow.keras import initializers
from tensorflow.keras import regularizers
from tensorflow.keras import constraints
from tensorflow.keras.layers import Activation
# Import activations from tf.keras rather than standalone keras to avoid
# mixing the two APIs, which breaks on recent TensorFlow versions.
from tensorflow.keras.layers import PReLU, LeakyReLU, ELU
from tensorflow.keras.constraints import max_norm
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras import optimizers, losses, activations, models
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, LearningRateScheduler
from tensorflow.keras.layers import Dense, Input, Dropout, Convolution1D, MaxPool1D, GlobalMaxPool1D, GlobalAveragePooling1D,concatenate
from tensorflow.keras.layers import MaxPooling3D, Flatten, Conv3D
from tensorflow.keras import backend as K
from tensorflow.keras.layers import LeakyReLU,PReLU
import tensorflow.keras
from tensorflow.keras.losses import *
from tensorflow.keras.models import load_model
# def my_init(shape, dtype=None):
# vals=((float(shape[0], float(shape[1]))) * math.sqrt(2./float(shape[1])))
# return K.random_normal(vals, dtype=dtype)
import math
#Before prediction
K.clear_session()
# from tensorflow.compat.v1 import ConfigProto
# from tensorflow.compat.v1 import InteractiveSession
tf.compat.v1.disable_eager_execution()
# # #session = InteractiveSession.close()
# config = ConfigProto()
# config.gpu_options.allow_growth = True
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
from tensorflow.python.framework import ops
ops.reset_default_graph()
# import os
# os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
# os.environ["CUDA_VISIBLE_DEVICES"]="1"
len(train_y)
unique, counts = np.unique(train_y_, return_counts=True)
print(unique)
print(counts)
# If wanted, add class weighting
total_len = len(train_y)
class_weight = {0: total_len/np.sum(train_y_ == 0),
                1: total_len/np.sum(train_y_ == 1)}
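# (Sketch) Sanity check of the inverse-frequency weighting above on a tiny
# synthetic label vector: the rarer class receives the larger weight.
import numpy as np
_example_y = np.array([0, 0, 0, 1])
_example_weight = {c: len(_example_y) / np.sum(_example_y == c) for c in (0, 1)}
# class 1 appears once in four samples -> weight 4.0; class 0 -> ~1.33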
def get_model():
with tf.device("GPU:1"):
nclass = 2
initializer1 = tf.keras.initializers.RandomNormal
initializer2 = tf.keras.initializers.Zeros()
initializer3 = tf.keras.initializers.glorot_uniform(seed=None)
initializer4 = tf.keras.initializers.lecun_normal(seed=None)
initializer5 = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)
inp = Input(shape=cqt_input_shape)
img_1 = Conv2D(128, kernel_size=(7,7),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(inp)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = MaxPooling2D()(img_1)
img_1 = Conv2D(128, kernel_size=(3,3),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = MaxPooling2D()(img_1)
img_1 = Conv2D(256, kernel_size=(3,3),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.5)(img_1)
img_1 = Flatten()(img_1)
img_1 = Dense(512, kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001))(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.5)(img_1)
inp2 = Input(shape=cqt_input_shape)
img_2 = Conv2D(128, kernel_size=(7,7),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros())(inp2)
#img_2 = LeakyReLU(alpha=0.2)(img_2)
img_2 = Dropout(0.3)(img_2)
img_2 = MaxPooling2D()(img_2)
img_2 = Conv2D(128, kernel_size=(3,3),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(img_2)
#img_2 = LeakyReLU(alpha=0.2)(img_2)
img_2 = Dropout(0.3)(img_2)
img_2 = MaxPooling2D()(img_2)
img_2 = Conv2D(256, kernel_size=(3,3),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(img_2)
#img_2 = LeakyReLU(alpha=0.2)(img_2)
img_2 = Dropout(0.5)(img_2)
img_2 = Flatten()(img_2)
img_2 = Dense(512, kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001))(img_2)
#img_2 = LeakyReLU(alpha=0.2)(img_2)
img_2 = Dropout(0.5)(img_2)
inp3 = Input(shape=cqt_input_shape)
img_3 = Conv2D(128, kernel_size=(7,7),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(inp3)
#img_3 = LeakyReLU(alpha=0.2)(img_3)
img_3 = Dropout(0.3)(img_3)
img_3 = MaxPooling2D()(img_3)
img_3 = Conv2D(128, kernel_size=(3,3),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(img_3)
#img_3 = LeakyReLU(alpha=0.2)(img_3)
img_3 = Dropout(0.5)(img_3)
img_3 = MaxPooling2D()(img_3)
img_3 = Conv2D(256, kernel_size=(3,3),
kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
use_bias=True, bias_initializer=initializers.Zeros(),padding="valid")(img_3)
#img_3 = LeakyReLU(alpha=0.2)(img_3)
img_3 = Dropout(0.5)(img_3)
img_3 = Flatten()(img_3)
img_3 = Dense(512, kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001))(img_3)
#img_3 = LeakyReLU(alpha=0.2)(img_3)
img_3 = Dropout(0.5)(img_3)
concat_layer = keras.layers.concatenate([img_1, img_2, img_3])
concat_dense = LeakyReLU(alpha=0.2)(concat_layer)
concat_dense = Dense(256, kernel_initializer=initializer4, kernel_regularizer = regularizers.l2(0.001),
bias_initializer=initializers.Zeros(),use_bias=True)(concat_dense)
concat_dense = Dropout(0.5)(concat_dense)
concat_dense = Dense(16, kernel_regularizer = regularizers.l2(0.1))(concat_dense)
concat_dense = Dropout(0.5)(concat_dense)
output_layer = Dense(2,activation=activations.sigmoid)(concat_dense)
model = models.Model(inputs=[inp, inp2, inp3], outputs=[output_layer])
opt = optimizers.Adam(lr=0.0009, beta_1=0.9, beta_2=0.999, epsilon=1e-5, decay=.03, amsgrad=False)
model.compile(optimizer=opt, loss=losses.binary_crossentropy, metrics=['acc'])
model.summary()
return model
cqt_order_model = get_model()
cqt_order_model.summary()
mcp_save = ModelCheckpoint('self-supervised-temporal-order7_7.hdf5', save_best_only=True, monitor='val_acc', mode='max')
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
factor=.2,
patience=1,
verbose=0,
mode='auto',
min_delta=0.00001,
cooldown=0,
min_lr=0)
from tensorflow.keras.callbacks import EarlyStopping
birds_home = os.path.join('/home/pattyry/birds/')
#cwd_audio = os.path.join("..","..","birds")
os.chdir(birds_home)
os.getcwd()
history = cqt_order_model.fit([train_x_1, train_x_2, train_x_3],[to_categorical(train_y_)],
epochs=20,
#verbose=2,
batch_size=240,
validation_data=([test_x_1, test_x_2,test_x_3], to_categorical(test_y_)),
callbacks=[mcp_save, reduce_lr],
class_weight=class_weight)
import sys
os.path.dirname(sys.executable)
import matplotlib.pyplot as plt
#Accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
```
## Self-Supervised Learning
```
os.getcwd()
birds_home = os.path.join('/home/pattyry/birds/')
#cwd_audio = os.path.join("..","..","birds")
os.chdir(birds_home)
os.getcwd()
model = load_model('self-supervised-temporal-order7_7.hdf5')
# ferry_data = os.path.join('./ferry-data')
# os.chdir(ferry_data)
# print(ferry_data)
x1 = keras.Input(shape =(cqt_input_shape))
x2 = keras.Input(shape =(cqt_input_shape))
x3 = keras.Input(shape =(cqt_input_shape))
inputs=[x1, x2, x3]
layer_dict = dict([(layer.name, layer) for layer in model.layers])
print(layer_dict)
weights_list = []
# Collect the weights of every third layer up to layer 43. Note that
# set_weights() here copies a layer's weights onto itself (a no-op), so the
# loop effectively just reads the weights out into weights_list.
for layer in model.layers[:43:3]:
    layer_name = layer.name
    layer.set_weights(layer_dict[layer_name].get_weights())
    print(layer.get_weights())
    x = np.asarray(layer.get_weights())
    weights_list.append(np.asarray(x))
def get_model2():
with tf.device("GPU:1"):
nclass = 2
initializer1 = tf.keras.initializers.RandomNormal
initializer2 = tf.keras.initializers.Zeros()
initializer3 = tf.keras.initializers.glorot_uniform(seed=None)
initializer4 = tf.keras.initializers.lecun_normal(seed=None)
initializer5 = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)
inp = Input(shape=cqt_input_shape)
img_1 = Conv2D(128, kernel_size=(7,7),
kernel_initializer=initializer4,
trainable = False,
padding="valid")(inp)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = MaxPooling2D()(img_1)
img_1 = Conv2D(128, kernel_size=(3,3),
kernel_initializer=initializer4,
trainable = False,
padding="valid")(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = MaxPooling2D()(img_1)
img_1 = Conv2D(256, kernel_size=(3,3),
kernel_initializer=initializer4,
trainable = False,
padding="valid")(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = Flatten()(img_1)
img_1 = Dense(512, kernel_initializer=initializer4, trainable = True, kernel_regularizer = regularizers.l2(0.001)
)(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.4)(img_1)
img_1 = Dense(128, kernel_initializer=initializer4, trainable = True,
kernel_regularizer = regularizers.l2(0.001),
use_bias=True,
bias_initializer=initializer4,
)(img_1)
img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.5)(img_1)
img_1 = Dense(16, kernel_initializer=initializer4, trainable = True,
kernel_regularizer = regularizers.l2(0.01),
use_bias=True,
bias_initializer=initializer4,
)(img_1)
img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.5)(img_1)
output_layer = Dense(2,activation=activations.sigmoid)(img_1)
model = models.Model(inputs=[inp], outputs=[output_layer])
opt = optimizers.Adam(lr=0.00005, beta_1=0.9, beta_2=0.999, epsilon=1e-5, decay=.03, amsgrad=False)
model.compile(optimizer=opt, loss=losses.binary_crossentropy, metrics=['acc'])
model.summary()
return model
#Before prediction
K.clear_session()
# from tensorflow.compat.v1 import ConfigProto
# from tensorflow.compat.v1 import InteractiveSession
tf.compat.v1.disable_eager_execution()
# # #session = InteractiveSession.close()
# config = ConfigProto()
# config.gpu_options.allow_growth = True
from tensorflow.python.framework import ops
ops.reset_default_graph()
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
pre_trained = get_model2()
indexlayers = [1,4,7,10]
for layerweight in indexlayers:
pre_trained.layers[layerweight].set_weights(weights_list[layerweight])
print(weights_list[layerweight])
for indexlayer in indexlayers:
layer = pre_trained.layers[indexlayer]
layer.trainable = False
os.getcwd()
```
### Ferry Data Inputs
Pre-processed, mapped to the same sample rate, transformed to CQT, and unrolled into a 2D matrix
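A minimal sketch of that unrolling/reshape round-trip (the 70×112 dimensions come from the notebook; the values below are synthetic, not the actual ferry audio):

```
import numpy as np

num_timesteps, num_features = 70, 112
cqt_frame = np.random.rand(num_timesteps, num_features).astype(np.float32)

row = cqt_frame.reshape(-1)                              # unrolled 2D matrix -> one CSV row
restored = row.reshape(num_timesteps, num_features, 1)   # back to the model input shape

assert np.allclose(restored[..., 0], cqt_frame)
```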
```
os.chdir(birds_home)
cwd_ferry_audio = os.path.join("./ferry-data","ferry-cqt")
os.chdir(cwd_ferry_audio)
os.getcwd()
cqt_y_train =pd.read_csv('y_train.csv', sep=',',header=None)
print(cqt_y_train.shape)
cqt_y_train = np.asarray(cqt_y_train)
cqt_y_test =pd.read_csv('y_test.csv', sep=',',header=None)
print(cqt_y_test.shape)
cqt_y_test = np.asarray(cqt_y_test)
cqt_x_test =pd.read_csv('x_test.csv', sep=',',header=None)
print(cqt_x_test.shape)
cqt_x_test = np.asarray(cqt_x_test)
cqt_x_train =pd.read_csv('x_train.csv', sep=',',header=None)
print(cqt_x_train.shape)
cqt_x_train = np.asarray(cqt_x_train)
#Define Single input shape
num_test_samples = cqt_x_test.shape[0]
num_train_samples = cqt_x_train.shape[0]
num_timesteps = 70
num_features = 112
print(num_test_samples)
print(num_train_samples)
print(num_timesteps)
print(num_features)
cqt_x_test = np.array(cqt_x_test).reshape(num_test_samples, num_timesteps, num_features, 1)
cqt_x_train = np.array(cqt_x_train).reshape(num_train_samples, num_timesteps, num_features, 1)
cqt_x_test = np.float32(cqt_x_test)
cqt_x_train = np.float32(cqt_x_train)
print(cqt_x_test.shape)
print(cqt_x_train.shape)
print(cqt_y_test.shape)
print(cqt_y_train.shape)
```
### Train Model
```
# If wanted, add class weighting
total_len = len(cqt_y_train)
class_weight = {0: total_len/np.sum(cqt_y_train == 0),
                1: total_len/np.sum(cqt_y_train == 1)}
from_layer = 14
for layer in pre_trained.layers[:from_layer]:
layer.trainable = False
mcp_save_ = ModelCheckpoint('best_ferry_pretrained_7_7.hdf5', save_best_only=True, monitor='val_acc', mode='max')
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
factor=.1,
patience=2,
verbose=0,
mode='auto',
min_delta=0.0001,
cooldown=0,
min_lr=0)
from tensorflow.keras.callbacks import EarlyStopping
history = pre_trained.fit([cqt_x_train],[to_categorical(cqt_y_train)],
epochs=10,
batch_size=8,
validation_data=([cqt_x_test], to_categorical(cqt_y_test)),
callbacks=[mcp_save_, reduce_lr],
class_weight=class_weight)
import matplotlib.pyplot as plt
#Accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
## Without pretraining
def get_model3():
with tf.device("GPU:1"):
nclass = 2
initializer1 = tf.keras.initializers.RandomNormal
initializer2 = tf.keras.initializers.Zeros()
initializer3 = tf.keras.initializers.glorot_uniform(seed=None)
initializer4 = tf.keras.initializers.lecun_normal(seed=None)
initializer5 = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)
inp = Input(shape=cqt_input_shape)
img_1 = Conv2D(128, kernel_size=(7,7),
kernel_initializer=initializer4,
trainable = False,
padding="valid")(inp)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = MaxPooling2D()(img_1)
img_1 = Conv2D(128, kernel_size=(3,3),
kernel_initializer=initializer4,
trainable = False,
padding="valid")(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = MaxPooling2D()(img_1)
img_1 = Conv2D(256, kernel_size=(3,3),
kernel_initializer=initializer4,
trainable = False,
padding="valid")(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.3)(img_1)
img_1 = Flatten()(img_1)
img_1 = Dense(512, kernel_initializer=initializer4, trainable = True, kernel_regularizer = regularizers.l2(0.001)
)(img_1)
#img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.5)(img_1)
img_1 = Dense(128, kernel_initializer=initializer4, trainable = True,
kernel_regularizer = regularizers.l2(0.001),
use_bias=True,
bias_initializer=initializer4,
)(img_1)
img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.5)(img_1)
img_1 = Dense(16, kernel_initializer=initializer4, trainable = True,
kernel_regularizer = regularizers.l2(0.01),
use_bias=True,
bias_initializer=initializer4,
)(img_1)
img_1 = LeakyReLU(alpha=0.2)(img_1)
img_1 = Dropout(0.5)(img_1)
output_layer = Dense(2,activation=activations.sigmoid)(img_1)
model = models.Model(inputs=[inp], outputs=[output_layer])
opt = optimizers.Adam(lr=0.00006, beta_1=0.9, beta_2=0.999, epsilon=1e-5, decay=.03, amsgrad=False)
model.compile(optimizer=opt, loss=losses.binary_crossentropy, metrics=['acc'])
model.summary()
return model
not_pre_trained = get_model3()
history_not_pretrained = not_pre_trained.fit([cqt_x_train],[to_categorical(cqt_y_train)],
epochs=10,
batch_size=8,
validation_data=([cqt_x_test], to_categorical(cqt_y_test)),
callbacks=[mcp_save_, reduce_lr],
class_weight=class_weight)
model2 = load_model('10_epochs_not_pretrained_ferry_7-7.hdf5')
# freezing selective layers
# sentence_encoder.trainable = True
# is_trainable = False
# for layer in sentence_encoder.layers:
# if layer.name == 'last layer name':
# is_trainable = True
# if is_trainable:
# layer.trainable = True
# else:
# layer.trainable = False
```
This notebook verifies math in Appendix A. Perspective effect in Oh & Evans 2020.
```
from sympy import symbols, simplify, latex
from sympy import cos, sin, Matrix, diff, N
import numpy as np
ra, dec = symbols('alpha, delta')
vra,vdec,vr = symbols(r'v_\alpha, v_\delta, v_r')
vx,vy,vz = symbols('v_x v_y v_z')
delta_ra, delta_dec= symbols(r'\Delta\alpha \Delta\delta')
R = Matrix([
[-sin(ra), cos(ra), 0.],
[-sin(dec)*cos(ra), -sin(dec)*sin(ra), cos(dec)],
[cos(dec)*cos(ra), cos(dec)*sin(ra), sin(dec)]
])
R
simplify(R.inv()) == R.T
diff(R, ra)
diff(R, dec)
```
## General $\Delta v_\mathrm{sphere}$ to the first order
```
vvec = Matrix([
[vx],
[vy],
[vz]
])
delta_v_sphere = diff(R, ra)*vvec*delta_ra + diff(R, dec)*vvec*delta_dec
delta_v_sphere
```
We can express this with $v_\mathrm{sphere} = [v_\alpha,\,v_\delta,\,v_r]^T$ **at** $(\alpha,\,\delta)$.
Such a first-order correction has been applied in, e.g., Kuhn et al. 2019.
The limits of this are:
1. the mean velocity is estimated in the projected space, where the perspective effect is **baked in** already
2. it is correct only to first order in $\Delta \alpha$ and $\Delta \delta$
3. it assumes an absolute center at $(\alpha,\,\delta)$
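In compact matrix form, the first-order correction evaluated below is

$$\Delta \vec{v}_\mathrm{sphere} \approx \left(\frac{\partial R}{\partial \alpha}\,\Delta\alpha + \frac{\partial R}{\partial \delta}\,\Delta\delta\right) R^{T}\,\vec{v}_\mathrm{sphere},$$

with $\vec{v}_\mathrm{sphere} = [v_\alpha,\,v_\delta,\,v_r]^T$ taken at the reference point $(\alpha,\,\delta)$.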
```
vvec = R.T @ Matrix([[vra],[vdec],[vr]])
delta_v_sphere = diff(R, ra)*vvec*delta_ra + diff(R, dec)*vvec*delta_dec
simplify(delta_v_sphere)
print(latex(simplify(delta_v_sphere)))
```
## A special case: $\vec{v}_0$ is radial: perspective expansion/contraction
When $\vec{v}_0$ is exactly radial at $(\alpha,\,\delta)$:
```
v_radial = Matrix([
[0],
[0],
[vr]
])
v0 = R.T * v_radial
dMdrav0 = simplify(diff(R, ra) * v0)
dMddecv0 = simplify(diff(R, dec)*v0)
dMdrav0*delta_ra + dMddecv0*delta_dec
```
$$ \left[\begin{matrix} \Delta v_\alpha \\ \Delta v_\delta \end{matrix} \right] =
- \left[\begin{matrix} \cos\delta & 0 \\ 0 & 1 \end{matrix}\right] v_r
\left[ \begin{matrix} \Delta \alpha \\ \Delta \delta \end{matrix} \right] $$
Since $\cos\delta>0$ always, and noting that there is no cross-term, the signs of the projected velocity gradients $\delta v_\alpha$ and $\delta v_\delta$ depend only on the sign of $v_r$: when $v_r>0$ (receding), the projected velocities decrease outward, i.e., we see an apparent contraction, and vice versa.
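As a quick numerical illustration of this sign behaviour (the declination, radial velocity, and offsets below are illustrative values, not taken from the paper):

```
import numpy as np

dec = np.deg2rad(45.0)
v_r = 10.0                    # receding source, v_r > 0
d_angle = np.deg2rad(0.5)     # small offsets, Delta alpha = Delta delta

dv_alpha = -np.cos(dec) * v_r * d_angle   # projected gradient along alpha
dv_delta = -1.0 * v_r * d_angle           # projected gradient along delta
# both are negative: an apparent contraction for a receding source
```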
## Second-order terms
One can expand to second order as well. There will always be higher-order corrections, since the $\sin$ and $\cos$ series never terminate.
The next-order term will dominate the residual pattern.
```
delta_v_sphere2 = simplify(diff(R, ra, 2) *v0 * delta_ra**2 + diff(R, dec, 2) *v0 * delta_dec**2 + 2*diff(R, ra,dec) * v0 * delta_ra * delta_dec)
delta_v_sphere2.subs({ra:np.deg2rad(45), dec:np.deg2rad(45)})
delta_v_sphere2.subs({ra:np.deg2rad(135), dec:np.deg2rad(135)})
N(delta_v_sphere.subs({ra:np.deg2rad(45),dec:np.deg2rad(45), vx:5.0, vy:5.0, vz:7.07106781}), 3)
pos2 = N(delta_v_sphere.subs({ra:np.deg2rad(300),dec:np.deg2rad(45), vx:5.0, vy:5.0, vz:7.07106781}), 3)
N(delta_v_sphere.subs({ra:np.deg2rad(340),dec:np.deg2rad(-65), vx:5.0, vy:5.0, vz:7.07106781}), 3)
```
<h1> Preprocessing using tf.transform and Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using tf.transform and Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`. 
Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform, so make sure to use a supported combination:
* TFT 0.8.0
* TF 1.8 or higher
* Apache Beam [GCP] 2.5.0 or higher
```
%%bash
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz==2018.4
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
```
You need to restart your kernel to register the new installs before running the cells below.
```
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'  # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos'  # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
<h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
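The split itself can be sketched as follows (the `every` modulus and the MD5 stand-in for BigQuery's `FARM_FINGERPRINT` are illustrative assumptions, not the notebook's exact code):

```
import hashlib

def hashmonth(year, month):
    # stand-in for FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING)))
    return int(hashlib.md5('{}{}'.format(year, month).encode()).hexdigest(), 16)

def is_train(year, month, every=4):
    # roughly 3 out of every 4 year-months go to training; every record
    # from a given month always lands in the same split
    return hashmonth(year, month) % every > 0
```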
```
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
```
<h2> Create ML dataset using tf.transform and Dataflow </h2>
<p>
Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
<p>
Note that after you launch this, the notebook won't show you progress. Go to the Dataflow section of the GCP web console and monitor the running job. It took about <b>30 minutes</b> for me. If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
</pre>
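The idea behind saving the metadata can be sketched framework-free (the `analyze`/`transform` helpers below are hypothetical and only illustrate the pattern, not the tf.transform API): statistics are computed once over the training data, then the same stored statistics are applied at both training and prediction time.

```python
def analyze(values):
    # "Analyze" phase: compute statistics over the training data only.
    return {'mean': sum(values) / len(values)}

def transform(value, metadata):
    # "Transform" phase: apply the stored statistics -- the identical
    # code path runs at training and at serving time, so preprocessing
    # cannot drift between the two.
    return value - metadata['mean']

train_ages = [22.0, 30.0, 38.0]
metadata = analyze(train_ages)                 # saved alongside the model

train_features = [transform(v, metadata) for v in train_ages]
serving_feature = transform(26.0, metadata)    # reuses the same mean
```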
```
%%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
# cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 24,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
if test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
```
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c03_exercise_flowers_with_transfer_learning_solution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c03_exercise_flowers_with_transfer_learning_solution.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
# TensorFlow Hub
[TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.
These models can either be used as is, or they can be used for Transfer Learning.
Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.
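That workflow — keep a pretrained feature extractor frozen and fit only a small new head — can be sketched in plain NumPy (a toy stand-in with random "pretrained" weights, not a real pretrained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: its weights stay frozen throughout.
W_base = rng.normal(size=(64, 16))
def features(x):
    return np.maximum(x @ W_base, 0.0)   # ReLU(x @ W), never updated

# Toy data whose labels are linearly separable in the frozen feature space.
X = rng.normal(size=(200, 64))
w_true = rng.normal(size=16)
y = (features(X) @ w_true > 0).astype(float)

# Retrain ONLY the new head: a logistic-regression layer on top of
# standardized frozen features (standardizing keeps gradient descent stable).
F = features(X)
F = (F - F.mean(axis=0)) / F.std(axis=0)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= 0.1 * (F.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

train_acc = (((F @ w + b) > 0) == (y > 0.5)).mean()
```

Only `w` and `b` (17 numbers) are trained; the 1,024 weights in `W_base` never change, which is why transfer learning is so much cheaper than training from scratch.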
Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).
Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above.
# Imports
Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
```
# TODO: Download the Flowers Dataset using TensorFlow Datasets
In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasets#tf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use the `split` argument to divide this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
```
(training_set, validation_set), dataset_info = tfds.load(
'tf_flowers',
split=['train[:70%]', 'train[70%:]'],
with_info=True,
as_supervised=True,
)
```
# TODO: Print Information about the Flowers Dataset
Now that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
```
num_classes = dataset_info.features['label'].num_classes
num_training_examples = 0
num_validation_examples = 0
for example in training_set:
num_training_examples += 1
for example in validation_set:
num_validation_examples += 1
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
```
The images in the Flowers dataset are not all the same size.
```
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
```
# TODO: Reformat Images and Create Batches
In the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
```
IMAGE_RES = 224
def format_image(image, label):
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
BATCH_SIZE = 32
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
```
# Do Simple Transfer Learning with TensorFlow Hub
Let's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset.
### TODO: Create a Feature Extractor
In the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
```
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES, 3))
```
### TODO: Freeze the Pre-Trained Model
In the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
```
feature_extractor.trainable = False
```
### TODO: Attach a classification head
In the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
```
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(num_classes)
])
model.summary()
```
### TODO: Train the model
In the cell below train this model like any other, by first calling `compile` and then followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
```
model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 6
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
```
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet).
# TODO: Plot Training and Validation Graphs
In the cell below, plot the training and validation accuracy/loss graphs.
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.
One reason for this is that validation performance is measured at the end of the epoch, while training performance is the average across the epoch.
The bigger reason, though, is that we're reusing a large part of MobileNet, which was pretrained on ImageNet and has already seen flower images.
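The first point is easy to see numerically (a toy illustration, not data from this notebook): if the loss falls steadily within an epoch, the epoch-averaged training metric that Keras reports is worse than the end-of-epoch value the validation pass sees.

```python
import numpy as np

# Suppose the loss drops linearly from 1.0 to 0.4 over the batches of
# one epoch. Keras reports the average of these per-batch values as the
# training loss, while validation runs once, at the end of the epoch.
per_batch_loss = np.linspace(1.0, 0.4, num=100)
reported_train_loss = per_batch_loss.mean()   # averaged across the epoch
end_of_epoch_loss = per_batch_loss[-1]        # what validation "sees"
```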
# TODO: Check Predictions
In the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
```
class_names = np.array(dataset_info.features['label'].names)
print(class_names)
```
### TODO: Create an Image Batch and Make Predictions
In the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
```
image_batch, label_batch = next(iter(train_batches))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
print(predicted_class_names)
```
### TODO: Print True Labels and Predicted Indices
In the cell below, print the true labels and the indices of predicted labels.
```
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
```
# Plot Model Predictions
```
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
```
# TODO: Perform Transfer Learning with the Inception Model
Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception takes as input images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2.
```
IMAGE_RES = 299
(training_set, validation_set), dataset_info = tfds.load(
'tf_flowers',
with_info=True,
as_supervised=True,
split=['train[:70%]', 'train[70%:]'],
)
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES, 3),
trainable=False)
model_inception = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model_inception.summary()
model_inception.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 6
history = model_inception.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
```
##### salimt
Below is code with a link to a happy or sad dataset which contains 80 images, 40 happy and 40 sad.
Create a convolutional neural network that trains to 100% accuracy on these images, and cancels training upon hitting a training accuracy of >.999.
Hint -- it will work best with 3 convolutional layers.
```
import tensorflow as tf
import os
import zipfile
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab happy-or-sad.zip from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/happy-or-sad.zip"
zip_ref = zipfile.ZipFile(path, 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()
# GRADED FUNCTION: train_happy_sad_model
def train_happy_sad_model():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
DESIRED_ACCURACY = 0.999
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>=DESIRED_ACCURACY):
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
# This Code Block should Define and Compile the Model. Please assume the images are 150 X 150 in your implementation.
model = tf.keras.models.Sequential([
# Your Code Here
# Note the input shape is the desired size of the image 150x150 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 is for one class ('happy') and 1 for the other ('sad')
tf.keras.layers.Dense(1, activation='sigmoid')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['accuracy'])
# This code block should create an instance of an ImageDataGenerator called train_datagen
# And a train_generator by calling train_datagen.flow_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1/255)
# Please use a target_size of 150 X 150.
train_generator = train_datagen.flow_from_directory(
'/tmp/h-or-s/', # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Expected output: 'Found 80 images belonging to 2 classes'
# This code block should call model.fit_generator and train for
# a number of epochs.
# model fitting
history = model.fit(
train_generator,
steps_per_epoch=8,
epochs=15,
verbose=1,
callbacks=[callbacks])
# model fitting
return history.history['acc'][-1]
# The expected output: "Reached 99.9% accuracy so cancelling training!"
train_happy_sad_model()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
// Save the notebook
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
```
# H0 Hyperparameter Tuning - ResConvLSTM
#### Author: Jayant Verma
#### Cognibit Solutions LLP
Derived from https://arxiv.org/pdf/1610.03022.pdf,
1. No conv(3x3)/2 used
2. Added an extra dense layer of 256 units
83.8% on val set
```
import sys
import os
import tensorflow as tf
sys.path.append("../libs")
from classification import input_data
from classification import models
from classification import trainer
from classification import freeze
import hyperopt
```
Change the data directory below to point to the data folder you want to use.
### Flags
```
flags=tf.app.flags
#Important Directories
flags.DEFINE_string('data_dir','../data/raw','Train Data Folder')
flags.DEFINE_string('summaries_dir','../summaries','Summaries Folder')
flags.DEFINE_string('train_dir','../logs&checkpoint','Directory to write event logs and checkpoint')
flags.DEFINE_string('models_dir','../models','Models Folder')
#Task Specific Parameters
flags.DEFINE_string('wanted_words','yes,no,up,down,left,right,on,off,stop,go','Wanted Words')
flags.DEFINE_float('validation_percentage',10,'Validation Percentage')
flags.DEFINE_float('testing_percentage',10,'Testing Percentage')
flags.DEFINE_integer('sample_rate',16000,'Sample Rate')
flags.DEFINE_integer('clip_duration_ms',1000,'Clip Duration in ms')
flags.DEFINE_float('window_size_ms',30,'How long each spectrogram timeslice is')
flags.DEFINE_float('window_stride_ms',10.0,'How far to move in time between frequency windows.')
flags.DEFINE_integer('dct_coefficient_count',40,'How many bins to use for the MFCC fingerprint')
flags.DEFINE_float('time_shift_ms',100.0,'Range to randomly shift the training audio by in time.')
FLAGS=flags.FLAGS
```
### Variables
```
model_architecture='convlstm'
start_checkpoint=None
logging_interval=10
eval_step_interval=1000
save_step_interval=100000
silence_percentage=10.0
unknown_percentage=10.0
background_frequency=0.8
background_volume=0.1
train_steps='3500' #Declare the training steps for which the learning rates will be used
learning_rate='0.0001'
batch_size=256
```
### Model to be optimized
```
def resCONVLSTM(inputs, model_settings, is_training, name='',conv_lstm_filter_size=4):
"""Creates a Residual ConvLSTM as in https://arxiv.org/abs/1610.03022.
1-D Conv on feature, unidirectional rnn
"""
with(tf.variable_scope('resCONVLSTM_%s' % name)):
batch_size = tf.shape(inputs)[0]
input_frequency_size = model_settings['dct_coefficient_count']
input_time_size = model_settings['spectrogram_length']
input_shape = [input_frequency_size, 1]
conv1 = tf.contrib.rnn.ConvLSTMCell(1, input_shape, 1, [conv_lstm_filter_size], name='conv1')
conv2 = tf.contrib.rnn.ConvLSTMCell(1, input_shape, 1, [conv_lstm_filter_size], name='conv2')
# First ConvLSTM
initial_conv1 = conv1.zero_state(batch_size, dtype=tf.float32)
initial_conv2 = conv2.zero_state(batch_size, dtype=tf.float32)
conv1_o, _ = tf.nn.dynamic_rnn(conv1, inputs, initial_state=initial_conv1)
bn1 = tf.layers.batch_normalization(conv1_o, axis=2, training=is_training)
bn1_relu = tf.nn.relu(bn1)
conv2_o, _ = tf.nn.dynamic_rnn(conv2, bn1_relu, initial_state=initial_conv2)
bn2 = tf.layers.batch_normalization(conv2_o, axis=2, training=is_training)
residual = tf.add(bn2, inputs)
output_relu = tf.nn.relu(residual)
return output_relu
def create_multilayer_convlstm_model(fingerprint_input, model_settings, is_training,conv_lstm_filter_size=4,lstm_size=256,dense_size=256):
"""
Creates a Multilayer ConvLSTM Model Followed by a linear layer and softmax activation function
"""
if is_training:
dropout_prob = tf.placeholder(tf.float32, name='dropout_prob')
batch_size = tf.shape(fingerprint_input)[0]
input_frequency_size = model_settings['dct_coefficient_count']
input_time_size = model_settings['spectrogram_length']
fingerprint_4d = tf.reshape(fingerprint_input,
[-1, input_time_size, input_frequency_size, 1])
# Layer1 resCONVLSTMs
resCONVLSTM1 = resCONVLSTM(fingerprint_4d, model_settings, is_training, '1',conv_lstm_filter_size)
resCONVLSTM2 = resCONVLSTM(resCONVLSTM1, model_settings, is_training, '2',conv_lstm_filter_size)
resCONVLSTM3 = resCONVLSTM(resCONVLSTM2, model_settings, is_training, '3',conv_lstm_filter_size)
resCONVLSTM4= resCONVLSTM(resCONVLSTM3,model_settings,is_training,'4',conv_lstm_filter_size)
resCONVLSTM4=tf.reshape(resCONVLSTM4,[-1, input_time_size, input_frequency_size])
with tf.variable_scope('lstm1'):
lstm_cell1=tf.contrib.rnn.LSTMCell(num_units=lstm_size,num_proj=input_frequency_size)
initial_lstm1=lstm_cell1.zero_state(batch_size,dtype=tf.float32)
lstm1_o,_=tf.nn.dynamic_rnn(lstm_cell1,resCONVLSTM4,initial_state=initial_lstm1)
lstm1_o=tf.reshape(lstm1_o,[-1, input_time_size, input_frequency_size,1 ])
nin1_o=tf.layers.conv2d(lstm1_o,1,[1,1],name='nin1')
bn1=tf.layers.batch_normalization(nin1_o, axis=2, training=is_training)
bn1=tf.reshape(bn1,[-1,input_time_size,input_frequency_size])
with tf.variable_scope('lstm2'):
lstm_cell2=tf.contrib.rnn.LSTMCell(num_units=lstm_size,num_proj=input_frequency_size)
initial_lstm2=lstm_cell2.zero_state(batch_size,dtype=tf.float32)
lstm2_o, _ = tf.nn.dynamic_rnn(lstm_cell2, bn1, initial_state=initial_lstm2)
lstm2_o = tf.reshape(lstm2_o, [-1, input_time_size, input_frequency_size, 1])
nin2_o = tf.layers.conv2d(lstm2_o, 1, [1, 1], name='nin2')
bn2 = tf.layers.batch_normalization(nin2_o, axis=2, training=is_training)
bn2=tf.reshape(bn2,[-1,input_time_size,input_frequency_size])
# LSTM Layer Final
with tf.variable_scope('lstm3'):
lstm_cell3=tf.contrib.rnn.LSTMCell(num_units=lstm_size,num_proj=input_frequency_size)
initial_lstm3=lstm_cell3.zero_state(batch_size,dtype=tf.float32)
lstm3_o, _ = tf.nn.dynamic_rnn(lstm_cell3, bn2, initial_state=initial_lstm3)
lstm3_o = tf.reshape(lstm3_o, [-1, input_time_size, input_frequency_size, 1])
# Final FC for classification
reshaped_layer = tf.reshape(lstm3_o,
[-1, input_time_size * input_frequency_size])
# Dropout
if is_training:
reshaped_layer = tf.nn.dropout(reshaped_layer, keep_prob=dropout_prob)
prefinal_dense=tf.nn.relu(tf.layers.dense(reshaped_layer,dense_size))
if is_training:
prefinal_dense=tf.nn.dropout(prefinal_dense,keep_prob=dropout_prob)
# Final Layer
label_count = model_settings['label_count']
final_fc_weights = tf.Variable(
tf.truncated_normal(
[dense_size, label_count], stddev=0.01))
final_fc_bias = tf.Variable(tf.zeros([label_count]))
final_fc = tf.matmul(prefinal_dense, final_fc_weights) + final_fc_bias
if is_training:
return final_fc, dropout_prob
else:
return final_fc
```
### Objective Function
```
def objective(args):
conv_lstm_filter_size=int(args['conv_lstm_filter_size'])
lstm_size=int(args['lstm_size'])
dense_size=int(args['dense_size'])
dropout=args['dropout']
print('Eval Start')
print(conv_lstm_filter_size,lstm_size,dense_size,dropout)
tf.reset_default_graph()
train_dir=os.path.join(FLAGS.data_dir,'train','audio')
model_settings = models.prepare_model_settings(
len(input_data.prepare_words_list(FLAGS.wanted_words.split(','))),
FLAGS.sample_rate, FLAGS.clip_duration_ms, FLAGS.window_size_ms,
FLAGS.window_stride_ms, FLAGS.dct_coefficient_count)
audio_processor = input_data.AudioProcessor(
train_dir, silence_percentage, unknown_percentage,
FLAGS.wanted_words.split(','), FLAGS.validation_percentage,
FLAGS.testing_percentage, model_settings)
def get_train_data(args):
sess=args
time_shift_samples = int((FLAGS.time_shift_ms * FLAGS.sample_rate) / 1000)
train_fingerprints, train_ground_truth = audio_processor.get_data(
batch_size, 0, model_settings,background_frequency,
background_volume, time_shift_samples, 'training', sess)
return train_fingerprints,train_ground_truth
def get_val_data(args):
'''
Input: (sess,offset)
'''
sess,i=args
validation_fingerprints, validation_ground_truth = (
audio_processor.get_data(batch_size, i, model_settings, 0.0,
0.0, 0, 'validation', sess))
return validation_fingerprints,validation_ground_truth
with tf.Session() as sess:
# Placeholders
fingerprint_size = model_settings['fingerprint_size']
label_count = model_settings['label_count']
fingerprint_input = tf.placeholder(
tf.float32, [None, fingerprint_size], name='fingerprint_input')
ground_truth_input = tf.placeholder(
tf.float32, [None, label_count], name='groundtruth_input')
set_size = audio_processor.set_size('validation')
label_count = model_settings['label_count']
# Create Model
logits, dropout_prob = create_multilayer_convlstm_model(
fingerprint_input,
model_settings,
True,
conv_lstm_filter_size,
lstm_size,
dense_size)
#Start Training
extra_args=(dropout_prob,label_count,batch_size,set_size)
val_acc=trainer.train(sess,logits,fingerprint_input,ground_truth_input,get_train_data,
get_val_data,train_steps,learning_rate,eval_step_interval, logging_interval=logging_interval,
start_checkpoint=None,checkpoint_interval=None,
model_name=model_architecture,train_dir=None,
summaries_dir=None,dropout=dropout,args=extra_args)
return 1-val_acc
```
### Optimization
```
space = {
'conv_lstm_filter_size': hyperopt.hp.uniform('conv_lstm_filter_size', 4,20),
'lstm_size': hyperopt.hp.uniform('lstm_size', 128,756),
'dense_size': hyperopt.hp.uniform('dense_size', 128,756),
'dropout':hyperopt.hp.uniform('dropout',0.3,1)
}
best_model=hyperopt.fmin(objective, space, algo=hyperopt.tpe.suggest, max_evals=20)
print('The best selected Hyperparameters')
print(hyperopt.space_eval(space, best_model))
```
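`hyperopt.fmin` with TPE proposes points adaptively, but its contract — sample hyperparameters from the space, evaluate the objective, return the best point found — is the same as plain random search. A stdlib-only sketch of that contract (illustrative only, not hyperopt's algorithm; note also that `hp.uniform` yields floats, so integer-valued settings such as filter sizes are presumably cast inside the objective):

```
import random

def random_search(objective, space, max_evals=20, seed=0):
    """Plain random search with the same contract as hyperopt.fmin:
    sample each hyperparameter uniformly, evaluate, keep the best point."""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(max_evals):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        loss = objective(params)
        if loss < best_loss:
            best, best_loss = params, loss
    return best, best_loss

space = {"lstm_size": (128, 756), "dropout": (0.3, 1.0)}
best, loss = random_search(lambda p: (p["dropout"] - 0.5) ** 2, space)
```

TPE improves on this by modelling which regions of the space have produced low losses and sampling there more often.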
# Bayesian Normal Density
This notebook illustrates how to use a Bayesian Normal density model with the [beer framework](https://github.com/beer-asr/beer). The Normal distribution is a fairly basic model, but it is used extensively in other models as a basic building block.
```
# Add "beer" to the PYTHONPATH
import sys
sys.path.insert(0, '../')
import beer
import numpy as np
import torch
# For plotting.
from bokeh.io import show, output_notebook
from bokeh.plotting import figure, gridplot
output_notebook()
# Convenience functions for plotting.
import plotting
%load_ext autoreload
%autoreload 2
```
## Data
Generate some normally distributed data:
```
mean = np.array([-1.5, 4])
cov = np.array([
[2, 1],
[1, .75]
])
data = np.random.multivariate_normal(mean, cov, size=100)
fig = figure(
title='Data',
width=400,
height=400,
x_range=(mean[0] - 5, mean[0] + 5),
y_range=(mean[1] - 5, mean[1] + 5)
)
fig.circle(data[:, 0], data[:, 1])
plotting.plot_normal(fig, mean, cov)
show(fig)
```
## Model Creation
We create two types of Normal distribution: one with a diagonal covariance matrix and one with a full covariance matrix.
```
normal_diag = beer.NormalDiagonalCovariance.create(torch.ones(2), torch.eye(2), prior_count=1e-3)
normal_full = beer.NormalFullCovariance.create(torch.zeros(2), torch.eye(2), prior_count=1e-3)
```
## Variational Bayes Training
```
# Training.
beer.train_loglinear_model(normal_diag, torch.from_numpy(data).float())
beer.train_loglinear_model(normal_full, torch.from_numpy(data).float())
fig = figure(
title='Initial model',
width=400,
height=400,
x_range=(mean[0] - 5, mean[0] + 5),
y_range=(mean[1] - 5, mean[1] + 5)
)
fig.circle(data[:, 0], data[:, 1])
plotting.plot_normal(fig, normal_diag.mean.numpy(), normal_diag.cov.numpy(), alpha=.5, color='red')
plotting.plot_normal(fig, normal_full.mean.numpy(), normal_full.cov.numpy(), alpha=.5, color='green')
show(fig)
n1, n2 = normal_diag.split()
fig = figure(
title='Initial model',
width=400,
height=400,
x_range=(mean[0] - 5, mean[0] + 5),
y_range=(mean[1] - 5, mean[1] + 5)
)
fig.circle(data[:, 0], data[:, 1])
plotting.plot_normal(fig, normal_diag.mean.numpy(), normal_diag.cov.numpy(), alpha=.5, color='blue')
plotting.plot_normal(fig, n1.mean.numpy(), n1.cov.numpy(), alpha=.5, color='red')
plotting.plot_normal(fig, n2.mean.numpy(), n2.cov.numpy(), alpha=.5, color='green')
show(fig)
n1, n2 = normal_full.split()
fig = figure(
title='Initial model',
width=400,
height=400,
x_range=(mean[0] - 5, mean[0] + 5),
y_range=(mean[1] - 5, mean[1] + 5)
)
fig.circle(data[:, 0], data[:, 1])
plotting.plot_normal(fig, normal_full.mean.numpy(), normal_full.cov.numpy(), alpha=.5, color='blue')
plotting.plot_normal(fig, n1.mean.numpy(), n1.cov.numpy(), alpha=.5, color='red')
plotting.plot_normal(fig, n2.mean.numpy(), n2.cov.numpy(), alpha=.5, color='green')
show(fig)
```
# Model comparison
We generate data for various correlation parameters:
$$
X_{\lambda} \sim \mathcal{N}(
\begin{pmatrix}
0 \\
0
\end{pmatrix},
\begin{pmatrix}
1 & \lambda \\
\lambda & 1
\end{pmatrix})
$$
and we compare the model evidence for both the Normal distribution with diagonal covariance matrix and with full covariance matrix.
$$
\ln B_{\lambda} = \ln \frac{p(X_\lambda | \mathcal{M}_{\text{full}})}{p(X_\lambda | \mathcal{M}_{\text{diag}})} =
\ln \frac{\int_{\theta} p(X_\lambda | \theta, \mathcal{M}_{\text{full}}) p(\theta) \, d\theta}{\int_{\theta} p(X_\lambda | \theta, \mathcal{M}_{\text{diag}}) p(\theta) \, d\theta} = \Big( A_{\text{full}}\big(\xi + \sum_{n=1}^N T(x_n)\big) - A_{\text{full}}(\xi) \Big) - \Big( A_{\text{diag}}\big(\xi + \sum_{n=1}^N T(x_n)\big) - A_{\text{diag}}(\xi) \Big)
$$
```
from scipy.special import logsumexp
import copy
lambdas = np.linspace(-.99, .99, 100)
lBs = []
# For each value of lambda.
for l in lambdas:
# Generate the data.
cov = np.array([
[1, l],
[l, 1]
])
X = np.random.multivariate_normal(np.zeros(2), cov, size=1000)
X = torch.from_numpy(X).float()
# Fit both models
normal_diag = beer.NormalDiagonalCovariance.create(torch.zeros(2), torch.eye(2), prior_count=1e-3)
beer.train_loglinear_model(normal_diag, X)
normal = beer.NormalFullCovariance.create(torch.zeros(2), torch.eye(2), prior_count=1e-3)
beer.train_loglinear_model(normal, X)
# Compute the log Bayes factor.
llh_M1 = normal.posterior.log_norm - normal.prior.log_norm
llh_M2 = normal_diag.posterior.log_norm - normal_diag.prior.log_norm
lBs.append((llh_M1 - llh_M2))
lBs = np.array(lBs)
fig1 = figure(
title='Model Comparison',
x_axis_label='λ',
y_axis_label='log Bayes factor',
width=400,
height=400
)
fig1.line(lambdas, lBs)
show(fig1)
```
# Variant Calling Workflow:

## Setting up
### Download the reference genome for E. coli REL606:
```
!mkdir -p data/ref_genome
!curl -L -o data/ref_genome/ecoli_rel606.fasta.gz ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/017/985/GCA_000017985.1_ASM1798v1/GCA_000017985.1_ASM1798v1_genomic.fna.gz
!gunzip data/ref_genome/ecoli_rel606.fasta.gz
# check sequence name: CP000819.1 Escherichia coli B str. REL606, complete genome
!head data/ref_genome/ecoli_rel606.fasta
```
### Download a set of trimmed FASTQ files; these are small subsets of the real trimmed data:
```
!curl -L -o sub.tar.gz https://ndownloader.figshare.com/files/14418248
!tar xvf sub.tar.gz
!mv sub/ data/trimmed_fastq_small
# create directories for the results that will be generated as part of this workflow
!mkdir -p results/sam results/bam results/bcf results/vcf
```
## Index the reference genome
The first step is to index the reference genome for use by BWA. Indexing allows the aligner to quickly find potential alignment sites for query sequences in a genome, which saves time during alignment. Indexing the reference only has to be run once. The only reason you would want to create a new index is if you are working with a different reference genome or you are using a different tool for alignment.
```
!bwa index data/ref_genome/ecoli_rel606.fasta
```
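BWA builds an FM-index based on the Burrows–Wheeler transform; the general idea of trading one-time preprocessing for fast lookups can be sketched with a much simpler k-mer index (illustrative only — nothing like BWA's actual data structure):

```
from collections import defaultdict

def build_kmer_index(reference, k=4):
    """Map every k-mer in the reference to the positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def candidate_positions(index, read, k=4):
    """Seed lookup: positions where the read's first k-mer matches the reference."""
    return index.get(read[:k], [])

ref = "ACGTACGGACGT"
idx = build_kmer_index(ref)
print(candidate_positions(idx, "ACGG"))  # -> [4]
```

Real aligners extend such candidate seeds into full alignments; the index only narrows down where to look.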
## Align reads to reference genome
The alignment process consists of choosing an appropriate reference genome to map our reads against and then deciding on an aligner. We will use the BWA-MEM algorithm, the latest of BWA's algorithms, which is generally recommended for high-quality queries as it is faster and more accurate.
```
!bwa mem data/ref_genome/ecoli_rel606.fasta data/trimmed_fastq_small/SRR2584866_1.trim.sub.fastq data/trimmed_fastq_small/SRR2584866_2.trim.sub.fastq > results/sam/SRR2584866.aligned.sam
# We will convert the SAM file to BAM format using the samtools program with the view command and tell this command that the input is in SAM format (-S) and to output BAM format (-b):
!samtools view -S -b results/sam/SRR2584866.aligned.sam > results/bam/SRR2584866.aligned.bam
```
## Sort BAM file by coordinates
```
!samtools sort -o results/bam/SRR2584866.aligned.sorted.bam results/bam/SRR2584866.aligned.bam
# statistics about the sorted bam file
!samtools flagstat results/bam/SRR2584866.aligned.sorted.bam
```
# Variant Calling
### Step 1: Calculate the read coverage of positions in the genome
```
!bcftools mpileup -O b -o results/bcf/SRR2584866_raw.bcf \
-f data/ref_genome/ecoli_rel606.fasta results/bam/SRR2584866.aligned.sorted.bam
```
### Step 2: Detect the single nucleotide polymorphisms (SNPs)
```
!bcftools call --ploidy 1 -m -v -o results/bcf/SRR2584866_variants.vcf results/bcf/SRR2584866_raw.bcf
```
### Step 3: Filter and report the SNP variants in variant calling format (VCF)
```
!vcfutils.pl varFilter results/bcf/SRR2584866_variants.vcf > results/vcf/SRR2584866_final_variants.vcf
```
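The three steps above — pile up reads per position, call a variant where a non-reference base dominates, filter weak calls — can be caricatured in a few lines of Python. This toy SNP caller is illustrative only and nothing like bcftools' actual statistical model:

```
def call_snps(reference, aligned_reads, min_depth=3, min_alt_frac=0.75):
    """Toy pileup-and-call: aligned_reads is a list of (start, sequence) pairs.
    Reports positions where a non-reference base dominates the pileup."""
    variants = []
    for pos, ref_base in enumerate(reference):
        bases = [seq[pos - start] for start, seq in aligned_reads
                 if start <= pos < start + len(seq)]
        depth = len(bases)
        if depth < min_depth:
            continue  # filter: not enough coverage to call
        alts = [b for b in bases if b != ref_base]
        if alts and len(alts) / depth >= min_alt_frac:
            alt = max(set(alts), key=alts.count)  # most frequent alternative
            variants.append((pos, ref_base, alt, depth))
    return variants

ref = "ACGTACGT"
reads = [(0, "ACGAACGT"), (0, "ACGAACGT"), (2, "GAACGT")]
print(call_snps(ref, reads))  # position 3: T -> A, supported by all three reads
```

bcftools instead computes genotype likelihoods from base qualities and calls variants probabilistically, but the pileup-then-threshold structure is the same.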
Peeking into the VCF format:
```
!less -S results/vcf/SRR2584866_final_variants.vcf
# assess how many variants are in the vcf file
!grep -v "#" results/vcf/SRR2584866_final_variants.vcf | wc -l
```
## Assess the alignment
### Visualization using Tview
```
!samtools index results/bam/SRR2584866.aligned.sorted.bam
# tview
!samtools tview results/bam/SRR2584866.aligned.sorted.bam data/ref_genome/ecoli_rel606.fasta
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# PyTorch Image Classification Multi-Node Distributed Data Parallel Training on CPU using Vertex Training with Custom Container
<table align="left">
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/community-content/pytorch_image_classification_distributed_data_parallel_training_with_vertex_sdk/multi_node_ddp_gloo_vertex_training_with_custom_container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
## Setup
```
PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"
! gsutil ls -al $BUCKET_NAME
content_name = "pt-img-cls-multi-node-ddp-cust-cont"
```
## Local Training
```
! ls trainer
! cat trainer/requirements.txt
! pip install -r trainer/requirements.txt
! cat trainer/task.py
%run trainer/task.py --epochs 5 --no-cuda --local-mode
! ls ./tmp
! rm -rf ./tmp
```
## Vertex Training using Vertex SDK and Custom Container
### Build Custom Container
```
hostname = "gcr.io"
image_name = content_name
tag = "latest"
custom_container_image_uri = f"{hostname}/{PROJECT_ID}/{image_name}:{tag}"
! cd trainer && docker build -t $custom_container_image_uri -f Dockerfile .
! docker run --rm $custom_container_image_uri --epochs 5 --no-cuda --local-mode
! docker push $custom_container_image_uri
! gcloud container images list --repository $hostname/$PROJECT_ID
```
### Initialize Vertex SDK
```
! pip install -r requirements.txt
from google.cloud import aiplatform
aiplatform.init(
project=PROJECT_ID,
staging_bucket=BUCKET_NAME,
location=REGION,
)
```
### Create a Vertex Tensorboard Instance
```
content_name = content_name + "-cpu"
tensorboard = aiplatform.Tensorboard.create(
display_name=content_name,
)
```
#### Option: Use a Previously Created Vertex Tensorboard Instance
```
tensorboard_name = "Your Tensorboard Resource Name or Tensorboard ID"
tensorboard = aiplatform.Tensorboard(tensorboard_name=tensorboard_name)
```
### Run a Vertex SDK CustomContainerTrainingJob
```
display_name = content_name
gcs_output_uri_prefix = f"{BUCKET_NAME}/{display_name}"
replica_count = 4
machine_type = "n1-standard-4"
args = [
"--backend",
"gloo",
"--no-cuda",
"--batch-size",
"128",
"--epochs",
"25",
]
custom_container_training_job = aiplatform.CustomContainerTrainingJob(
display_name=display_name,
container_uri=custom_container_image_uri,
)
custom_container_training_job.run(
args=args,
base_output_dir=gcs_output_uri_prefix,
replica_count=replica_count,
machine_type=machine_type,
tensorboard=tensorboard.resource_name,
service_account=SERVICE_ACCOUNT,
)
print(f"Custom Training Job Name: {custom_container_training_job.resource_name}")
print(f"GCS Output URI Prefix: {gcs_output_uri_prefix}")
```
### Training Output Artifact
```
! gsutil ls $gcs_output_uri_prefix
```
## Clean Up Artifact
```
! gsutil rm -rf $gcs_output_uri_prefix
```
# Cox-PH and DeepSurv
In this notebook we will train the [Cox-PH method](http://jmlr.org/papers/volume20/18-424/18-424.pdf), also known as [DeepSurv](https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0482-1).
We will use the METABRIC data set as an example.
A more detailed introduction to the `pycox` package can be found in [this notebook](https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/01_introduction.ipynb) about the `LogisticHazard` method.
The main benefit Cox-PH (and the other Cox methods) has over Logistic-Hazard is that it is a continuous-time method, meaning we do not need to discretize the time scale.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn_pandas import DataFrameMapper
import torch
import torchtuples as tt
from pycox.datasets import metabric
from pycox.models import CoxPH
from pycox.evaluation import EvalSurv
## Uncomment to install `sklearn-pandas`
# ! pip install sklearn-pandas
np.random.seed(1234)
_ = torch.manual_seed(123)
```
## Dataset
We load the METABRIC data set and split in train, test and validation.
```
df_train = metabric.read_df()
df_test = df_train.sample(frac=0.2)
df_train = df_train.drop(df_test.index)
df_val = df_train.sample(frac=0.2)
df_train = df_train.drop(df_val.index)
df_train.head()
```
## Feature transforms
We have 9 covariates, in addition to the durations and event indicators.
We will standardize the 5 numerical covariates and leave the binary variables as they are. All variables need to be of type `'float32'`, as this is required by PyTorch.
```
cols_standardize = ['x0', 'x1', 'x2', 'x3', 'x8']
cols_leave = ['x4', 'x5', 'x6', 'x7']
standardize = [([col], StandardScaler()) for col in cols_standardize]
leave = [(col, None) for col in cols_leave]
x_mapper = DataFrameMapper(standardize + leave)
x_train = x_mapper.fit_transform(df_train).astype('float32')
x_val = x_mapper.transform(df_val).astype('float32')
x_test = x_mapper.transform(df_test).astype('float32')
```
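Standardization itself is just a per-column z-score — `StandardScaler` computes (x − mean) / std using the population standard deviation:

```
from statistics import mean, pstdev

def standardize(column):
    """Z-score a column the way StandardScaler does: (x - mean) / std,
    with the population (ddof=0) standard deviation."""
    mu, sigma = mean(column), pstdev(column)
    return [(x - mu) / sigma for x in column]

print(standardize([1.0, 2.0, 3.0]))  # symmetric around 0
```

The `DataFrameMapper` just applies this column-wise and stitches the results back into one matrix.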
We need no label transforms
```
get_target = lambda df: (df['duration'].values, df['event'].values)
y_train = get_target(df_train)
y_val = get_target(df_val)
durations_test, events_test = get_target(df_test)
val = x_val, y_val
```
## Neural net
We create a simple MLP with two hidden layers, ReLU activations, batch norm and dropout.
Here, we just use the `torchtuples.practical.MLPVanilla` net to do this.
Note that we set `out_features` to 1, and that we set `output_bias=False`.
```
in_features = x_train.shape[1]
num_nodes = [32, 32]
out_features = 1
batch_norm = True
dropout = 0.1
output_bias = False
net = tt.practical.MLPVanilla(in_features, num_nodes, out_features, batch_norm,
dropout, output_bias=output_bias)
```
## Training the model
To train the model we need to define an optimizer. You can choose any `torch.optim` optimizer, but here we instead use one from `tt.optim` as it has some added functionality.
We use the `Adam` optimizer, but instead of choosing a learning rate, we will use the scheme proposed by [Smith 2017](https://arxiv.org/pdf/1506.01186.pdf) to find a suitable learning rate with `model.lr_finder`. See [this post](https://towardsdatascience.com/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6) for an explanation.
```
model = CoxPH(net, tt.optim.Adam)
batch_size = 256
lrfinder = model.lr_finder(x_train, y_train, batch_size, tolerance=10)
_ = lrfinder.plot()
lrfinder.get_best_lr()
```
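The LR range test behind `lr_finder` increases the learning rate geometrically, records the loss obtained at each rate, and a good learning rate sits just before the loss starts growing again. A toy version on the quadratic loss f(w) = w²/2 (illustrative only, not torchtuples' implementation):

```
def lr_range_test(grad, w0, lr_min=1e-5, lr_max=10.0, steps=50):
    """Toy LR range test: take one SGD step per geometrically increasing
    learning rate and record the resulting loss on f(w) = 0.5 * w**2."""
    losses = []
    ratio = (lr_max / lr_min) ** (1.0 / (steps - 1))
    for i in range(steps):
        lr = lr_min * ratio ** i
        w = w0 - lr * grad(w0)          # one step from the same start point
        losses.append((lr, 0.5 * w * w))
    return losses

losses = lr_range_test(grad=lambda w: w, w0=1.0)
best_lr, best_loss = min(losses, key=lambda t: t[1])
print(best_lr)  # close to 1, the ideal step size for this loss
```

For this quadratic the loss curve has a clean minimum; on a real network the curve is noisy, which is why a slightly smaller rate than the apparent best (here, the manual 0.01 below) is usually chosen.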
Often, this learning rate is a little high, so we instead set it manually to 0.01
```
model.optimizer.set_lr(0.01)
```
We include the `EarlyStopping` callback to stop training when the validation loss stops improving. After training, this callback will also load the best performing model in terms of validation loss.
```
epochs = 512
callbacks = [tt.callbacks.EarlyStopping()]
verbose = True
%%time
log = model.fit(x_train, y_train, batch_size, epochs, callbacks, verbose,
val_data=val, val_batch_size=batch_size)
_ = log.plot()
```
We can get the partial log-likelihood
```
model.partial_log_likelihood(*val).mean()
```
## Prediction
For evaluation we first need to obtain survival estimates for the test set.
This can be done with `model.predict_surv` which returns an array of survival estimates, or with `model.predict_surv_df` which returns the survival estimates as a dataframe.
However, as `CoxPH` is semi-parametric, we first need to get the non-parametric baseline hazard estimates with `compute_baseline_hazards`.
Note that for large datasets the `sample` argument can be used to estimate the baseline hazard on a subset.
```
_ = model.compute_baseline_hazards()
surv = model.predict_surv_df(x_test)
surv.iloc[:, :5].plot()
plt.ylabel('S(t | x)')
_ = plt.xlabel('Time')
```
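The non-parametric baseline hazard referred to above is a Breslow-type estimator. A minimal sketch of Breslow's cumulative baseline hazard and the resulting survival function (illustrative only, not pycox's actual implementation):

```
import math

def breslow_baseline_hazard(durations, events, scores):
    """Breslow estimator of the cumulative baseline hazard H0(t): at each
    observed event time, add 1 / (sum of exp(score) over the risk set)."""
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    H0, cum = [], 0.0
    for i in order:
        if events[i]:
            risk = sum(math.exp(scores[j]) for j in range(len(durations))
                       if durations[j] >= durations[i])
            cum += 1.0 / risk
        H0.append((durations[i], cum))
    return H0

def survival(H0_at_t, score):
    """S(t | x) = exp(-H0(t) * exp(g(x))) for a proportional-hazards model."""
    return math.exp(-H0_at_t * math.exp(score))

# three subjects, middle one censored, all with risk score 0
H0 = breslow_baseline_hazard([1.0, 2.0, 3.0], [1, 0, 1], [0.0, 0.0, 0.0])
print(H0)
```

The censored subject at t=2 contributes to the risk set at t=1 but adds no event of its own, which is exactly how censoring enters the estimator.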
## Evaluation
We can use the `EvalSurv` class for evaluating the concordance, Brier score and binomial log-likelihood. Setting `censor_surv='km'` means that we estimate the censoring distribution by Kaplan-Meier on the test set.
```
ev = EvalSurv(surv, durations_test, events_test, censor_surv='km')
ev.concordance_td()
time_grid = np.linspace(durations_test.min(), durations_test.max(), 100)
_ = ev.brier_score(time_grid).plot()
ev.integrated_brier_score(time_grid)
ev.integrated_nbll(time_grid)
```
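`concordance_td` computes a time-dependent concordance index. The classical Harrell's C-index on which such metrics build can be sketched in plain Python (illustrative only, not pycox's implementation):

```
def harrell_c_index(durations, events, risk_scores):
    """Harrell's C: among comparable pairs (the earlier time is an observed
    event), count how often the higher-risk subject fails first."""
    concordant, comparable = 0.0, 0
    n = len(durations)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is usable if i has an event before j's observed time
            if events[i] and durations[i] < durations[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / comparable

# risks perfectly anti-ordered with survival time -> C = 1.0
print(harrell_c_index([1, 2, 3], [1, 1, 1], [3.0, 2.0, 1.0]))
```

A value of 0.5 corresponds to random ranking; censored subjects only count as the later member of a pair, since their true event time is unknown.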
### Tutorial in hamiltorch for log probabilities
* For the corresponding blog post please see: https://adamcobb.github.io/journal/hamiltorch.html
* Bayesian neural networks are left to a different notebook
```
import torch
import hamiltorch
import matplotlib.pyplot as plt
%matplotlib inline
hamiltorch.set_random_seed(123)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(hamiltorch.__version__)
```
## Sampling a multivariate Gaussian
In `hamiltorch`, we have designed the samplers to receive a function handle `log_prob_func`, which the sampler will use to evaluate the log probability of each sample. A `log_prob_func` must take a 1-d vector of length equal to the number of parameters that are being sampled. For the example of our multivariate Gaussian distribution, we can define our `log_prob_func` as follows:
```
def log_prob(omega):
mean = torch.tensor([0.,0.,0.])
stddev = torch.tensor([.5,1.,2.])
return torch.distributions.MultivariateNormal(mean, torch.diag(stddev**2)).log_prob(omega).sum()
N = 400
step_size = .3
L = 5
```
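Under the hood, HMC alternates momentum resampling, leapfrog integration of the Hamiltonian dynamics, and a Metropolis accept/reject step. A minimal one-dimensional sketch (plain Python, illustrative only — not hamiltorch's implementation):

```
import math, random

def hmc_sample(log_prob, grad_log_prob, q0, num_samples, step_size, L):
    """Minimal HMC: leapfrog-integrate Hamiltonian dynamics for L steps,
    then accept/reject the proposal with a Metropolis test."""
    samples, q = [], q0
    for _ in range(num_samples):
        p = random.gauss(0.0, 1.0)                        # resample momentum
        q_new, p_new = q, p
        p_new += 0.5 * step_size * grad_log_prob(q_new)   # initial half step
        for _ in range(L):
            q_new += step_size * p_new                    # full position step
            p_new += step_size * grad_log_prob(q_new)     # full momentum step
        p_new -= 0.5 * step_size * grad_log_prob(q_new)   # undo extra half step
        # Metropolis correction for the integration error
        log_accept = (log_prob(q_new) - 0.5 * p_new ** 2) - (log_prob(q) - 0.5 * p ** 2)
        if math.log(random.random()) < log_accept:
            q = q_new
        samples.append(q)
    return samples

random.seed(0)
# standard normal target: log p(q) = -q^2 / 2 (up to a constant)
draws = hmc_sample(lambda q: -0.5 * q * q, lambda q: -q, 0.0, 2000, 0.3, 5)
```

`hamiltorch.sample` does the same over a parameter vector, computing `grad_log_prob` automatically with autograd.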
### Sample using standard HMC
* Initialise the parameters e.g. `params_init = torch.zeros(3)` and pass them into the `hamiltorch.sample()` function as `params_init=params_init`.
* Set the number of samples `num_samples=N` corresponding to the number of momentum resampling steps/the number of trajectories to sample.
* Set the step size and trajectory length via `step_size=step_size, num_steps_per_sample=L`.
```
# HMC
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3)
params_hmc = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init, num_samples=N,
step_size=step_size, num_steps_per_sample=L)
```
### Sample using the No-U-Turn Sampler (NUTS)
* As in Hoffman and Gelman 2011.
* This is set using the additional parameter `sampler=hamiltorch.Sampler.HMC_NUTS`.
* The step size is adapted with the objective of a desired acceptance rate `desired_accept_rate=0.8`.
* The step size is fixed after the burn stage `burn=burn` and we define `N_nuts = burn + N`.
```
# HMC NUTS
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3) + 5
burn=500
N_nuts = burn + N
params_hmc_nuts = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init,
num_samples=N_nuts,step_size=step_size,num_steps_per_sample=L,
sampler=hamiltorch.Sampler.HMC_NUTS, burn=burn,
desired_accept_rate=0.8)
```
### Sample using implicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* As in Girolami and Calderhead 2011.
* Switch the sampler via setting `sampler=hamiltorch.Sampler.RMHMC` and the integrator via `integrator=hamiltorch.Integrator.IMPLICIT`.
* Limit the number of fixed point iterations in the generalised leapfrog via `fixed_point_max_iterations=1000` and set the convergence threshold for 'breaking out' of the while loop via `fixed_point_threshold=1e-05`.
```
# Implicit RMHMC
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3)
params_irmhmc = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init, num_samples=N,
step_size=step_size,num_steps_per_sample=L, sampler=hamiltorch.Sampler.RMHMC,
integrator=hamiltorch.Integrator.IMPLICIT, fixed_point_max_iterations=1000,
fixed_point_threshold=1e-05)
```
### Sample using explicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* As in Cobb et. al. 2019
* Switch the integrator to explicit via `integrator=hamiltorch.Integrator.EXPLICIT`. Note that the sampler is still set to RMHMC.
* Introduce and set the binding term via `explicit_binding_const=omega`. This can be subsequently optimised for the highest acceptance rate.
```
# Explicit RMHMC
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3)
omega = 100.
params_ermhmc = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init, num_samples=N,
step_size=step_size,num_steps_per_sample=L, sampler=hamiltorch.Sampler.RMHMC,
integrator=hamiltorch.Integrator.EXPLICIT, explicit_binding_const=omega)
```
### Convert samples to numpy arrays to plot using matplotlib
```
coords_hmc = torch.cat(params_hmc).reshape(len(params_hmc),-1).numpy()
coords_nuts = torch.cat(params_hmc_nuts).reshape(len(params_hmc_nuts),-1).numpy()
coords_i_rmhmc = torch.cat(params_irmhmc).reshape(len(params_irmhmc),-1).numpy()
coords_e_rmhmc = torch.cat(params_ermhmc).reshape(len(params_ermhmc),-1).numpy()
xlim = [-5,5]
ylim = [-5,5]
fs=16
mean = torch.tensor([0.,0.,0.])
stddev = torch.tensor([.5,1.,2.])
fig, axs = plt.subplots(3, 1, figsize=(15,15))
axs[0].scatter(coords_hmc[:,0], coords_hmc[:,1],s=5,alpha=0.3,label='HMC')
axs[0].scatter(coords_nuts[:,0], coords_nuts[:,1],s=5,alpha=0.3,label='NUTS')
axs[0].scatter(coords_i_rmhmc[:,0], coords_i_rmhmc[:,1],s=5,alpha=0.3,label='Implicit RMHMC')
axs[0].scatter(coords_e_rmhmc[:,0], coords_e_rmhmc[:,1],s=5,alpha=0.3,label='Explicit RMHMC')
axs[0].scatter(mean[0],mean[1],marker = '*',color='C3',s=100,label='True Mean')
axs[0].legend(fontsize=fs)
axs[0].grid()
axs[0].set_xlim(xlim)
axs[0].set_ylim(ylim)
axs[1].scatter(coords_hmc[:,0], coords_hmc[:,2],s=5,alpha=0.3,label='HMC')
axs[1].scatter(coords_nuts[:,0], coords_nuts[:,2],s=5,alpha=0.3,label='NUTS')
axs[1].scatter(coords_i_rmhmc[:,0], coords_i_rmhmc[:,2],s=5,alpha=0.3,label='Implicit RMHMC')
axs[1].scatter(coords_e_rmhmc[:,0], coords_e_rmhmc[:,2],s=5,alpha=0.3,label='Explicit RMHMC')
axs[1].scatter(mean[0],mean[2],marker = '*',color='C3',s=100,label='True Mean')
axs[1].legend(fontsize=fs)
axs[1].grid()
axs[1].set_xlim(xlim)
axs[1].set_ylim(ylim)
axs[2].scatter(coords_hmc[:,1], coords_hmc[:,2],s=5,alpha=0.3,label='HMC')
axs[2].scatter(coords_nuts[:,1], coords_nuts[:,2],s=5,alpha=0.3,label='NUTS')
axs[2].scatter(coords_i_rmhmc[:,1], coords_i_rmhmc[:,2],s=5,alpha=0.3,label='Implicit RMHMC')
axs[2].scatter(coords_e_rmhmc[:,1], coords_e_rmhmc[:,2],s=5,alpha=0.3,label='Explicit RMHMC')
axs[2].scatter(mean[1],mean[2],marker = '*',color='C3',s=100,label='True Mean')
axs[2].legend(fontsize=fs)
axs[2].grid()
axs[2].set_xlim(xlim)
axs[2].set_ylim(ylim)
plt.tight_layout()
# plt.savefig('../../Gaussian_plots.png',bbox_inches='tight')
plt.show()
```
### KL divergence:
* Calculate the KL divergence as a measure of how well we have approximated the target distribution (the Gaussian).
```
p = torch.distributions.MultivariateNormal(mean, stddev.diag()**2)
q_hmc = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_hmc.mean(0)),torch.diag(torch.FloatTensor(coords_hmc.var(0))))
q_nuts = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_nuts.mean(0)),torch.diag(torch.FloatTensor(coords_nuts.var(0))))
q_i_rmhmc = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_i_rmhmc.mean(0)),torch.diag(torch.FloatTensor(coords_i_rmhmc.var(0))))
q_e_rmhmc = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_e_rmhmc.mean(0)),torch.diag(torch.FloatTensor(coords_e_rmhmc.var(0))))
print('HMC kl: ',torch.distributions.kl.kl_divergence(p, q_hmc))
print('NUTS kl: ',torch.distributions.kl.kl_divergence(p, q_nuts))
print('Implicit RMHMC kl: ',torch.distributions.kl.kl_divergence(p, q_i_rmhmc))
print('Explicit RMHMC kl: ',torch.distributions.kl.kl_divergence(p, q_e_rmhmc))
```
# Sampling from a more complicated distribution: funnel distribution
* We now define the funnel distribution as in Neal 2003:
$$\prod_i\mathcal{N}(\mathbf{x}_i\vert 0, \exp\{-v\})\mathcal{N}(v\vert 0, 9). $$
* This is our new `log_prob_func`.
```
D = 10
def funnel_ll(w, dim=D):
v_dist = torch.distributions.Normal(0,3)
ll = v_dist.log_prob(w[0])
x_dist = torch.distributions.Normal(0,torch.exp(-w[0])**0.5)
ll += x_dist.log_prob(w[1:]).sum()
return ll
```
### Sample using standard HMC
* As we did above for the multivariate Gaussian.
```
# HMC
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.2
num_samples = 1000 # For results in plot num_samples = 10000
L = 25
params_hmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
step_size=step_size, num_steps_per_sample=L)
```
### Sample using the No-U-Turn Sampler (NUTS)
* Again, as we did above.
* Note that this log probability is badly defined in certain parts of the parameter space, so you will see invalid log probabilities printed out as the model samples (especially during the burn-in stage).
* Do not worry about print statements of `Invalid log_prob`!
```
# HMC NUTS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.01
num_samples = 1200 # For results in plot num_samples = 12000
L = 25
burn = 200 # For results in plot burn = 2000
params_hmc_nuts = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
step_size=step_size, num_steps_per_sample=L,desired_accept_rate=0.75,sampler=hamiltorch.Sampler.HMC_NUTS,burn=burn)
```
### Sample using implicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* We change the sampler and integrator flags as before.
* For the funnel distribution our metric tensor is no longer guaranteed to be positive semi-definite (PSD) if we use the Hessian as above. Therefore we introduce a new flag and set it as `metric=hamiltorch.Metric.SOFTABS`. This forces our metric to be PSD as in Betancourt 2013.
* As is common in practice, we must often add jitter along the diagonal of the metric tensor to ensure we can invert it (it also allows us to differentiate through it using `torch.symeig`). We do this via `jitter=jitter`.
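The SoftAbs metric of Betancourt 2013 replaces each eigenvalue λ of the Hessian with λ·coth(αλ), a smooth and strictly positive surrogate for |λ|; `softabs_const` plays the role of α. A sketch of the eigenvalue map (the helper name is ours, not hamiltorch's):

```
import math

def softabs(lam, alpha=1e6):
    """SoftAbs map: lambda * coth(alpha * lambda), a smooth approximation
    of |lambda| that stays strictly positive (-> 1/alpha as lambda -> 0)."""
    if lam == 0.0:
        return 1.0 / alpha
    return lam / math.tanh(alpha * lam)

# negative and positive eigenvalues both map to positive values
print(softabs(-2.0), softabs(0.0), softabs(2.0))
```

Rebuilding the metric as Q·diag(softabs(λ))·Qᵀ from the Hessian's eigendecomposition then yields a PSD matrix; large α makes the map closer to |λ| but numerically stiffer.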
```
# Implicit RMHMC with SOFTABS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.14
num_samples = 10 # For results in plot num_samples = 1000, but this takes a while! Setting to 100 is also reasonable.
L = 25
threshold = 1e-3
softabs_const=10**6
fixed_point_max_iterations=1000
jitter= 0.001
params_i_rmhmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
sampler=hamiltorch.Sampler.RMHMC, integrator=hamiltorch.Integrator.IMPLICIT,
metric=hamiltorch.Metric.SOFTABS, fixed_point_threshold=threshold, jitter=jitter,
num_steps_per_sample=L, step_size=step_size, softabs_const=softabs_const,
fixed_point_max_iterations=fixed_point_max_iterations)
```
### Sample using explicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* We use our faster integrator with the SOFTABS metric to get a similar result to the implicit integrator.
```
# Explicit RMHMC with SOFTABS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.14
num_samples = 100 # For results in plot num_samples = 1000
L = 25
omega=10
softabs_const=10**6
jitter=0.001
params_e_rmhmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
sampler=hamiltorch.Sampler.RMHMC, integrator=hamiltorch.Integrator.EXPLICIT,
metric=hamiltorch.Metric.SOFTABS, jitter=jitter,
num_steps_per_sample=L, step_size=step_size, explicit_binding_const=omega,
softabs_const=softabs_const)
```
### Convert to numpy arrays for plotting
```
coords_hmc = torch.cat(params_hmc).reshape(len(params_hmc),-1).numpy()
coords_hmc_nuts = torch.cat(params_hmc_nuts).reshape(len(params_hmc_nuts),-1).numpy()
coords_i_rmhmc = torch.cat(params_i_rmhmc).reshape(len(params_i_rmhmc),-1).numpy()
coords_e_rmhmc = torch.cat(params_e_rmhmc).reshape(len(params_e_rmhmc),-1).numpy()
# One that I made earlier!
params_i_rmhmc = torch.load('../../data/funnel/params_i_rmhmc_10D_funnel_1000.npy')
params_e_rmhmc = torch.load('../../data/funnel/params_e_rmhmc_10D_funnel_1000.npy')
params_hmc = torch.load('../../data/funnel/params_hmc_10D_funnel_10000.npy')
params_hmc_nuts = torch.load('../../data/funnel/params_hmc_nuts_10D_funnel_10000.npy')
coords_hmc = torch.cat(params_hmc).reshape(len(params_hmc),-1).numpy()
coords_hmc_nuts = torch.cat(params_hmc_nuts).reshape(len(params_hmc_nuts),-1).numpy()
coords_i_rmhmc = torch.cat(params_i_rmhmc).reshape(len(params_i_rmhmc),-1).numpy()
coords_e_rmhmc = torch.cat(params_e_rmhmc).reshape(len(params_e_rmhmc),-1).numpy()
xlim = [-4,4]
ylim = [0,7]#[-2,9]
text_x = -1.5
text_y = 8
font_size_text = 20
fs=17
vxx = torch.linspace(xlim[0],xlim[1],300)
p = torch.distributions.Normal(0,3)
v_pdf = torch.exp(p.log_prob(vxx))
fig, axs = plt.subplots(1, 4, figsize=(20,5), sharey=True)
axs[0].scatter(coords_hmc[:,1], coords_hmc[:,0],s=5,alpha=0.3,rasterized=True, color='C0', label='HMC')
l = axs[0].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[0].grid()
axs[0].set_xlim(xlim)
axs[0].set_ylim(ylim)
axs[0].tick_params(axis='both', labelsize=fs)
axs[0].set_xlabel(r'$x_1$',fontsize=font_size_text)
axs[0].set_ylabel(r'$v$',fontsize=font_size_text,rotation=0,labelpad=30)
axs[1].scatter(coords_hmc_nuts[:,1], coords_hmc_nuts[:,0],s=5,alpha=0.3,label='NUTS',rasterized=True,color='C5')
l = axs[1].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[1].grid()
axs[1].set_xlim(xlim)
axs[1].set_ylim(ylim)
axs[1].tick_params(axis='both', labelsize=fs)
axs[1].set_xlabel(r'$x_1$',fontsize=font_size_text)
axs[2].scatter(coords_i_rmhmc[:,1], coords_i_rmhmc[:,0],s=5,alpha=0.3,rasterized=True, color='C1',label='Implicit\nRMHMC')
l = axs[2].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[2].grid()
axs[2].set_xlim(xlim)
axs[2].set_ylim(ylim)
axs[2].tick_params(axis='both', labelsize=fs)
axs[2].set_xlabel(r'$x_1$',fontsize=font_size_text)
axs[3].scatter(coords_e_rmhmc[:,1], coords_e_rmhmc[:,0],s=5,alpha=0.3,rasterized=True, color='C2', label='Explicit\nRMHMC')
l = axs[3].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[3].grid()
axs[3].set_xlim(xlim)
axs[3].set_ylim(ylim)
axs[3].tick_params(axis='both', labelsize=fs)
axs[3].set_xlabel(r'$x_1$',fontsize=font_size_text)
plt.tight_layout()
# plt.savefig('../../data/funnel/funnel_hist_plots_scatter.pdf',bbox_inches='tight')
plt.show()
```
### Marginal distribution $p(v)$
We can also plot the marginal distributions of $v$ by representing them in histograms. We plot the known Gaussian distribution in each figure for comparison. The KL divergence is also included to measure how close the empirical distribution is to the true one.
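The KL divergence between two univariate Gaussians has a closed form, which the comparisons below rely on, and which is easy to check directly:

```
import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    """KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) ) in closed form:
    log(s2/s1) + (s1^2 + (mu1-mu2)^2) / (2 s2^2) - 1/2."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

print(kl_normal(0.0, 3.0, 0.0, 3.0))  # identical distributions -> 0.0
```

Here the true marginal is $\mathcal{N}(0, 9)$ and each sampler's empirical mean and standard deviation define the approximating Gaussian.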
```
p = torch.distributions.Normal(0,3)
q_hmc = torch.distributions.Normal(coords_hmc[:,0].mean(),coords_hmc[:,0].std())
q_hmc_nuts = torch.distributions.Normal(coords_hmc_nuts[:,0].mean(),coords_hmc_nuts[:,0].std())
q_i_rmhmc = torch.distributions.Normal(coords_i_rmhmc[:,0].mean(),coords_i_rmhmc[:,0].std())
q_e_rmhmc = torch.distributions.Normal(coords_e_rmhmc[:,0].mean(),coords_e_rmhmc[:,0].std())
kl_hmc = torch.distributions.kl.kl_divergence(p, q_hmc)
kl_hmc_nuts = torch.distributions.kl.kl_divergence(p, q_hmc_nuts)
kl_i_rmhmc = torch.distributions.kl.kl_divergence(p, q_i_rmhmc)
kl_e_rmhmc = torch.distributions.kl.kl_divergence(p, q_e_rmhmc)
print('HMC kl: ',kl_hmc)
print('NUTS HMC kl: ',kl_hmc_nuts)
print('Implicit RMHMC kl: ',kl_i_rmhmc)
print('Explicit RMHMC kl: ',kl_e_rmhmc)
xlim = [-9,9]
ylim = [0,.25]
text_x = -4.5
text_y = .233
font_size_text = 20
fs=17
vxx = torch.linspace(xlim[0],xlim[1],300)
p = torch.distributions.Normal(0,3)
v_pdf = torch.exp(p.log_prob(vxx))
fig, axs = plt.subplots(1, 4, figsize=(20,5),sharey=True)
axs[0].hist(coords_hmc[:,0], color='C0', bins=20,density=True, alpha=0.5, label='HMC',range=xlim)
axs[0].plot(vxx.numpy(), v_pdf.numpy(),'C3',label='$p(v)$')
axs[0].legend(loc=0,fontsize=fs)
axs[0].grid()
axs[0].set_xlim(xlim)
axs[0].text(text_x, text_y, r"$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_hmc), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[0].set_ylim(ylim)
axs[0].tick_params(axis='both', labelsize=fs)
axs[0].set_xlabel(r'$v$',fontsize=font_size_text)
axs[0].set_ylabel(r'$p(v)$',fontsize=font_size_text,rotation=0,labelpad=30)
axs[1].hist(coords_hmc_nuts[:,0], color='C5',bins=20,density=True, alpha=0.5,label='NUTS',range=xlim)
axs[1].plot(vxx.numpy(), v_pdf.numpy(),'C3', label='$p(v)$')
axs[1].legend(loc=0,fontsize=fs)
axs[1].grid()
axs[1].set_xlim(xlim)
axs[1].text(text_x, text_y, r"$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_hmc_nuts), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[1].set_ylim(ylim)
axs[1].tick_params(axis='both', labelsize=fs)
axs[1].set_xlabel(r'$v$',fontsize=font_size_text)
axs[2].hist(coords_i_rmhmc[:,0], color='C1',bins=20,density=True, alpha=0.5,label='Implicit\nRMHMC')
axs[2].plot(vxx.numpy(), v_pdf.numpy(),'C3', label='$p(v)$')
axs[2].legend(loc=1,fontsize=fs)
axs[2].grid()
axs[2].set_xlim(xlim)
axs[2].text(text_x, text_y, r"$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_i_rmhmc), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[2].set_ylim(ylim)
axs[2].tick_params(axis='both', labelsize=fs)
axs[2].set_xlabel(r'$v$',fontsize=font_size_text)
axs[3].hist(coords_e_rmhmc[:,0], color='C2',bins=20,density=True, alpha=0.5, label='Explicit\nRMHMC')
axs[3].plot(vxx.numpy(), v_pdf.numpy(),'C3',label='$p(v)$')
axs[3].legend(loc=0,fontsize=fs)
axs[3].grid()
axs[3].set_xlim(xlim)
axs[3].text(text_x, text_y, r"$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_e_rmhmc), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[3].set_ylim(ylim)
axs[3].tick_params(axis='both', labelsize=fs)
axs[3].set_xlabel(r'$v$',fontsize=font_size_text)
plt.tight_layout()
# plt.savefig('../../data/funnel/funnel_hist_plots_nuts.pdf',bbox_inches='tight')
plt.show()
```
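For reference, the KL divergence between two univariate Gaussians (what `torch.distributions.kl.kl_divergence` evaluates above) has a closed form. A minimal plain-Python sketch, with illustrative parameter values:

```python
import math

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(p || q) for univariate Gaussians p and q."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2)
            - 0.5)

# KL of the true marginal N(0, 3) from itself is zero
print(kl_gaussian(0.0, 3.0, 0.0, 3.0))  # → 0.0
# It grows as the fitted mean/std drift away from the true values
print(kl_gaussian(0.0, 3.0, 1.0, 2.0))
```

A value near zero therefore means the sampler's empirical Gaussian fit is close to the true marginal $p(v) = \mathcal{N}(0, 3)$.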
### DEBUG MODE
* For `hamiltorch.sample()` we can pass `debug=True`. This is useful for checking how many iterations RMHMC takes to converge and also to look at the values of the Hamiltonian.
* Also, for NUTS, debug mode returns an EXTRA output corresponding to the adapted step size.
```
# HMC NUTS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.01
num_samples = 6 #In paper: 12000
L = 25
burn = 3 #2000
############
debug = True
############
params_hmc_nuts, adapted_step_size = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init,
num_samples=num_samples, step_size=step_size,
num_steps_per_sample=L,desired_accept_rate=0.75,
sampler=hamiltorch.Sampler.HMC_NUTS,burn=burn, debug=debug)
```
* DEBUG Mode for implicit RMHMC.
```
# Implicit RMHMC with SOFTABS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.14
num_samples = 2
L = 25
threshold = 1e-3
softabs_const=10**6
fixed_point_max_iterations=1000
jitter= 0.001
############
debug = True
############
params_i_rmhmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
sampler=hamiltorch.Sampler.RMHMC, integrator=hamiltorch.Integrator.IMPLICIT,
metric=hamiltorch.Metric.SOFTABS, fixed_point_threshold=threshold, jitter=jitter,
num_steps_per_sample=L, step_size=step_size, softabs_const=softabs_const,
fixed_point_max_iterations=fixed_point_max_iterations,debug=debug)
```
The print statements below show how many iterations each implicit Euler integrator took before breaking out of the loop:
```
Converged (params), iterations: 3, params_diff: 0.00030622229678556323
Converged (momentum), iterations: 2, momenta_diff: 0.0001082777525880374
```
So here, the `params` fixed-point step took 3 iterations and the `momentum` step took 2.
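The pattern behind these convergence messages — iterate a map until successive values agree to within a threshold, or give up after a maximum number of iterations — can be sketched generically (a simplified illustration, not the hamiltorch implementation):

```python
import math

def fixed_point(g, x0, threshold=1e-3, max_iterations=1000):
    """Iterate x <- g(x) until successive iterates differ by less than threshold."""
    x = x0
    for i in range(max_iterations):
        x_new = g(x)
        converged = abs(x_new - x) < threshold
        x = x_new
        if converged:
            return x, i + 1  # converged value and iteration count
    return x, max_iterations

# Example: the fixed point of cos, i.e. the solution of x = cos(x)
root, iters = fixed_point(math.cos, 1.0)
print(root, iters)  # root ≈ 0.739
```

The `fixed_point_threshold` and `fixed_point_max_iterations` arguments passed to `hamiltorch.sample()` above play analogous roles, once for the `params` update and once for the `momentum` update.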
# Running the fleet of Virtual Wind Turbines and Edge Devices
**SageMaker Studio Kernel**: Data Science
After visualizing the data and training/optimizing/packaging the anomaly detection model, it's time to deploy it and test your virtual fleet. In this exercise you will run a local application written in Python 3 that simulates 5 wind turbines and 5 edge devices. The SageMaker Edge Agent is deployed on the edge devices.
Here you'll play the role of the **Wind Turbine Farm Operator**. You can visualize the data flowing from the sensors to the ML model and analyze the anomalies. You'll also be able to inject noise into the data (by pressing buttons) to simulate potential equipment anomalies.
<table border="0" cellpadding="0">
<tr>
<td align="center"><b>STEP-BY-STEP</b></td>
<td align="center"><b>APPLICATION ARCHITECTURE</b></td>
</tr>
<tr>
<td><img src="../imgs/EdgeManagerWorkshop_Macro.png" width="500px"></img></td>
<td><img src="../imgs/EdgeManagerWorkshop_App.png" width="500px"></img></td>
</tr>
</table>
The components of the application are:
- Simulator:
 - [Simulator](app/simulator.py): Program that launches the virtual wind turbines and the edge devices. It uses Python threads to run all 10 processes
 - [Wind Farm](app/windfarm.py): This is the application that runs on the edge device. It is responsible for reading the sensors, invoking the ML model and analyzing the anomalies
- Edge Application:
 - [Turbine](app/turbine.py): Virtual wind turbine. It reads the raw data collected from the 3D-printed mini turbine and streams it as a circular buffer. It also has a graphical representation in **IPython Widgets** that is rendered by the Simulator/Dashboard.
 - [Over The Air](app/ota.py): This is a module integrated with **IoT Jobs**. In the previous exercise you created an IoT job to deploy the model. This module gets the job document, processes it, deploys the model to each edge device, and loads it via SageMaker Edge Manager.
- [Edge client](app/edgeagentclient.py): An abstraction layer on top of the **generated stubs** (proto compilation). It makes it easy to integrate **Wind Farm** with the SageMaker Edge Agent
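The thread-per-device pattern described above can be sketched generically (a toy stand-in, not the workshop's actual simulator code):

```python
import threading
import time

def run_device(device_id, stop_event):
    """Toy stand-in for one simulated turbine/edge-device loop."""
    while not stop_event.is_set():
        # a real device would read sensors, invoke the model, and analyze anomalies
        time.sleep(0.01)

stop = threading.Event()
threads = [threading.Thread(target=run_device, args=(i, stop)) for i in range(5)]
for t in threads:
    t.start()
time.sleep(0.1)  # let the simulated fleet run briefly
stop.set()
for t in threads:
    t.join()
print("all simulated devices stopped")
```

A shared `Event` gives the simulator a clean way to halt the whole fleet at once, which is why the real application exposes `start()`/`halt()` pairs.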
### Install some dependencies
```
!apt-get -y update && apt-get -y install build-essential procps
!pip install -U grpcio-tools grpcio protobuf
!pip install paho-mqtt
!pip install ipywidgets
```
### Download the sensors data (streamed by the virtual wind turbines)
```
!mkdir -p data
!curl https://aws-ml-blog.s3.amazonaws.com/artifacts/monitor-manage-anomaly-detection-model-wind-turbine-fleet-sagemaker-neo/dataset_wind_turbine.csv.gz -o data/dataset_wind.csv.gz
import boto3
import tarfile
import os
import stat
import io
project_name='<<PROJECT_NAME>>'
sm_client = boto3.client('sagemaker')
project_id = sm_client.describe_project(ProjectName=project_name)['ProjectId']
bucket_name = 'sagemaker-wind-turbine-farm-%s' % project_id
agent_config_package_prefix = 'wind_turbine_agent/config.tgz'
agent_version = '1.20210512.96da6cc'
agent_pkg_bucket = 'sagemaker-edge-release-store-us-west-2-linux-x64'
```
### Prepare the edge devices
1. First download the deployment package that contains the IoT + CA certificates and the configuration file of the SageMaker Edge Agent.
2. Then, download the SageMaker Edge Manager package and complete the deployment process.
> You can see all the artifacts that will be loaded/executed by the virtual Edge Device in **agent/**
```
if not os.path.isdir('agent'):
s3_client = boto3.client('s3')
# Get the configuration package with certificates and config files
with io.BytesIO() as file:
s3_client.download_fileobj(bucket_name, agent_config_package_prefix, file)
file.seek(0)
# Extract the files
tar = tarfile.open(fileobj=file)
tar.extractall('.')
tar.close()
# Download and install SageMaker Edge Manager
agent_pkg_key = 'Releases/%s/%s.tgz' % (agent_version, agent_version)
# get the agent package
with io.BytesIO() as file:
s3_client.download_fileobj(agent_pkg_bucket, agent_pkg_key, file)
file.seek(0)
# Extract the files
tar = tarfile.open(fileobj=file)
tar.extractall('agent')
tar.close()
# Adjust the permissions
os.chmod('agent/bin/sagemaker_edge_agent_binary', stat.S_IXUSR|stat.S_IWUSR|stat.S_IXGRP|stat.S_IWGRP)
```
### Finally, create the SageMaker Edge Agent client stubs, using the protobuffer compiler
```
!python3 -m grpc_tools.protoc --proto_path=agent/docs/api --python_out=app/ --grpc_python_out=app/ agent/docs/api/agent.proto
```
# Simulating The Wind Turbine Farm
Now it's time to run our simulator and start playing with the turbines, the agents, and the anomalies.
> After clicking **Start**, each turbine will start buffering some data. This takes a few seconds, but once complete, the application runs in real time
> Try pressing some buttons while the simulation is running to inject noise into the data and see some anomalies
```
import sys
sys.path.insert(1, 'app')
import windfarm
import edgeagentclient
import turbine
import simulator
import ota
import boto3
from importlib import reload
reload(simulator)
reload(turbine)
reload(edgeagentclient)
reload(windfarm)
reload(ota)
iot_client = boto3.client('iot')
mqtt_host=iot_client.describe_endpoint(endpointType='iot:Data-ATS')['endpointAddress']
mqtt_port=8883
!mkdir -p agent/logs && rm -f agent/logs/*
simulator = simulator.WindTurbineFarmSimulator(5)
simulator.start()
farm = windfarm.WindTurbineFarm(simulator, mqtt_host, mqtt_port)
farm.start()
simulator.show()
```
> If you want to experiment with the deployment process while the wind farm is running, go back to [Exercise #2](02%20-%20Training%20with%20Pytorch.ipynb#Building-the-Deployment-Package-SageMaker-Edge-Manager) (the last cells), replace the variable **model_version** with the constant (string) '2.0' in the JSON document used by the IoT job, and then create a new IoT job to simulate deploying new versions of the model. Come back to this exercise to see the results.
```
farm.halt()
```
----
## Cleanup
Run the next cell only if you already finished exploring/hacking the content of the workshop.
This code will delete all the resources created so far, including the **SageMaker Project** you've created
```
import boto3
import time
from shutil import rmtree
iot_client = boto3.client('iot')
sm_client = boto3.client('sagemaker')
s3_resource = boto3.resource('s3')
policy_name='WindTurbineFarmPolicy-%s' % project_id
thing_group_name='WindTurbineFarm-%s' % project_id
fleet_name='wind-turbine-farm-%s' % project_id
# Delete all files from the S3 Bucket
s3_resource.Bucket(bucket_name).objects.all().delete()
# now deregister the devices from the fleet
resp = sm_client.list_devices(DeviceFleetName=fleet_name)
devices = [d['DeviceName'] for d in resp['DeviceSummaries']]
if len(devices) > 0:
sm_client.deregister_devices(DeviceFleetName=fleet_name, DeviceNames=devices)
# now detach and delete the IoT certificates and policy
for i,cert_arn in enumerate(iot_client.list_targets_for_policy(policyName=policy_name)['targets']):
for t in iot_client.list_principal_things(principal=cert_arn)['things']:
iot_client.detach_thing_principal(thingName=t, principal=cert_arn)
iot_client.detach_policy(policyName=policy_name, target=cert_arn)
certificateId = cert_arn.split('/')[-1]
iot_client.update_certificate(certificateId=certificateId, newStatus='INACTIVE')
iot_client.delete_certificate(certificateId=certificateId)
iot_client.delete_role_alias(roleAlias='SageMakerEdge-%s' % fleet_name)
iot_client.delete_thing_group(thingGroupName=thing_group_name)
if os.path.isdir('agent'): rmtree('agent')
sm_client.delete_project(ProjectName=project_name)
```
Mission Complete!
Congratulations! :)
# Face Recognition
Welcome! In this assignment, you're going to build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In the lecture, you also encountered [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf).
Face recognition problems commonly fall into one of two categories:
**Face Verification** "Is this the claimed person?" For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
**Face Recognition** "Who is this person?" For example, the video lecture showed a [face recognition video](https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
By the end of this assignment, you'll be able to:
* Differentiate between face recognition and face verification
* Implement one-shot learning to solve a face recognition problem
* Apply the triplet loss function to learn a network's parameters in the context of face recognition
* Explain how to pose face recognition as a binary classification problem
* Map face images into 128-dimensional encodings using a pretrained model
* Perform face verification and face recognition with these encodings
**Channels-last notation**
For this assignment, you'll be using a pre-trained model which represents ConvNet activations using a "channels last" convention, as used during the lecture and in previous programming assignments.
In other words, a batch of images will be of shape $(m, n_H, n_W, n_C)$.
## Table of Contents
- [1 - Packages](#1)
- [2 - Naive Face Verification](#2)
- [3 - Encoding Face Images into a 128-Dimensional Vector](#3)
- [3.1 - Using a ConvNet to Compute Encodings](#3-1)
- [3.2 - The Triplet Loss](#3-2)
- [Exercise 1 - triplet_loss](#ex-1)
- [4 - Loading the Pre-trained Model](#4)
- [5 - Applying the Model](#5)
- [5.1 - Face Verification](#5-1)
- [Exercise 2 - verify](#ex-2)
- [5.2 - Face Recognition](#5-2)
- [Exercise 3 - who_is_it](#ex-3)
- [6 - References](#6)
<a name='1'></a>
## 1 - Packages
Go ahead and run the cell below to import the packages you'll need.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import MaxPooling2D, AveragePooling2D
from tensorflow.keras.layers import Concatenate
from tensorflow.keras.layers import Lambda, Flatten, Dense
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.layers import Layer
from tensorflow.keras import backend as K
K.set_image_data_format('channels_last')
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
import PIL
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
<a name='2'></a>
## 2 - Naive Face Verification
In Face Verification, you're given two images and you have to determine if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is below a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width:380px;height:150px;">
<caption><center> <u> <font color='purple'> <b>Figure 1</b> </u></center></caption>
Of course, this algorithm performs poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, minor changes in head position, and so on.
You'll see that rather than using the raw image, you can learn an encoding, $f(img)$.
By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person.
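As a concrete (toy) illustration of the naive approach, with random arrays standing in for images and an arbitrary `threshold`:

```python
import numpy as np

def naive_verify(img1, img2, threshold=5000.0):
    """Compare two images pixel-by-pixel using the L2 distance on raw values."""
    dist = np.linalg.norm(img1.astype(np.float64) - img2.astype(np.float64))
    return dist, dist < threshold

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(160, 160, 3))
brighter = np.clip(img + 5, 0, 255)  # the same "face" under slightly brighter light
other = rng.integers(0, 256, size=(160, 160, 3))

dist_same, _ = naive_verify(img, brighter)
dist_other, _ = naive_verify(img, other)
print(dist_same < dist_other)  # → True
```

Even here, a uniform +5 brightness shift already produces a sizeable raw-pixel distance, which is exactly why a learned encoding is preferable.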
<a name='3'></a>
## 3 - Encoding Face Images into a 128-Dimensional Vector
<a name='3-1'></a>
### 3.1 - Using a ConvNet to Compute Encodings
The FaceNet model takes a lot of data and a long time to train. So, following the common practice in applied deep learning, you'll load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). An Inception network implementation has been provided for you in the file `inception_blocks_v2.py`, so you can take a closer look at how it is implemented.
*Hot tip:* Go to "File->Open..." at the top of this notebook to open the file directory that contains the `.py` file.
The key things to be aware of are:
- This network uses 160x160-dimensional RGB images as its input. Specifically, a face image (or batch of $m$ face images) is represented as a tensor of shape $(m, n_H, n_W, n_C) = (m, 160, 160, 3)$
- The input images are originally of shape 96x96, so you need to scale them to 160x160. This is done in the `img_to_encoding()` function.
- The output is a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
Run the cell below to create the model for face images!
```
from tensorflow.keras.models import model_from_json
json_file = open('keras-facenet-h5/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
model.load_weights('keras-facenet-h5/model.h5')
```
Now summarize the input and output shapes:
```
print(model.inputs)
print(model.outputs)
```
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
<img src="images/distance_kiank.png" style="width:680px;height:250px;">
<caption><center> <u> <font color='purple'> <b>Figure 2:</b> <br> </u> <font color='purple'>By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other.
- The encodings of two images of different persons are very different.
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
<img src="images/triplet_comparison.png" style="width:280px;height:150px;"><br>
<caption><center> <u> <font color='purple'> <b>Figure 3: </b> <br> </u> <font color='purple'> In the next section, you'll call the pictures from left to right: Anchor (A), Positive (P), Negative (N)</center></caption>
<a name='3-2'></a>
### 3.2 - The Triplet Loss
**Important Note**: Since you're using a pretrained model, you won't actually need to implement the triplet loss function in this assignment. *However*, the triplet loss is the main ingredient of the face recognition algorithm, and you'll need to know how to use it for training your own FaceNet model, as well as other types of image similarity problems. Therefore, you'll implement it below, for fun and edification. :)
For an image $x$, its encoding is denoted as $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;">
Training will use triplets of images $(A, P, N)$:
- A is an "Anchor" image--a picture of a person.
- P is a "Positive" image--a picture of the same person as the Anchor image.
- N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from the training dataset. $(A^{(i)}, P^{(i)}, N^{(i)})$ is used here to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\alpha$:
$$
|| f\left(A^{(i)}\right)-f\left(P^{(i)}\right)||_{2}^{2}+\alpha<|| f\left(A^{(i)}\right)-f\left(N^{(i)}\right)||_{2}^{2}
$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$
Here, the notation "$[z]_+$" is used to denote $max(z,0)$.
**Notes**:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.
- $\alpha$ is called the margin. It's a hyperparameter that you pick manually. You'll use $\alpha = 0.2$.
Most implementations also rescale the encoding vectors to have an L2 norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2 = 1$); you won't have to worry about that in this assignment.
<a name='ex-1'></a>
### Exercise 1 - triplet_loss
Implement the triplet loss as defined by formula (3). These are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$
*Hints*:
- Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.
- For steps 1 and 2, sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$.
- For step 4, you will sum over the training examples.
*Additional Hints*:
- Recall that the square of the L2 norm is the sum of the squared differences: $||x - y||_{2}^{2} = \sum_{i=1}^{N}(x_{i} - y_{i})^{2}$
- Note that the anchor, positive and negative encodings are of shape (*m*,128), where *m* is the number of training examples and 128 is the number of elements used to encode a single example.
- For steps 1 and 2, maintain the number of *m* training examples and sum along the 128 values of each encoding. `tf.reduce_sum` has an axis parameter. This chooses along which axis the sums are applied.
- Note that one way to choose the last axis in a tensor is to use negative indexing (axis=-1).
- In step 4, when summing over training examples, the result will be a single scalar value.
- For `tf.reduce_sum` to sum across all axes, keep the default value axis=None.
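The hint about the squared L2 norm can be checked numerically; this is just a sanity check, not part of the graded function:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 8.0])

squared_l2 = np.linalg.norm(x - y) ** 2     # square of the L2 norm
sum_of_squares = np.sum((x - y) ** 2)       # sum of squared differences
print(squared_l2, sum_of_squares)           # both ≈ 50.0 (= 9 + 16 + 25)
```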
```
# UNQ_C1(UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE
#(≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
# Step 2: Compute the (encoding) distance between the anchor and the negative
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = tf.maximum(tf.add(tf.subtract(pos_dist, neg_dist), alpha), 0)
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(basic_loss, axis=None)
### END CODE HERE
return loss
# BEGIN UNIT TEST
tf.random.set_seed(1)
y_true = (None, None, None) # It is not used
y_pred = (tf.keras.backend.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.keras.backend.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.keras.backend.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
assert type(loss) == tf.python.framework.ops.EagerTensor, "Use tensorflow functions"
print("loss = " + str(loss))
y_pred_perfect = ([1., 1.], [1., 1.], [1., 1.,])
loss = triplet_loss(y_true, y_pred_perfect, 5)
assert loss == 5, "Wrong value. Did you add the alpha to basic_loss?"
y_pred_perfect = ([1., 1.],[1., 1.], [0., 0.,])
loss = triplet_loss(y_true, y_pred_perfect, 3)
assert loss == 1., "Wrong value. Check that pos_dist = 0 and neg_dist = 2 in this example"
y_pred_perfect = ([1., 1.],[0., 0.], [1., 1.,])
loss = triplet_loss(y_true, y_pred_perfect, 0)
assert loss == 2., "Wrong value. Check that pos_dist = 2 and neg_dist = 0 in this example"
y_pred_perfect = ([0., 0.],[0., 0.], [0., 0.,])
loss = triplet_loss(y_true, y_pred_perfect, -2)
assert loss == 0, "Wrong value. Are you taking the maximum between basic_loss and 0?"
y_pred_perfect = ([[1., 0.], [1., 0.]],[[1., 0.], [1., 0.]], [[0., 1.], [0., 1.]])
loss = triplet_loss(y_true, y_pred_perfect, 3)
assert loss == 2., "Wrong value. Are you applying tf.reduce_sum to get the loss?"
y_pred_perfect = ([[1., 1.], [2., 0.]], [[0., 3.], [1., 1.]], [[1., 0.], [0., 1.,]])
loss = triplet_loss(y_true, y_pred_perfect, 1)
if (loss == 4.):
raise Exception('Perhaps you are not using axis=-1 in reduce_sum?')
assert loss == 5, "Wrong value. Check your implementation"
# END UNIT TEST
```
**Expected Output**:
<table>
<tr>
<td>
<b>loss</b>
</td>
<td>
527.2598
</td>
</tr>
</table>
<a name='4'></a>
## 4 - Loading the Pre-trained Model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, you won't train it from scratch here. Instead, you'll load a previously trained model in the following cell, which might take a couple of minutes to run.
```
FRmodel = model
```
Here are some examples of distances between the encodings between three individuals:
<img src="images/distance_matrix.png" style="width:380px;height:200px;"><br>
<caption><center> <u> <font color='purple'> <b>Figure 4:</b></u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
Now use this model to perform face verification and face recognition!
<a name='5'></a>
## 5 - Applying the Model
You're building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.
You'd like to build a face verification system that gives access to a list of people. To be admitted, each person has to swipe an identification card at the entrance. The face recognition system then verifies that they are who they claim to be.
<a name='5-1'></a>
### 5.1 - Face Verification
Now you'll build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding, you'll use `img_to_encoding(image_path, model)`, which runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a Python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
```
#tf.keras.backend.set_image_data_format('channels_last')
def img_to_encoding(image_path, model):
img = tf.keras.preprocessing.image.load_img(image_path, target_size=(160, 160))
img = np.around(np.array(img) / 255.0, decimals=12)
x_train = np.expand_dims(img, axis=0)
embedding = model.predict_on_batch(x_train)
return embedding / np.linalg.norm(embedding, ord=2)
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
```
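Note that `img_to_encoding` above divides each embedding by its L2 norm, so every stored encoding is a unit vector. For unit vectors, the squared L2 distance and the cosine similarity are tied together by $||a - b||_2^2 = 2 - 2\,(a \cdot b)$, which is easy to check numerically (toy random vectors, not real encodings):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=128)
a /= np.linalg.norm(a)          # unit-normalize, as img_to_encoding does
b = rng.normal(size=128)
b /= np.linalg.norm(b)

dist_sq = np.linalg.norm(a - b) ** 2
cos_sim = a @ b
print(np.isclose(dist_sq, 2 - 2 * cos_sim))  # → True
```

Under this normalization, a distance threshold of 0.7 corresponds to requiring a cosine similarity above $1 - 0.7^2/2 \approx 0.755$.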
Load the images of Danielle and Kian:
```
danielle = tf.keras.preprocessing.image.load_img("images/danielle.png", target_size=(160, 160))
kian = tf.keras.preprocessing.image.load_img("images/kian.jpg", target_size=(160, 160))
np.around(np.array(kian) / 255.0, decimals=12).shape
kian
np.around(np.array(danielle) / 255.0, decimals=12).shape
danielle
```
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
<a name='ex-2'></a>
### Exercise 2 - verify
Implement the `verify()` function, which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:
- Compute the encoding of the image from `image_path`.
- Compute the distance between this encoding and the encoding of the identity image stored in the database.
- Open the door if the distance is less than 0.7, else do not open it.
As presented above, you should use the L2 distance `np.linalg.norm`.
**Note**: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.
*Hints*:
- `identity` is a string that is also a key in the database dictionary.
- `img_to_encoding` has two parameters: the image_path and model.
```
# UNQ_C2(UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be an employee who works in the office.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE
# Step 1: Compute the encoding for the image.
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(database[identity] - encoding)
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome in!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE
return dist, door_open
```
Younes is trying to enter the office and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
```
# BEGIN UNIT TEST
assert(np.allclose(verify("images/camera_1.jpg", "bertrand", database, FRmodel), (0.54364836, True)))
assert(np.allclose(verify("images/camera_3.jpg", "bertrand", database, FRmodel), (0.38616243, True)))
assert(np.allclose(verify("images/camera_1.jpg", "younes", database, FRmodel), (1.3963861, False)))
assert(np.allclose(verify("images/camera_3.jpg", "younes", database, FRmodel), (1.3872949, False)))
verify("images/camera_0.jpg", "younes", database, FRmodel)
# END UNIT TEST
```
**Expected Output**:
<table>
<tr>
<td>
<b>It's Younes, welcome in!</b>
</td>
<td>
(0.5992946, True)
</td>
</tr>
</table>
Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. Naughty Benoit! The camera took a picture of Benoit ("images/camera_2.jpg").
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
Run the verification algorithm to check if Benoit can enter.
```
verify("images/camera_2.jpg", "kian", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
<b>It's not Kian, please go away</b>
</td>
<td>
(1.0259346, False)
</td>
</tr>
</table>
<a name='5-2'></a>
### 5.2 - Face Recognition
Your face verification system is mostly working. But since Kian got his ID card stolen, when he came back to the office the next day he couldn't get in!
To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, you will no longer get a person's name as one of the inputs.
<a name='ex-3'></a>
### Exercise 3 - who_is_it
Implement `who_is_it()` with the following steps:
- Compute the target encoding of the image from `image_path`
- Find the encoding from the database that has smallest distance with the target encoding.
- Initialize the `min_dist` variable to a large enough number (100). This helps you keep track of the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in `database.items()`.
- Compute the L2 distance between the target "encoding" and the current "encoding" from the database. If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
```
# UNQ_C3(UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the office by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE
## Step 1: Compute the target "encoding" for the image.
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current db_enc from the database. (≈ 1 line)
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
```
Younes is at the front door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your `who_is_it()` algorithm identifies Younes.
```
# BEGIN UNIT TEST
# Test 1 with Younes pictures
who_is_it("images/camera_2.jpg", database, FRmodel)
# Test 2 with Younes pictures
test1 = who_is_it("images/camera_0.jpg", database, FRmodel)
assert np.isclose(test1[0], 0.5992946)
assert test1[1] == 'younes'
# Test 3 with Younes pictures
test2 = who_is_it("images/younes.jpg", database, FRmodel)
assert np.isclose(test2[0], 0.0)
assert test2[1] == 'younes'
# END UNIT TEST
```
**Expected Output**:
<table>
<tr>
<td>
<b>it's Younes, the distance is 0.5992946</b>
</td>
<td>
(0.5992946, 'younes')
</td>
</tr>
</table>
You can change "camera_0.jpg" (picture of Younes) to "camera_1.jpg" (picture of Bertrand) and see the result.
**Congratulations**!
You've completed this assignment, and your face recognition system is working well! It not only lets in authorized persons, but now people don't need to carry an ID card around anymore!
You've now seen how a state-of-the-art face recognition system works, and can describe the difference between face recognition and face verification. Here's a quick recap of what you've accomplished:
- Posed face recognition as a binary classification problem
- Implemented one-shot learning for a face recognition problem
- Applied the triplet loss function to learn a network's parameters in the context of face recognition
- Mapped face images into 128-dimensional encodings using a pretrained model
- Performed face verification and face recognition with these encodings
Great work!
<font color='blue'>
**What you should remember**:
- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
- Triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.
**Ways to improve your facial recognition model**:
Although you won't implement these here, here are some ways to further improve the algorithm:
- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then, given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.
- Crop the images to contain just the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
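The first suggestion — storing several encodings per person — can be sketched as follows (toy random vectors stand in for real encodings, and `verify_multi` is a hypothetical helper, not part of the assignment):

```python
import numpy as np

def verify_multi(target_enc, stored_encs, threshold=0.7):
    # Compare against every stored encoding of the person and keep the best match
    dists = [np.linalg.norm(target_enc - enc) for enc in stored_encs]
    best = min(dists)
    return best, best < threshold

# Toy data: three stored "encodings" of one person, plus a probe near one of them
rng = np.random.default_rng(1)
stored = [rng.normal(size=128) for _ in range(3)]
probe = stored[1] + 0.02 * rng.normal(size=128)

dist, door_open = verify_multi(probe, stored)
print(dist, door_open)
```

Taking the minimum distance means one good match (e.g. under similar lighting) is enough to open the door.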
<a name='6'></a>
## 6 - References
1. Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)
2. Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). [DeepFace: Closing the gap to human-level performance in face verification](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf)
3. This implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
4. Further inspiration was found here: https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/
5. And here: https://github.com/nyoki-mtl/keras-facenet/blob/master/notebook/tf_to_keras.ipynb
# Python Job Interview Questions
1) What is Python?
- Python is a high-level, interpreted, interactive and object-oriented language.
- Python is a very readable language.
2) What are some key features of Python?
- Object-oriented
- Free and open source
- It has a large community.
- Simple and readable syntax.
3) What is the difference between Deep Copy and Shallow Copy?
- A deep copy recursively copies nested objects as well, so later changes to the original's nested elements do not affect the copy.
- A shallow copy copies only the outer object and keeps references to the nested objects, so changing a nested element of the original also changes the copy.
```
# Examples of Deep copy
import copy
old_list = [[1, 1, 1], [2, 2, 2]]
new_list = copy.deepcopy(old_list)
old_list[1][0] = 100
print("Old list:", old_list)
print("New list:", new_list)
# Example of shallow copy
old_list2 = [[1, 1, 1], [2, 2, 2]]
new_list2 = copy.copy(old_list2)
old_list2[1][1] = 100
print("Old list:", old_list2)
print("New list:", new_list2)
```
4) What is the difference between lists and tuples?
- Lists are mutable: their elements can be changed after creation.
- Tuples are immutable: assigning to an element raises a `TypeError`.
```
lists = [1, 2, 3]
lists[0] = 100
print(lists)
tuples = (1, 2, 3)
tuples[0] = 100  # raises TypeError: 'tuple' object does not support item assignment
```
5) What is Flask?
* A lightweight web (micro)framework for Python.
6) How does memory management work in Python?
* Python stores objects in a private heap space, which we cannot access directly.
* However, some of its behaviour can be inspected and controlled through the core API (e.g. the `gc` module).
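As an illustration (standard-library modules only), here are a couple of the heap-related features we *can* reach:

```python
import gc
import sys

# Objects live on Python's private heap; we cannot address it directly,
# but the standard library exposes some of its machinery.
print(sys.getsizeof(42))         # size in bytes of a small int object
print(sys.getsizeof([1, 2, 3]))  # size of the list object itself (not its elements)

# Run the cyclic garbage collector by hand
collected = gc.collect()
print("unreachable objects collected:", collected)
```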
7) What are the differences between the help() and dir() functions?
* The help() function displays an object's documentation (its docstring).
* The dir() function lists all the attributes and methods of the given object.
```
help(copy.copy)
dir(copy.copy)
```
8) What is Monkey Patching?
* It means changing a class or module at run time, while the code is running.
* It is generally used for debugging.
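A minimal sketch of monkey patching (the class and names here are invented for illustration):

```python
class Greeter:
    def greet(self):
        return "Hello"

def debug_greet(self):
    # replacement injected while debugging
    return "Hello (patched)"

g = Greeter()
print(g.greet())  # Hello

# Monkey patch: rebind the method at run time, without touching the class source
Greeter.greet = debug_greet
print(g.greet())  # Hello (patched) — existing instances pick up the change too
```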
9) What is a Dictionary?
* A dictionary is a structure that stores key-value pairs.
* {key: value}, e.g. {"Ankara": 1000}
```
dictionary = {"Ankara":1000,"istanbul":2000}
print(dictionary)
```
10) What are the concepts of args and kwargs?
* When we don't know how many positional arguments a function will receive, we use *args.
* **kwargs does the same for keyword arguments, which the function receives as a dictionary.
```
def function(*args):
for each in args:
print(each)
function(10,11,12)
def function2(**kwargs):
for each in kwargs:
print(each,kwargs[each])
function2(a=1,b=2,c=3)
```
11) What are negative indexes?
* A normal index counts forward from zero; a negative index counts backward from the end of the sequence, starting at -1.
```
list2 = [1,2,3,4,5]
print("positive index: ",list2[0])
print("negative index: ",list2[-1])
```
12) How do you shuffle a list randomly?
```
mylist=[0,1,2,3,4,5,6,7,8]
from random import shuffle
shuffle(mylist)
print("shuffled: ", mylist)
mylist.sort()
print("sorted: ",mylist)
```
13) What do the join() and split() functions do?
```
new = ".".join("new function")
print(new)
first = new.split(".")
print(first)
j = "".join(first)
print(j)
```
14) Can keywords be used as identifiers?
* No: using a keyword such as `for` or `class` as an identifier raises a `SyntaxError`.
* `float`, by contrast, is a built-in name rather than a keyword, so assigning to it is legal but shadows the built-in type (bad practice).
```
float = 15  # legal, but shadows the built-in `float` type; `for = 15` would be a SyntaxError
```
15) What is the meaning of leading and trailing whitespace?
* Let it be a string "hello world"
* This string has spaces at the beginning and at the end.
* The space at the beginning is leading whitespace
* The space at the end is trailing whitespace
* This is how we can delete them.
```
string = " hello world "
print(string.lstrip()) # remove the space at the beginning
print(string.rstrip()) # remove the space at the end
print(string.strip()) # remove the space at the beginning and at the end
```
16) How do you convert a string to lowercase and uppercase?
```
string = "Hello World"
print(string.lower())
print(string.upper())
```
17) How to use pass, break and continue in Python?
```
# If you do not know what you should write inside the function, you can use pass.
def getResult():
pass
# break
for each in range(5):
print(each)
if each == 4:
break
# continue skips the rest of the loop body for the current iteration
for each in range(5):
    if each == 2:
        continue  # 2 is skipped
    print(each)
```
18) What is the meaning of //, % and ** ?
* // : integer (floor) division
* ** : exponentiation
* % : remainder (modulo)
```
print(10/3)
print(10//3)
print(9 * 9) # multiplication
print(9 ** 2) # power
print(10/3) # divison
print(10%3) # remainder
```
19) What does the membership operator do?
* The `in` operator tests whether a value occurs in a sequence; for strings the test is case-sensitive.
```
'lo' in "Hello World!"
"wor" in "Hello World!"
"Wor" in "Hello World!"
```
## Feature reduction
The data can be downloaded [here](https://drive.google.com/drive/folders/1yZI5v3ws3b8GZMl_ACe4TO_qebdS2fUz?usp=sharing). It is contained in `srp_raw01.zip` and has to be moved to `/data/raw`.
The resulting folder structure looks like this:
`/data/raw/n`, `/data/raw/o` and `/data/raw/x`.
In this notebook the feature space of the data is reduced.
By reducing redundancy in the data, the model shrinks in size and may improve in performance, because fewer weights and biases have to be adjusted.
```
from os.path import join
raw = join('data', 'raw')
processed = join('data', 'processed')
from src.utils import reset_and_distribute_data
reset_and_distribute_data(raw, processed, [400,0,100])
```
In essence the only change undertaken in this notebook is the `color_mode` parameter given to the `flow_from_directory` function.
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
def create_generator(data_dir, batch_size, datagen):
full_path = join(processed, data_dir)
return datagen.flow_from_directory(
full_path,
target_size=(32, 32),
color_mode='grayscale',
batch_size=batch_size,
class_mode='binary')
train_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = create_generator('train', 20, train_datagen)
test_generator = create_generator('test', 10, test_datagen)
```
The `model.summary` shows that the number of parameters is reduced by almost a factor of three (it was 99491 in the previous models). This translates directly into the size of the saved model.
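Where that factor of roughly three comes from can be checked by hand: a dense layer has `n_in * n_out + n_out` parameters, and switching from RGB (32×32×3 inputs) to grayscale (32×32) shrinks only the first layer, which dominates the count. A small sketch in plain Python, mirroring the 32-32-3 architecture:

```python
def dense_params(n_in, n_out):
    # a fully connected layer has n_in * n_out weights plus n_out biases
    return n_in * n_out + n_out

def mlp_params(n_input, hidden=(32, 32), n_classes=3):
    total, n_in = 0, n_input
    for h in hidden:
        total += dense_params(n_in, h)
        n_in = h
    return total + dense_params(n_in, n_classes)

rgb_params = mlp_params(32 * 32 * 3)  # earlier RGB models
gray_params = mlp_params(32 * 32)     # the grayscale model in this notebook
print(rgb_params, gray_params)        # 99491 33955
```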
```
from tensorflow.keras import layers
from tensorflow.keras import models
model = models.Sequential()
model.add(layers.Flatten(input_shape=(32, 32, 1)))
model.add(layers.Dense(32,'relu'))
model.add(layers.Dense(32,'relu'))
model.add(layers.Dense(3, 'softmax'))
model.summary()
from tensorflow.keras.optimizers import SGD
optimizer = SGD(lr=0.005, momentum=0.9, nesterov=True)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['acc'])
from tensorflow.keras.callbacks import TensorBoard
import numpy as np
from datetime import datetime
from os import mkdir
log_dir = join('logs', 'srp62', datetime.now().strftime("%Y-%m-%dT%H-%M-%S"))
mkdir(log_dir)
callbacks = [ TensorBoard(
log_dir=log_dir,
histogram_freq=1,
embeddings_freq=1) ]
history = model.fit_generator(
train_generator,
steps_per_epoch=20,
epochs=20,
callbacks=callbacks)
model.evaluate_generator(test_generator)
```
The resulting model is 295 KB, whereas the old models were 807 KB. The model is much smaller now without any measurable loss in performance.
```
model_path = join('models', 'devel', 'srp62.h5')
model.save(model_path)
```
<a href="https://colab.research.google.com/github/cedro3/data-efficient-gans/blob/master/DiffAugment_GAN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data-Efficient GANs with DiffAugment
## Setup
```
# Install tensorflow 1.15.0
!pip uninstall -y tensorflow tensorflow-probability
!pip install tensorflow-gpu==1.15.0 tensorflow-datasets==2.1.0
# Clone the code from GitHub
!git clone https://github.com/cedro3/data-efficient-gans.git
# Change into the DiffAugment-stylegan2 directory
%cd data-efficient-gans/DiffAugment-stylegan2
# Download the Yui Aragaki images and pretrained weights from Google Drive
import requests
import sys
import os
import zipfile
def download_file_from_google_drive(id, destination):
# URL of the download page
URL = "https://drive.google.com/uc?id=1rlmCCuLH4euzIwzqlYh8LiF5iizh5UWV&export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
if __name__ == "__main__":
file_id = 'TAKE ID FROM SHAREABLE LINK'
destination = './yui.zip'  # destination path
download_file_from_google_drive(file_id, destination)
# Extract the zip file
zipf = zipfile.ZipFile('./yui.zip')
zipf.extractall()
zipf.close()
```
# Function Definitions
```
# Define the helper functions
import tensorflow as tf
import os
import numpy as np
import PIL
import IPython
from multiprocessing import Pool
import matplotlib.pyplot as plt
from dnnlib import tflib, EasyDict
from training import misc, dataset_tool
from metrics import metric_base
from metrics.metric_defaults import metric_defaults
def _generate(network_name, num_rows, num_cols, seed, resolution):
if seed is not None:
np.random.seed(seed)
with tf.Session():
_, _, Gs = misc.load_pkl(network_name)
z = np.random.randn(num_rows * num_cols, Gs.input_shape[1])
outputs = Gs.run(z, None, output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True))
outputs = np.reshape(outputs, [num_rows, num_cols, *outputs.shape[1:]])
outputs = np.concatenate(outputs, axis=1)
outputs = np.concatenate(outputs, axis=1)
img = PIL.Image.fromarray(outputs)
img = img.resize((resolution * num_cols, resolution * num_rows), PIL.Image.ANTIALIAS)
return img
def generate(network_name, num_rows, num_cols, seed=None, resolution=128):
with Pool(1) as pool:
return pool.apply(_generate, (network_name, num_rows, num_cols, seed, resolution))
```
# Creating the Dataset
Create a dataset in tfrecords format from 100 images of size 64×64.
```
# Load 100-shot-gakki and build the training dataset
data_dir = dataset_tool.create_dataset('100-shot-gakki')
training_images = []
for fname in os.listdir(data_dir):
if fname.endswith('.jpg'):
training_images.append(np.array(PIL.Image.open(os.path.join(data_dir, fname))))
imgs = np.reshape(training_images, [5, 20, *training_images[0].shape])
imgs = np.concatenate(imgs, axis=1)
imgs = np.concatenate(imgs, axis=1)
PIL.Image.fromarray(imgs).resize((1000, 250), PIL.Image.ANTIALIAS)
```
# Running the Training
Training time depends on the GPU you have been allocated; use the figures below as a guide.\
P100: about 7.3 h (kimg=300)\
V100: about 4.1 h (kimg=300)
Note: if you were allocated a K80, or simply don't want to spend time training, weights trained with kimg=500 are provided, so it is fine to skip this step.
```
# Check which GPU is allocated
!nvidia-smi
# Run training
!python3 run_few_shot.py --dataset=100-shot-gakki --resolution=64 --total-kimg=300
```
# Using the Pretrained Weights
If you ran the training yourself, the weights (network-snapshot-XXXXXX.pkl) are created one level below the results folder; move that file into the DiffAugment-stylegan2 directory.\
Then change the arguments of **generate() and generate_gif.py** to that file name.
```
# Generate images with the pretrained weights
generate('network-snapshot-gakki-000500.pkl', num_rows=2, num_cols=5, seed=3)
# Create a GIF animation (interp.gif) with the pretrained weights
!python3 generate_gif.py -r network-snapshot-gakki-000500.pkl -o interp.gif --num-rows=2 --num-cols=3 --seed=1
IPython.display.Image(open('interp.gif', 'rb').read())
```
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# CCXT - Calculate Support and Resistance
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/CCXT/CCXT_Calculate_Support_and_Resistance.ipynb" target="_parent"><img src="https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg=="/></a>
**Tags:** #ccxt #bitcoin #trading
Prerequisite: get a Binance API key
```
!pip install trendln matplotlib==3.1.3 --user
import naas
import ccxt
import pandas as pd
from datetime import datetime
import naas_drivers
import trendln
binance = ccxt.binance({
'apiKey': naas.secret.get('binance_api'),
'secret': naas.secret.get('binance_secret')
})
data = binance.fetch_ohlcv(symbol = 'BTC/USDT', limit = 180, timeframe = '4h')
df = pd.DataFrame(data, columns=["Date","Open","High","Low","Close","Volume"])
df['Date'] = [datetime.fromtimestamp(float(time)/1000) for time in df['Date']]
df
fig = trendln.plot_support_resistance(
df[-1000:].Close, #as per h for calc_support_resistance
xformatter = None, #x-axis data formatter turning numeric indexes to display output
# e.g. ticker.FuncFormatter(func) otherwise just display numeric indexes
numbest = 1, #number of best support and best resistance lines to display
fromwindows = True, #draw numbest best from each window, otherwise draw numbest across whole range
pctbound = 0.1, # bound trend line based on this maximum percentage of the data range above the high or below the low
extmethod = trendln.METHOD_NUMDIFF,
method=trendln.METHOD_PROBHOUGH,
window=125,
errpct = 0.005,
hough_prob_iter=50,
sortError=False,
accuracy=1)
import plotly.tools as tls
import plotly.graph_objects as go
plotly_fig = tls.mpl_to_plotly(fig)
layout = dict(
dragmode="pan",
xaxis_rangeslider_visible=False,
showlegend=True,
)
new_data = list(plotly_fig.data)
new_data.pop(2)
new_data.pop(2)
new_data.pop(1)
new_data.pop(1)
fig = go.Figure(data=new_data, layout=layout)
fig
```
<a href="https://colab.research.google.com/github/AbuKaisar24/Machine-Learning-Algorithms-Performance-Measurement-for-Bengali-News-Sentiment-Classification.-/blob/master/Bengali_Newspaper_Sentiment_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/gdrive')
import pandas as pd
df = pd.read_excel("gdrive/My Drive/Colab Notebooks/ok.xlsx",encoding='utf-8')
df=df.drop("News Type",axis=1)
df.head(14)
df.columns
df.dtypes
df.shape
df['Headline'].value_counts()
df.isnull().sum()
df.dropna()
for i in range(5):
print("News:",i+1)
print("Text:",df.Headline[i])
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
df['Sentiment'].plot(kind='hist')
df['length'] = df['Headline'].str.split().apply(len)
df.head(10)
df.hist(column='length', by='Sentiment', bins=50,figsize=(10,5))
df.length.describe()
import string
def remove_punc(s):
table = str.maketrans({key: None for key in string.punctuation})
return s.translate(table)
x = df['Headline'].fillna(' ')
y = df['Sentiment']
print(x.head())
print(y.head())
x = df['Headline'].apply(remove_punc)
x.shape
contractions = {
"বি.দ্র ": "বিশেষ দ্রষ্টব্য",
"ড.": "ডক্টর",
"ডা.": "ডাক্তার",
"ইঞ্জি:": "ইঞ্জিনিয়ার",
"রেজি:": "রেজিস্ট্রেশন",
"মি.": "মিস্টার",
"মু.": "মুহাম্মদ",
"মো.": "মোহাম্মদ",
}
import re
import string
from collections import Counter
vocab = Counter()
def clean_text(x,remove_stopwords = False):
if True:
new_text = []
for word in x:
if word in contractions:
new_text.append(contractions[word])
else:
new_text.append(word)
x = " ".join(new_text)
# Format words and remove unwanted characters
whitespace = re.compile(u"[\s\u0020\u00a0\u1680\u180e\u202f\u205f\u3000\u2000-\u200a]+", re.UNICODE)
bangla_digits = u"[\u09E6\u09E7\u09E8\u09E9\u09EA\u09EB\u09EC\u09ED\u09EE\u09EF]+"
english_chars = u"[a-zA-Z0-9]"
punc = u"[(),$%^&*+={}\[\]:\"|\'\~`<>/,¦!?½£¶¼©⅐⅑⅒⅓⅔⅕⅖⅗⅘⅙⅚⅛⅜⅝⅞⅟↉¤¿º;-]+"
bangla_fullstop = u"\u0964" #bangla fullstop(dari)
punctSeq = u"['\"“”‘’]+|[.?!,…]+|[:;]+"
x = re.sub(bangla_digits, " ", x)
x = re.sub(punc, " ", x)
x = re.sub(english_chars, " ", x)
x = re.sub(bangla_fullstop, " ", x)
x = re.sub(punctSeq, " ", x)
x = whitespace.sub(" ", x).strip()
x = re.sub(r'https?:\/\/.*[\r\n]*', '', x, flags=re.MULTILINE)
x = re.sub(r'\<a href', ' ', x)
x = re.sub(r'&‘:‘ ’', '', x)
x = re.sub(r'[_"\-;%()|+&=*%.,!?:#$@\[\]/]। ,', ' ', x)
x = re.sub(r'<br />', ' ', x)
x = re.sub(r'\'', ' ', x)
x = re.sub(r"[\@$#%~+-\.\'।\"]"," ",x)
x = re.sub(r"(?m)^\s+", "", x)
x = re.sub("[()]","",x)
x = re.sub("[‘’]","",x)
x = re.sub("[!]","",x)
x = re.sub("[/]","",x)
x = re.sub("[:]","",x)
x = re.sub('\ |\?|\.|\!|\/|\;|\:', ' ',x)
x = x.strip("/")
if remove_stopwords:
k = []
with open('gdrive/My Drive/Colab Notebooks/Banglastopword.txt', 'r',encoding="utf-8") as f:
for word in f:
word = word.split()
k.append(word[0])
x = [t for t in x if t not in k]
x = "".join(x)
return x
import matplotlib.pyplot as plt
import seaborn as sns
import nltk
from nltk.corpus import stopwords
import string
import math
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score, roc_curve
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
%matplotlib inline
vocab = CountVectorizer(analyzer=clean_text,encoding='utf-8').fit(x)
print(len(vocab.vocabulary_))
r0 = x[0]
print(r0)
vocab0 = vocab.transform([r0])
print(vocab0)
print(vocab.get_feature_names()[50])
x = vocab.transform(x)
print("Shape of the sparse matrix: ", x.shape)
print("Non-Zero occurences: ",x.nnz)
density = (x.nnz/(x.shape[0]*x.shape[1]))*100
print("Density of the matrix = ",density)
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.15,random_state=101)
from sklearn.naive_bayes import MultinomialNB
mnb = MultinomialNB()
mnb.fit(x_train,y_train)
predmnb = mnb.predict(x_test)
print("Confusion Matrix for Multinomial Naive Bayes:")
print(confusion_matrix(y_test,predmnb))
print("Score:",round(accuracy_score(y_test,predmnb)*100,2))
print("Classification Report:",classification_report(y_test,predmnb))
acc1=metrics.accuracy_score(y_test, predmnb)
print("Accuracy:",acc1*100,"%")
from sklearn.ensemble import RandomForestClassifier
rmfr = RandomForestClassifier()
rmfr.fit(x_train,y_train)
predrmfr = rmfr.predict(x_test)
print("Confusion Matrix for Random Forest Classifier:")
print(confusion_matrix(y_test,predrmfr))
print("Score:",round(accuracy_score(y_test,predrmfr)*100,2))
print("Classification Report:",classification_report(y_test,predrmfr))
acc2=metrics.accuracy_score(y_test,predrmfr)
print("Accuracy:",acc2*100,"%")
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(x_train,y_train)
preddt = dt.predict(x_test)
print("Confusion Matrix for Decision Tree:")
print(confusion_matrix(y_test,preddt))
print("Score:",round(accuracy_score(y_test,preddt)*100,2))
print("Classification Report:",classification_report(y_test,preddt))
acc3=metrics.accuracy_score(y_test,preddt)
print("Accuracy:",acc3*100,"%")
from sklearn.svm import SVC
svm = SVC(random_state=101)
svm.fit(x_train,y_train)
predsvm = svm.predict(x_test)
print("Confusion Matrix for Support Vector Machines:")
print(confusion_matrix(y_test,predsvm))
print("Score:",round(accuracy_score(y_test,predsvm)*100,2))
print("Classification Report:",classification_report(y_test,predsvm))
import pickle
acc4=metrics.accuracy_score(y_test,predsvm)
print("Accuracy:",acc4*100,"%")
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(x_train,y_train)
predknn = knn.predict(x_test)
print("Confusion Matrix for K Neighbors Classifier:")
print(confusion_matrix(y_test,predknn))
print("Score: ",round(accuracy_score(y_test,predknn)*100,2))
print("Classification Report:")
print(classification_report(y_test,predknn))
acc5=metrics.accuracy_score(y_test,predknn)
print("Accuracy:",acc5*100,"%")
headline=input()
prt=vocab.transform([headline])
m1=svm.predict(prt)
print("Predicted Rating:",m1)
if m1==0:
print("Negative News Headline")
else:
print("Positive News Headline")
m2=knn.predict(prt)
print("Predicted Rating:",m2)
if m2==0:
print("Negative News Headline")
else:
print("Positive News Headline")
m3=dt.predict(prt)
print("Predicted Rating:",m3)
if m3==0:
print("Negative News Headline")
else:
print("Positive News Headline")
m4=mnb.predict(prt)
print("Predicted Rating:",m4)
if m4==0:
print("Negative News Headline")
else:
print("Positive News Headline")
m5=rmfr.predict(prt)
print("Predicted Rating:",m5)
if m5==0:
print("Negative News Headline")
else:
print("Positive News Headline")
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
objects = ('NB','RF','DT','SVM','KNN')
y_pos = np.arange(len(objects))
performance = [acc1,acc2,acc3,acc4,acc5]
plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('Score')
plt.title('Best Accuracy')
plt.show()
```
Current and near-term quantum computers suffer from imperfections, as we have repeatedly pointed out. This is why we cannot run long algorithms, that is, deep circuits, on them. A new breed of algorithms has appeared since 2013 that focuses on getting an advantage from imperfect quantum computers. The basic idea is extremely simple: run a short sequence of gates where some gates are parametrized. Then read out the result, make adjustments to the parameters on a classical computer, and repeat the calculation with the new parameters on the quantum hardware. This way we create an iterative loop between the quantum and the classical processing units, creating classical-quantum hybrid algorithms.
<img src="../figures/hybrid_classical_quantum.svg" alt="Hybrid classical-quantum paradigm" style="width: 400px;"/>
These algorithms are also called variational to reflect the variational approach to changing the parameters. One of the most important examples of this approach is the quantum approximate optimization algorithm, which is the subject of this notebook.
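As a toy illustration of this loop — simulated entirely with NumPy/SciPy, with a single qubit and one rotation angle standing in for a real parametrized circuit:

```python
import numpy as np
from scipy.optimize import minimize

# Toy hybrid loop: the "quantum" part is a single qubit rotated by theta,
# the "classical" part is SciPy's optimizer adjusting theta between runs.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def energy(theta):
    # |psi(theta)> = Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))  # expectation value <Z>

result = minimize(energy, x0=[0.1], method="Nelder-Mead")
print(result.fun)  # close to -1, the minimum of <Z>, reached near theta = pi
```

Each call to `energy` plays the role of one run on the quantum hardware; the optimizer closes the classical half of the loop.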
# Quantum approximate optimization algorithm
The quantum approximate optimization algorithm (QAOA) is a shallow-circuit variational algorithm for gate-model quantum computers that was inspired by quantum annealing. We discretize the adiabatic pathway in some $p$ steps, where $p$ influences the precision. Each discrete time step $i$ has two parameters, $\beta_i, \gamma_i$. The classical variational algorithm optimizes over these parameters based on the energy observed at the end of a run on the quantum hardware.
More formally, we want to discretize the time-dependent $H(t)=(1-t)H_0 + tH_1$ under adiabatic conditions. We achieve this by Trotterizing the unitary. For instance, for time step $t_0$, we can split this unitary as $U(t_0) = U(H_0, \beta_0)U(H_1, \gamma_0)$. We can continue doing this for subsequent time steps, eventually splitting up the evolution to $p$ such chunks:
$$
U = U(H_0, \beta_0)U(H_1, \gamma_0)\ldots U(H_0, \beta_p)U(H_1, \gamma_p).
$$
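This Trotter product can be checked numerically on a toy single-qubit example (a sketch with $H_0=-X$, $H_1=-Z$, not the notebook's actual two-qubit Hamiltonians):

```python
import numpy as np
from scipy.linalg import expm

# Toy single-qubit example: H0 = -X (mixing), H1 = -Z (cost)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H0, H1, t = -X, -Z, 1.0

exact = expm(-1j * t * (H0 + H1))  # exact evolution under H0 + H1

def trotter(p):
    # p alternating slices U(H0, t/p) U(H1, t/p)
    step = expm(-1j * (t / p) * H0) @ expm(-1j * (t / p) * H1)
    return np.linalg.matrix_power(step, p)

err = lambda p: np.linalg.norm(trotter(p) - exact)
print(err(1), err(100))  # the error shrinks as the number of slices grows
```

Since $H_0$ and $H_1$ do not commute, a single slice is only approximate, but the product converges to the exact unitary as $p$ grows.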
At the end of optimizing the parameters, this discretized evolution will approximate the adiabatic pathway:
<img src="../figures/qaoa_process.svg" alt="Quantum approximate optimization algorithm" style="width: 400px;"/>
The Hamiltonian $H_0$ is often referred to as the driving or mixing Hamiltonian, and $H_1$ as the cost Hamiltonian. The simplest mixing Hamiltonian is $H_0 = -\sum_i \sigma^X_i$, the same as the initial Hamiltonian in quantum annealing. By alternating between the two Hamiltonians, the mixing Hamiltonian drives the state towards an equal superposition, whereas the cost Hamiltonian tries to seek its own ground state.
Let us import the necessary packages first:
```
import numpy as np
from functools import partial
from pyquil import Program, api
from pyquil.paulis import PauliSum, PauliTerm, exponential_map, sZ
from pyquil.gates import *
from scipy.optimize import minimize
from forest_tools import *
np.set_printoptions(precision=3, suppress=True)
qvm_server, quilc_server, fc = init_qvm_and_quilc()
n_qubits = 2
```
Now we can define our mixing Hamiltonian on some qubits. In the notebook on classical and quantum many-body physics, we had to define, for instance, an `IZ` operator to express $\mathbb{I}\otimes\sigma_1^Z$, that is, the $\sigma_1^Z$ operator acting only on qubit 1. Here we can achieve the same effect in the following way (this time using the Pauli-X operator). The coefficient is the strength of the transverse field at the given qubit, and the operator acts trivially on all qubits except the given one. Let's define the mixing Hamiltonian over two qubits:
```
Hm = [PauliTerm("X", i, -1.0) for i in range(n_qubits)]
```
As an example, we will minimize the Ising problem defined by the cost Hamiltonian $H_c=-\sigma^Z_1 \otimes \sigma^Z_2$, whose minimum is reached whenever $\sigma^Z_1 = \sigma^Z_2$ (for the states $|00\rangle$, $|11\rangle$, or any superposition of the two).
```
J = np.array([[0,1],[0,0]]) # weight matrix of the Ising model. Only the coefficient (0,1) is non-zero.
Hc = []
for i in range(n_qubits):
    for j in range(n_qubits):
        Hc.append(PauliTerm("Z", i, -J[i, j]) * PauliTerm("Z", j, 1.0))
```
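As a quick sanity check (a plain-NumPy sketch, independent of pyQuil), we can confirm the spectrum of this cost Hamiltonian and its two degenerate ground states:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
Hc_matrix = -np.kron(Z, Z)   # -sigma^Z_1 ⊗ sigma^Z_2 as a 4x4 matrix

# The Hamiltonian is diagonal in the computational basis |00>, |01>, |10>, |11>
energies = np.diag(Hc_matrix)
print(energies)   # [-1.  1.  1. -1.] -> |00> and |11> are degenerate ground states
```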
During the iterative procedure, we will need to compute $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$. Using the function `exponential_map` of pyQuil, we can build two sets of functions that take $\beta$ and $\gamma$ respectively and return $e^{-i \beta H_c}$ and $e^{-i \gamma H_m}$.
```
exp_Hm = []
exp_Hc = []
for term in Hm:
    exp_Hm.append(exponential_map(term))
for term in Hc:
    exp_Hc.append(exponential_map(term))
```
We set the number of time evolution steps $p=1$ and initialize the $\beta_i$ and $\gamma_i$ parameters:
```
p = 1
β = np.random.uniform(0, np.pi*2, p)
γ = np.random.uniform(0, np.pi*2, p)
```
The initial state is a uniform superposition of all the states $|q_1,...,q_n\rangle$. It can be created by applying a Hadamard gate to every qubit of a new program (all qubits start in $|0\rangle$).
```
initial_state = Program()
for i in range(n_qubits):
    initial_state += H(i)
```
To create the circuit, we need to compose the different unitaries given by the `exponential_map` functions above.
```
def create_circuit(β, γ):
    circuit = Program()
    circuit += initial_state
    for i in range(p):
        for term_exp_Hc in exp_Hc:
            circuit += term_exp_Hc(-β[i])
        for term_exp_Hm in exp_Hm:
            circuit += term_exp_Hm(-γ[i])
    return circuit
```
We now create a function `evaluate_circuit` that takes a single vector `beta_gamma` (the concatenation of $\beta$ and $\gamma$) and returns $\langle H_c \rangle = \langle \psi | H_c | \psi \rangle$ where $\psi$ is defined by the circuit created with the function above.
```
def evaluate_circuit(beta_gamma):
    β = beta_gamma[:p]
    γ = beta_gamma[p:]
    circuit = create_circuit(β, γ)
    return qvm.pauli_expectation(circuit, sum(Hc))
```
Finally, we optimize the angles:
```
qvm = api.QVMConnection(endpoint=fc.sync_endpoint, compiler_endpoint=fc.compiler_endpoint)
result = minimize(evaluate_circuit, np.concatenate([β, γ]), method='L-BFGS-B')
result
```
# Analysis of the results
We create a circuit using the optimal parameters found.
```
circuit = create_circuit(result['x'][:p], result['x'][p:])
```
We use the wavefunction simulator in order to display the state created by the circuit.
```
wf_sim = api.WavefunctionSimulator(connection=fc)
state = wf_sim.wavefunction(circuit)
print(state)
```
We see that the state is approximately $(0.5 + 0.5i) \left( |00 \rangle + |11 \rangle \right) = e^{i \theta} \frac{1}{\sqrt{2}} \left( |00 \rangle + |11 \rangle \right)$, where $\theta$ is a phase factor that doesn't change the probabilities. It corresponds to a uniform superposition of the two solutions of the classical problem: $(\sigma_1=1$, $\sigma_2=1)$ and $(\sigma_1=-1$, $\sigma_2=-1)$.
Let's now try to evaluate the operators $\sigma^Z_1$ and $\sigma^Z_2$ independently:
```
print(qvm.pauli_expectation(circuit, PauliSum([sZ(0)])))
print(qvm.pauli_expectation(circuit, PauliSum([sZ(1)])))
```
We see that both are approximately equal to zero. This is expected given the state we found above (each spin takes the values $-1$ and $1$ half of the time). It corresponds to typical quantum behavior where $\mathbb{E}[\sigma^Z_1 \sigma^Z_2] \neq \mathbb{E}[\sigma^Z_1] \mathbb{E}[\sigma^Z_2]$.
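The same numbers can be reproduced with a plain-NumPy sketch on the ideal Bell state, independent of the QVM:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I = np.eye(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def expval(op):
    """<psi|op|psi> for a state vector psi."""
    return float(np.real(psi.conj() @ op @ psi))

print(expval(np.kron(Z, I)))   # <Z1> = 0
print(expval(np.kron(I, Z)))   # <Z2> = 0
print(expval(np.kron(Z, Z)))   # <Z1 Z2> = 1, despite zero marginals
```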
# Cognitive Load Simulation
Cognitive load is the amount of working-memory resources used during problem solving and learning activities.

Total cognitive load = intrinsic cognitive load + extrinsic cognitive load + germane cognitive load, where:
- intrinsic cognitive load is the load inherent to a topic,
- extrinsic cognitive load is the load caused by the way a topic is presented to the learner,
- germane cognitive load is the load involved in fitting new information into schemas in long-term memory.

Extrinsic and germane cognitive load can be reduced with intelligent instructional design.
## Objective: Simulate the cognitive load experienced by a student with a set of prior known concepts while reading a document and compare with readability indices
```
import nltk
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
filename = 'starwiki'
concepts = {}
import pickle
# Loading extracted concepts from file (see concept_extraction.ipynb)
#concepts = {'sents':sents,'rawtxt':rawtxt,'sent_to_npflat':sent_to_npflat,'sent_to_tags':sent_to_tags,'sent_to_ltags':sent_to_ltags,'np_to_sent':np_to_sent,'Conceptdata':Conceptdata}
with open(filename + 'concepts.pickle', 'rb') as f:
    concepts = pickle.load(f)
# Loading idf dictionary (see Build_IDF_dictionary.ipynb)
with open('idf_dict.pickle', 'rb') as f1:
    idf_dict = pickle.load(f1)
sents = concepts['sents']
rawtxt = concepts['rawtxt']
sent_to_npflat = concepts['sent_to_npflat']
sent_to_tags= concepts['sent_to_tags']
sent_to_ltags = concepts['sent_to_ltags']
np_to_sent = concepts['np_to_sent']
Conceptdata = concepts['Conceptdata']
import math
def get_idf(wrd, totaldocs=10788):
    wrd = wrd.lower()
    return idf_dict.get(wrd, math.log(totaldocs))

def calc_cl_per_sentence(sent_to_npflat, maxidf=9.1):
    sent_to_clt = []
    for i in range(len(sent_to_npflat)):
        npinsent = sent_to_npflat[i]
        clt = 0
        for np in npinsent:
            tokens = np.split(' ')
            idf = 0
            for t in tokens:
                if t not in stop_words:
                    idf = idf + get_idf(t)/len(tokens)
            if (idf >= maxidf):
                clt = clt + 1
        sent_to_clt.append(clt)
    return sent_to_clt

def plot_clt():
    from matplotlib import pyplot as plt
    plt.xlabel('document sentence #')
    plt.ylabel('Load added to working memory by sentence')
    plt.title('Cognitive Load for ' + filename)
    plt.plot(list(range(1, len(sent_to_npflat)+1)), calc_cl_per_sentence(sent_to_npflat), drawstyle='steps')
    plt.savefig('cltfig1.png')
    return None

sent_to_clt = calc_cl_per_sentence(sent_to_npflat)
print('Minimum cognitive load sentence: ', sents[sent_to_clt.index(min(sent_to_clt))])
print('Maximum cognitive load sentence: ', sents[sent_to_clt.index(max(sent_to_clt))])
plot_clt()
```
## Readability indices
Readability is the ease with which a reader can understand a text, and is calculated through a number of different empirical equations.
1. Flesch–Kincaid readability tests
New reading ease score = 1.599·nosw − 1.015·sl − 31.517

where nosw = number of one-syllable words per 100 words and sl = average sentence length in words.
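As a rough illustration of the formula, here is a minimal sketch that computes the score using a naive vowel-group syllable counter (`naive_syllables` and `new_reading_ease` are hypothetical helper names, and the counter is much cruder than the `cmudict`-based one developed below):

```python
import re

def naive_syllables(word):
    """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
    word = word.lower()
    groups = re.findall(r'[aeiouy]+', word)
    if word.endswith('e') and len(groups) > 1:
        groups = groups[:-1]
    return max(1, len(groups))

def new_reading_ease(text):
    """New reading ease score = 1.599*nosw - 1.015*sl - 31.517."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    nosw = 100.0 * sum(1 for w in words if naive_syllables(w) == 1) / len(words)
    sl = len(words) / sentences
    return 1.599 * nosw - 1.015 * sl - 31.517

# A text of short one-syllable words scores as very easy to read
print(new_reading_ease("The cat sat on the mat. It was a big cat."))
```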
```
from nltk.corpus import cmudict
cmud = cmudict.dict()
def manual_syllable_count(phrase):
    vowels = {'a','e','i','o','u'}
    consonants = {'b','c','d','f','g','h','j','k','l','m','n','p','q','r','s','t','v','w','x','z'}
    y = {'y'}
    count_s = 0
    # syllables are counted in the middle and at the end, from a starting consonant
    # or y sound with a vowel sound following
    # at the start: vowel sounds from a,e,i,o,u and y are counted as 1 syllable regardless
    # at the end: a consonant-vowel ending with e is not counted; every other case,
    # including y as the vowel, is counted
    # getting the first letter
    first = phrase[0]
    # dividing the middle portion of the word into pairs
    pairs = [phrase[i:i+2] for i in range(len(phrase)-2)]
    # getting the ending pair
    end = phrase[len(phrase)-2:len(phrase)]
    if first in vowels|y:
        count_s = count_s + 1
    for p in pairs:
        if p[0] in consonants|y and p[1] in vowels|y:
            count_s = count_s + 1
    if end[0] in consonants|y and end[1] in {'a','i','o','u','y'}:
        count_s = count_s + 1
    return count_s

# Worked example: 'employee'
# 'e'  : 1 (starting vowel)
# 'em' : 0
# 'mp' : 0
# 'pl' : 0
# 'lo' : 1
# 'oy' : 0
# 'ye' : 1
# 'ee' : 0 (ending pair)
def syllable_count(phrase):
    toks = nltk.word_tokenize(phrase)
    count = 0
    for t in toks:
        syll_list = cmud.get(t, [[0]])[0]  # choosing the first listed pronunciation
        if syll_list == [0]:
            # word not in cmudict -> fall back to the manual heuristic
            count = count + manual_syllable_count(t)
        else:
            count = count + sum([1 for y in syll_list if y[-1].isdigit()])
    return count
print(syllable_count('rhythm'))
```
# Event data
One of the main benefits of working with kloppy is that it loads metadata together with the event data. This metadata includes teams (name, ground and provider id) and players (name, jersey number, optional position and provider id). Using this metadata, it becomes very easy to create an analysis that is readable by humans, because it includes names instead of only numbers.
This section shows how metadata is organized and some use-cases.
## Loading statsbomb data
The datasets module of kloppy makes it trivial to load statsbomb data. Keep in mind that by using the data you accept the license of the open-data project.
```
from kloppy import datasets
dataset = datasets.load("statsbomb", options={"event_types": ["pass", "shot"]})
```
## Exploring metadata
kloppy always loads the metadata for you and makes it available at the `metadata` property.
```
metadata = dataset.metadata
home_team, away_team = metadata.teams
```
After loading the data, the metadata can be used to iterate over teams and players. By default `metadata.teams` contains `[HomeTeam, AwayTeam]`. The `Team` and `Player` entities implement the `__str__` magic method to help you cast them to strings.
```
print(f"{home_team.ground} - {home_team}")
print(f"{away_team.ground} - {away_team}")
[f"{player} ({player.jersey_no})" for player in home_team.players]
# get provider id for team
f"statsbomb team id: {home_team.team_id} - {away_team.team_id}"
# same for the players
[f"{player} id={player.player_id}" for player in metadata.teams[0].players]
# get player from first event
player = dataset.events[0].player
print(player)
print(player.team)
print(f"Teams are comparable? {player.team == away_team}")
```
The `Team` and `Player` entities also implement the magic methods needed to use them as keys in dictionaries or put them in sets. This makes it easy to do some calculations and show the results without mapping a player_id to a name.
```
from collections import defaultdict
passes_per_player = defaultdict(list)
for event in dataset.events:
    if event.event_name == "pass":
        passes_per_player[event.player].append(event)

for player, passes in passes_per_player.items():
    print(f"{player} has {len(passes)} passes")
```
Now let's filter on home_team.
```
for player, passes in passes_per_player.items():
    if player.team == home_team:
        print(f"{player} has {len(passes)} passes")
```
## Use metadata when transforming to pandas dataframe
The metadata can also be used when transforming a dataset to a pandas dataframe. The `additional_columns` argument should be passed to `to_pandas`.
```
from kloppy import to_pandas
dataframe = to_pandas(dataset, additional_columns={
    'player_name': lambda event: str(event.player),
    'team_name': lambda event: str(event.player.team)
})
dataframe[[
'event_id', 'event_type', 'result', 'timestamp', 'player_id',
'player_name', 'team_name'
]].head()
```
# Introduction
In this article, we discuss how to construct a Geometric Brownian Motion (GBM) simulation using Python. While building the script, we also explore the intuition behind the GBM model. I will not go into the theoretical background of its derivation; that is beyond the scope of this article. I care more about giving a high-level understanding of what GBM needs as parameters, what its components are and how it creates predictions. I will take a bottom-up approach and build up the logic of GBM starting from its components. The simulation model we develop here is a discrete-time model; therefore, all the mathematics discussed here is the discrete-time analogy of Geometric Brownian Motion for continuous stochastic processes. By the end of this article, you will know how to create simulations using GBM and you will have the full code for a GBM simulation.
# Content
1. What does GBM do?
2. What input parameters do we need for GBM simulation?
3. The components of GBM: Drift and Diffusion
4. Building the discrete time GBM model
5. Making predictions in Python
## 1. What does GBM do?
I use E.ON's stock prices as an example throughout the article when explaining the related concepts. E.ON is an electric utility company based in Germany and one of the biggest in Europe. I retrieve its stock prices (in Euros) from the Xetra Exchange through the `quandl` Python package. Here is a link where you can display the stock prices: https://www.investing.com/equities/e.on-chart?cid=23453. In order to be able to use Quandl, you need to sign up and get an authorization token from its website, and you also need to install the `quandl` Python package. Assuming you have completed these steps, you can use the code below to extract the stock price data.
Geometric Brownian Motion is widely used to model stock prices in finance, and there is a reason why people choose it. In the line plot below, the x axis indicates the days between 1 Jan, 2019 and 31 Jul, 2019, and the y axis indicates the stock price in Euros. First, focus only on the major, longer-duration trends in the plot, disregarding the small fluctuations. You will notice that the stock price follows a wavy path: it increases from January to April, then decreases until mid-May, followed by another increase until mid-June, and finally decreases until the end of July. Now focus on the shorter-term fluctuations and investigate July. You can say that the general trend is downward, but the stock price doesn't just fall in a smooth manner. During its journey to the end of July, it goes up and down continuously, which indicates that there is no interpretable pattern in this movement.
So, we have already identified two components of the stock price movement. In July,
1. E.ON's stock price tends to fall, and
2. there are random shocks each day to the falling stock price that result in an irregular line plot.
As a result, we need a suitable model that takes into account both types of movement in the stock price. This is where Geometric Brownian Motion comes into play. GBM has two components that do this job: one incorporates the long-term trend, while the other applies random shocks. We will talk about these in later sections.
```
import pandas as pd
import numpy as np
import quandl
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# Plot of E.ON(a big scale energy company in Europe) stock prices since beginning of 2019 (up to July)
prices = quandl.get("FSE/EON_X", authtoken="your_authorization_token",
start_date = '2019-01-01', end_date = '2019-07-31').reset_index(drop = False)[['Date', 'Close']]
plt.figure(figsize = (15, 5))
plt.plot(prices['Date'], prices['Close'])
plt.xlabel('Days')
plt.ylabel('Stock Prices, €')
plt.show()
```
## 2. What input parameters do we need for GBM simulation?
Below are the input parameters that our GBM simulation model will take. We will discuss each of them one by one in detail. For the sake of this article, I will use E.ON's stock prices of July to make predictions for August. Also, you should note that I talk about trading days when I explain things using dates. We assume that a week is composed of only the weekdays, meaning that what comes after Friday is Monday. So, I want you to keep these in mind while reading about the input parameters.
```
# Parameter Definitions
# So    : initial stock price
# dt    : time increment -> a day in our case
# T     : length of the prediction time horizon (how many time points to predict, same unit as dt (days))
# N     : number of time points in the prediction time horizon -> T/dt
# t     : array for time points in the prediction time horizon [1, 2, 3, .., N]
# mu    : mean of historical daily returns
# sigma : standard deviation of historical daily returns
# b     : array for brownian increments
# W     : array for brownian path
```
We retrieve historical stock prices between start_date and end_date. Then using our GBM model, we will get our simulations until pred_end_date. We can play with these variables and create different settings.
```
start_date = '2019-07-01'
end_date = '2019-07-31'
pred_end_date = '2019-08-31'
# We get daily closing stock prices of E.ON for July 2019
S_eon = quandl.get("FSE/EON_X", authtoken="your_authorization_token",
start_date = start_date, end_date = end_date).reset_index(drop = False)[['Date', 'Close']]
print(S_eon.head())
print(S_eon.tail())
```
Note that stock prices are retrieved only for the trading days, as you can see from the data above.
### 1. So
This is the initial stock price. Forward stock price simulations grow from this initial value: we will multiply it by certain expressions to get predictions for each of the trading days in August. In our case, So is the closing stock price on July 31, 2019.
```
So = S_eon.loc[S_eon.shape[0] - 1, "Close"]
print(So)
```
### 2. dt
This is the time increment in our model. It means the time unit that we assume. Remember that we have the closing prices of E.ON stocks for each trading day, then the time increment in our model is 1 day. Please note that dt follows from the time steps in historical data. If the stock price data is published every 7.2 hours, then dt would be 0.3 since it corresponds to 7.2 hours = 0.3 days.
```
dt = 1
print(dt)
```
### 3. T
T denotes the length of our prediction time horizon. We should be careful and consistent when specifying values for T and dt. For example, in our case, T should be 22 days since we want predictions for 22 trading days of August and when assigning a value to dt, following from our declaration of T, we should remember that dt must be represented in terms of days. The time unit for these two parameters has to be the same. Now, looking at a different example, suppose we have two stock price values for each trading day in our data and we know that we will make predictions for the 22 trading days in August. Under this setting, our time increment would be 0.5 days and we would assign 0.5 to dt. So, we would have 44 predictions after applying GBM, two predictions for each trading day in August.
Remember the example in the previous part where we supposed stock price data is published every 7.2 hours. If we want our prediction time horizon to be 72 hours(T = 72), then the appropriate dt would be 7.2 hours itself, without needing to convert it to 0.3 days. So, at the end, we would have 72 / 7.2 = 10 predictions with 7.2 hours between each of them.
If you follow this idea when building and using a GBM model, it becomes a lot easier to use your model for different equities under different settings. This feels confusing to many people and that's why I try to standardize it like this here :) The ultimate point we are trying to reach is calculating N (explained in the next part) correctly. It is the number of time steps within our prediction horizon, consistent with our historical data in terms of time increment magnitude. I know it's a simple thing, but building a line of reasoning is always a good idea to prevent potential confusion in future applications of our code.
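The two example settings above can be captured in a tiny helper (a sketch; `n_steps` is a hypothetical name):

```python
def n_steps(T, dt):
    """Number of prediction time points for horizon T and increment dt,
    both expressed in the same time unit."""
    return int(round(T / dt))

# 22 trading days with two prices per day (dt = 0.5 days) -> 44 predictions
print(n_steps(22, 0.5))
# a 72-hour horizon with one price every 7.2 hours -> 10 predictions
print(n_steps(72, 7.2))
```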
We can infer the number of trading days in August using the pred_end_date variable we declared at the beginning of this Section. With the code below, even if we change pred_end_date, we can extract the number of time points to be predicted by counting the weekdays between end_date + 1 day and pred_end_date. What we need in our case is the number of trading days between 1 Aug, 2019 and 31 Aug, 2019.
```
n_of_wkdays = pd.date_range(
    start=pd.to_datetime(end_date, format="%Y-%m-%d") + pd.Timedelta('1 days'),
    end=pd.to_datetime(pred_end_date, format="%Y-%m-%d")
).to_series().map(lambda x: 1 if x.isoweekday() in range(1, 6) else 0).sum()
T = n_of_wkdays
print(T)
```
### 4. N
This parameter comes automatically after assignment of dt and T. It is the number of time points in the prediction time horizon. In our case, our time increment is 1 day and we will get predictions for 22 trading days. This means, we have 22 different time points(days) and we will have 22 predictions at the end.
```
N = T / dt
print(N)
```
### 5. t
This is an array that tracks the time progression in our model. It works like a time ticker, measuring time by counting the number of time points elapsed. Building the array t follows from the calculation of N. Remember that we have 22 time points (days) to make predictions for; that's why the array t starts at 1 and goes up to 22. When we use an element from array t, it tells us how many time points have elapsed in the model. As our predictions get further away from the date of So (the starting day), they will fluctuate more, because more time points have elapsed. In our case, t is the array below.
Some other examples:
A. If dt is 0.5 days(two stock prices for each day) and T is 22 days, t would be [1 2 3 ... 44]
B. If dt is 7.2 hours and T is 72 hours, t would be [1 2 3 ... 10]
```
t = np.arange(1, int(N) + 1)
print(t)
```
### 6. mu
This is the mean return of the stock prices within the historical date range selected. In our case, we pick trading days of July as the historical data and we should calculate mu using stock prices of July. Before calculating mu, we should calculate the return for each trading day. The calculation is below.
$$r_k = \frac{S_k - S_{k-1}}{S_{k-1}}$$
```
returns = (S_eon.loc[1:, 'Close'] - S_eon.shift(1).loc[1:, 'Close']) / S_eon.shift(1).loc[1:, 'Close']
print(returns.tolist())
```
$$\mu = \frac{1}{n}\sum_{k=1}^{n} r_k$$
After calculating the returns for all of the days in July, we calculate their arithmetic average to obtain mu. We will then use mu in our drift component calculation; it will affect the long-term movement of the stock price. As you can see below, mu is approximately -0.3 %, which indicates that there is a negative return on average during July, and we should take this into account when calculating forward predictions for August.
```
mu = np.mean(returns)
print(mu)
```
One important thing to note is that we don't need to multiply the mean return by a number for adjustment. We stay loyal to the time increment of our historical data and create the prediction series the same way the historical data exists. To elaborate, suppose we retrieve stock price data every 7.2 hours, we have the data for the last 72 hours, and we want to make predictions for the following 36 hours. When we calculate the mean return from the historical data, that mean return accounts for the associated time step length of 7.2 hours (there are 7.2 hours between two data points). When making predictions, following the logic explained in the parts above, we will have 5 time periods (36/7.2) with 7.2 hours between them. This means we can use the mean return as mu in our model without adjusting it for the time step length between our prediction time points.
### 7. sigma
This is the standard deviation of the returns of the stock prices in July. Sigma is important because it helps incorporate random shocks into our predictions. Remember from the line plot of E.ON stock prices above that the stock price continuously goes up and down from one day to the next, and the magnitude and direction of this small movement seem random. Here, sigma helps us determine the magnitude of the movement. Sigma, by itself, doesn't add the randomness we need to the model; we will use a standard normal random variable to pick random values. Sigma contributes by scaling the magnitude of the random shock so that the small fluctuations occur in accordance with the historical volatility of the stock prices. We don't want irrelevant random values coming from the standard normal distribution. Below, you can see how sigma is calculated.
$$\sigma = \sqrt{\frac{1}{n}\sum_{k=1}^{n} \left(r_k - \mu\right)^2}$$
```
sigma = np.std(returns)
print(sigma)
```
Remember we said that we don't multiply mu with any number for adjustment, because we stay loyal to the length of time steps in the historical data when making predictions. The same exact logic applies here and we don't need to adjust sigma either.
### 8. b
This is the array where we add randomness to our model. The important thing here is the `scen_size` variable. Since this is a simulation model, we want many hypothetical paths for the stock price within the prediction time horizon. Assigning 2 to `scen_size` means that at the end we will have 2 different stock price series. The logic for each scenario is the same, so I will discuss it as if we are creating only 1 prediction series. Remember from the discussion of t that we declared an array for time progression which counts elapsed time points. Array b stores, for each corresponding prediction time point, a random number coming from the standard normal distribution. These random numbers add the random shocks, which we have been talking about since the beginning of the article, to the model. That's why we use numpy.random.normal() to generate random values from the standard normal distribution. Again, we don't multiply this random value by any number for adjustment, following the same reasoning as for mu and sigma.
*(Example: dt = 1, T = 5, N = 5; b = [z₁ z₂ z₃ z₄ z₅], with each zₖ drawn from the standard normal distribution.)*
In the example above, we assume our historical stock price data is daily, which is why we assign 1 to dt. We also want predictions for 5 days, so the value of T is 5. Following these, N is calculated as 5 time points (5 one-day periods). For each of these time points, we generate a random number from the standard normal distribution (mean = 0, sigma = 1) using the numpy.random.normal() method. These random values make up array b. In our case, T should be 22 days. Below is the code where we create 2 arrays of random values with length 22 for 2 different scenarios.
```
scen_size = 2
b = {str(scen): np.random.normal(0, 1, int(N)) for scen in range(1, scen_size + 1)}
print(b)
```
### 9. W
W is the Brownian path and it determines how the stock prices fluctuate from the beginning time point (So) to some other time point t. You should distinguish between b and W. The difference will become a lot clearer in the next section, but I still want to mention it briefly here. b is the random shock applied to the stock price at a time point when making the prediction for the stock price of the NEXT time point. So, suppose that at time point 3 the stock price is S_3. When making the prediction for time point 4, b(4) is applied to S_3 as the random shock. W, on the other hand, is THE PATH. It includes the effects of all the random shocks since the beginning of the prediction time horizon: it is the total randomness incorporated into So (the initial stock price) up to the specific time point we are concerned with. So, suppose we are making the prediction for time point 4. We need to apply all the random shocks up to and including time point 4 to So. Therefore, instead of b(4), we use W(4), which is the cumulative sum of the elements of array b with index less than or equal to 4. Below is the continuation of the example from the previous part where we discussed array b.
$$W_k = \sum_{i=1}^{k} b_i$$
```
W = {str(scen): b[str(scen)].cumsum() for scen in range(1, scen_size + 1)}
print(W)
```
Above is the code for our case. Following from array b calculation in the previous part, we take the cumulative sums according to W(k) expression above and create array W.
This concludes our discussion of the input parameters to GBM. We understood the reasoning behind each of them with examples, and in the next sections we will build the GBM model from its components. That will make the input parameters in this section more meaningful to you.
## 3. The components of GBM: Drift and Diffusion
Remember from Section 1 that we already identified the two components of Geometric Brownian Motion: one is the longer-term trend in the stock prices, and the other is the shorter-term random fluctuations. Now we will give them names: we call the longer-term trend Drift and the shorter-term fluctuations Diffusion. In this section, I want you to suppose that we are at a specific time point (k-1) and we are going to make a prediction for time point (k). The same assumptions we discussed in the previous section are still valid. Just as a reminder:
Assumption 1: length of time period between (k-1) and (k), which is dt, is in line with the historical data frequency.
Assumption 2: the time in our simulation progresses through counting time periods.
These assumptions let us avoid multiplying mu, sigma and z(k) by a time-length value for adjustment, and we can form our array t as a time-point counter [1 2 3 ...].
Let's move on. We are at time point (k-1). The stock price at this time point needs to follow the longer-term trend while being exposed to a random shock when we use it to create the prediction for time point (k). So, we need to apply the two components of GBM to this stock price.
Let's see what these components are in mathematical terms:
1. Drift:
$$\text{drift} = \mu - \frac{\sigma^2}{2}$$
Drift reflects the longer-term trend in the stock prices. As you recall from the previous section, we already know mu and sigma; they are constant values calculated from the historical stock price data of July. So our drift is a constant value, and if we suppose there is no random shock, we can apply it to the stock price at time point (k-1) as in the expression above. If we keep applying the drift without any random shock, the stock price goes up smoothly if the drift is positive or down if it is negative. Notice that the stock price can never become 0, since the drift is applied through the exponential function. This is the story of drift. Now let's move on to diffusion.
2. Diffusion:
$$\text{diffusion}_k = \sigma\, z_k, \qquad z_k \sim N(0, 1)$$
Diffusion reflects the shorter-term fluctuations. As you recall from Section 2, array b stores the random shock information we need, and it retrieves that information from the standard normal random variable z. When calculating the diffusion component, we multiply the random value z(k) by sigma. Here, you can see how the randomness is incorporated into the GBM model. The diffusion component also makes it possible to create different stock price prediction scenarios. The drift component is constant, so we can't create different scenarios out of it, but the diffusion component lets us create as many scenarios as we want, since it involves a Wiener process (which creates independent, stationary and normally distributed random shocks).
So, when making the prediction for time point (k) given the stock price at time point (k-1), we need to add the effects of both components. The total combined effect gives us the prediction for time point (k). If there were only the drift component, we would always know what the stock price would become at the next time point, but the diffusion component disturbs this smoothness by introducing random shocks, so we don't observe a monotonous increase or decrease in the stock prices. Below is how the total combined effect is applied to the stock price at time point (k-1).

## 4. Building the discrete time GBM model
Up to now, we learnt why we need GBM, what the parameters of a discrete-time GBM model are, and how to make a prediction for one time point ahead. Now, we build the generic closed-form equation of Geometric Brownian Motion adjusted for the discrete-time context. In the previous section, we showed how to predict S(k) by multiplying S(k-1) with exp(drift + diffusion). Continuing from this, we will try to predict S(k) using So (the initial stock price, which is known).

To do that, we first replace S(k-1) with S(k-2)*exp(drift(k-1) + diffusion(k-1)).
Then we replace S(k-2) with S(k-3)*exp(drift(k-2) + diffusion(k-2)).
If we continue doing this, at the end we will have So multiplied by many exponential terms. You can see what we obtain at the end of this procedure below.
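In symbols, the repeated substitution telescopes into a single product of exponentials (a reconstruction of the missing equation, written in LaTeX using the article's notation):

```
S(k) = S_0 \prod_{i=1}^{k} e^{\,\mathrm{drift}(i) + \mathrm{diffusion}(i)}
     = S_0 \, \exp\!\left[\sum_{i=1}^{k} \mathrm{drift}(i) + \sum_{i=1}^{k} \mathrm{diffusion}(i)\right]
```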

After that, we extend the equation above, expressing drift and diffusion terms separately this time.

Above is the penultimate equation for S(k). It means we can predict S(k) from So by adding the combined effect of k drifts and the cumulative diffusion up to k. From So to S(k) there are k time points, which we call the path. Remember from the previous section that the drift doesn't change from one time point to another; that's why, to predict k time points ahead, we just add the drift k times. This is different for diffusion. In Section 2, when discussing W, we learnt that it is the Brownian path. Based on that discussion, to predict k time points ahead, we should consider the total effect of all the random shocks encountered on the path. In the last step of building the equation, we modify the equation just above by making the replacements below.

This gives us the final form of the equation, with parameters that are all familiar to us, and I hope the discussions in Section 2 now help you understand why we created those parameters the way we did.
We know So, we know mu, we know sigma, we know array t and we know array W. We can easily create predictions for all the time points in our prediction time horizon in a single step. We will do this in the next section.
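For reference, the final closed-form equation can be written as follows (a reconstruction in LaTeX, consistent with the drift and diffusion code in the next section; here t_k is the k-th entry of array t and W(t_k) the k-th entry of array W):

```
S(k) = S_0 \exp\!\left[\left(\mu - \tfrac{1}{2}\sigma^2\right) t_k + \sigma\, W(t_k)\right],
\qquad t_k = k\,\Delta t,\quad W(t_k) = \sum_{i=1}^{k} z_i
```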

## 5. Making predictions in Python
This is the last section of our article, and it is the most fun part: we can now create the predictions. The first thing to do is to calculate the drift for all the time points in the prediction time horizon. You already know array t; we just multiply it by the drift and get an array of drifts. This drift array contains the total drift for each time point in the prediction time horizon.
For diffusion, we will have a diffusion array for each scenario. Remember, we control how many scenarios we want using the "scen_size" variable. For this problem we only have 2 scenarios, but you can go as high as you want. That's why you see 2 different diffusion arrays, labeled '1' and '2'. The logic is the same for each of them: we multiply the Brownian path (array W) by sigma.
```
drift = (mu - 0.5 * sigma**2) * t
print(drift)
diffusion = {str(scen): sigma * W[str(scen)] for scen in range(1, scen_size + 1)}
print(diffusion)
```
Below are the predictions. Since we have 2 scenarios, we will have two stock price prediction series as a result of the operation below. The first line of code is just the final GBM equation we derived in the previous section. The second line prepends So to each prediction series, since it is the starting point and we may want to see it in our plots.
```
S = np.array([So * np.exp(drift + diffusion[str(scen)]) for scen in range(1, scen_size + 1)])
S = np.hstack((np.array([[So] for scen in range(scen_size)]), S))
print(S)
```
As the last thing to do, we now create the plot of our predictions. As you can see below, we have 2 stock price prediction scenarios: one goes up and the other goes down. The drift in our problem is negative, as you can observe from the drift array. This means the longer-term trend of the stock prices is downward (remember, we are only considering July as the historical data; I am speaking based on that!). However, in spite of a negative drift, we have a prediction that goes up. This is due to the random shocks. Remember, we create them using a standard normal random variable. In the scenario where prices go up, these random shocks clearly exceeded the drift effect and resulted in such a series.
```
# Plotting the simulations
plt.figure(figsize = (20, 10))
for i in range(scen_size):
    plt.title("Daily Volatility: " + str(sigma))
    plt.plot(pd.date_range(start = S_eon["Date"].max(), end = pred_end_date, freq = 'D').map(
        lambda x: x if x.isoweekday() in range(1, 6) else np.nan).dropna(), S[i, :])
plt.ylabel('Stock Prices, €')
plt.xlabel('Prediction Days')
plt.show()
```
# Conclusion
In this article, we learnt how to build a simulation model for stock prices using Geometric Brownian Motion in a discrete-time context. Below is the full code. Once you put in your authorization token from Quandl (obtained after registration) and install the required Python packages, you can use the code right away. In each run, you will get different stock price scenarios, because we use np.random.normal() without setting a seed. Each time we run the model, we get a different array W, resulting in different predictions.
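If you do want reproducible runs, a minimal sketch is to fix the seed before drawing array b (this is an optional tweak, not part of the article's code, which deliberately leaves the seed unset):

```python
import numpy as np

# Fixing the seed makes np.random.normal() return the same shocks every run
np.random.seed(42)
sample_a = np.random.normal(0, 1, 5)

np.random.seed(42)
sample_b = np.random.normal(0, 1, 5)
# sample_a and sample_b are identical, so the simulated paths would be too
```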
Geometric Brownian Motion helps us see what paths stock prices may follow and lets us prepare for what is coming. Of course, it is never possible to predict the exact future, but these statistical methods give us a chance to create sound trading and hedging strategies that we can rely on.
```
import pandas as pd
import numpy as np
import quandl
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
stock_name = 'FSE/EON_X'
# Plot of E.ON(a big scale energy company in Europe) stock prices since beginning of 2019 (up to July)
prices = quandl.get(stock_name,
authtoken="py3UYy43X9dTYJb7X6es",
start_date = '2019-01-01',
end_date = '2019-07-31').reset_index(
drop = False)[['Date', 'Close']]
plt.figure(figsize = (15, 5))
plt.plot(prices['Date'], prices['Close'])
plt.xlabel('Days')
plt.ylabel('Stock Prices, €')
plt.show()
#--------------------------------------------------- GEOMETRIC BROWNIAN MOTION ------------------------------------------------
# Parameter Definitions
# So : initial stock price
# dt : time increment -> a day in our case
# T : length of the prediction time horizon(how many time points to predict, same unit with dt(days))
# N : number of time points in the prediction time horizon -> T/dt
# t : array for time points in the prediction time horizon [1, 2, 3, .. , N]
# mu : mean of historical daily returns
# sigma : standard deviation of historical daily returns
# b : array for brownian increments
# W : array for brownian path
start_date = '2019-07-01'
end_date = '2019-07-31'
pred_end_date = '2019-08-31'
# We get daily closing stock prices of E.ON for July 2019
S_eon = quandl.get(stock_name,
authtoken="py3UYy43X9dTYJb7X6es",
start_date = start_date,
end_date = end_date).reset_index(
drop = False)[['Date', 'Close']]
print(S_eon.head())
print(S_eon.tail())
returns = (S_eon.loc[1:, 'Close'] - \
S_eon.shift(1).loc[1:, 'Close']) / \
S_eon.shift(1).loc[1:, 'Close']
print(returns.tolist())
# Parameter Assignments
So = S_eon.loc[S_eon.shape[0] - 1, "Close"]
dt = 1 # day # User input
n_of_wkdays = pd.date_range(start = pd.to_datetime(end_date,
format = "%Y-%m-%d") + pd.Timedelta('1 days'),
end = pd.to_datetime(pred_end_date,
format = "%Y-%m-%d")).to_series().map(lambda x:
1 if x.isoweekday() in range(1,6) else 0).sum()
T = n_of_wkdays # days # User input -> follows from pred_end_date
N = T / dt
t = np.arange(1, int(N) + 1)
mu = np.mean(returns)
sigma = np.std(returns)
scen_size = 50 # User input
b = {str(scen): np.random.normal(0, 1, int(N)) for scen in range(1, scen_size + 1)}
W = {str(scen): b[str(scen)].cumsum() for scen in range(1, scen_size + 1)}
# Calculating drift and diffusion components
drift = (mu - 0.5 * sigma**2) * t
print(drift)
diffusion = {str(scen): sigma * W[str(scen)] for scen in range(1, scen_size + 1)}
print(diffusion)
# Making the predictions
S = np.array([So * np.exp(drift + diffusion[str(scen)]) for scen in range(1, scen_size + 1)])
S = np.hstack((np.array([[So] for scen in range(scen_size)]), S)) # add So to the beginning of each series
print(S)
# Plotting the simulations
plt.figure(figsize = (20, 10))
for i in range(scen_size):
    plt.title("Daily Volatility: " + str(sigma))
    plt.plot(pd.date_range(start = S_eon["Date"].max(), end = pred_end_date, freq = 'D').map(
        lambda x: x if x.isoweekday() in range(1, 6) else np.nan).dropna(), S[i, :])
plt.ylabel('Stock Prices, €')
plt.xlabel('Prediction Days')
plt.show()
# Dataframe format for predictions - first 10 scenarios only
Preds_df = pd.DataFrame(S.swapaxes(0, 1)[:, :10]).set_index(
pd.date_range(start = S_eon["Date"].max(), end = pred_end_date, freq = 'D').map(lambda x:
x if x.isoweekday() in range(1, 6) else np.nan).dropna()
).reset_index(drop = False)
```
# **03_gen_supplement.ipynb**:
This IPython notebook interprets MAC results. Running the entire script generates a single file containing a summary of MAC results:
- `ENSEMBLE_DIR/supplementary/Supplementary File XYZ.xlsx`
Unlike the 01 and 02 .py scripts, this code replaces previous results rather than adding to them. It must be run after the 01 and 02 scripts, and the results become interpretable with more than 1000 samples.
```
import numpy as np
import pandas as pd
from cobra.io import load_json_model
import sys,os,argparse,ast,resource,warnings,itertools  # ast is used by get_allele_sigreacts below
from os import listdir
from mnc_utils import func_convert_gene2name_df, get_mnc_direction, correct_pvals, get_sig_feat_df, get_flux_df_anova, get_sprice_df_anova, get_pathway_enrichments
from tqdm import tqdm
from cobrascape.species import load_json_obj, save_json_obj
import cobrascape.ensemble as ens
from scipy import stats
```
# Specify ENSEMBLE_DIR and pheno_list
gene_to_pathways is recommended for pathway analysis.
```
ENSEMBLE_DIR = "mnc_ensemble_0"
pheno_list = ["pyrazinamide"]
gene_to_pathways = load_json_obj("cobra_model/gene_to_pathways_filt.json")
## TESTSET is set to False automatically since the interpretations are available for the training set, not test set.
## The test set is used in validating the MACs
TESTSET = False
## SPRICE_RCOST_BOOL determines whether the median shadow price and reduced cost is calculated per strain. This
## step is very time consuming so the default is False. Only median fluxes are determined.
SPRICE_RCOST_BOOL = False
```
Set other parameters
```
bic_thresh_val = 10
save_data=True
med_or_mean='median'#"mean"#'median'
obj_direct_norm = True
ENSEMBLE_MAP_ASSESS = ENSEMBLE_DIR+"/popfva_assessment/"
SUPP_FILE_LOC = ENSEMBLE_DIR+"/supplementary/"
if not os.path.exists(SUPP_FILE_LOC):
    print('\t... creating directory for supplementary data: '+SUPP_FILE_LOC)
    os.makedirs(SUPP_FILE_LOC)
ensemble_args_dict = load_json_obj(ENSEMBLE_DIR+"/mnc_ensemble_args.json")
action_num = ensemble_args_dict["action_num"] # 4
ADD_NA_BOUND = ensemble_args_dict["nabound"] # False
STANDARDIZE_ = ensemble_args_dict["popFVA_STANDARDIZE"] # False
print("action_num (%d), nabound (%s), standardize (%s)"%(action_num, str(ADD_NA_BOUND),str(STANDARDIZE_)))
COBRA_MODEL = load_json_model(ENSEMBLE_DIR+"/base_cobra_model.json")
print("input: (S)toichiometric genome-scale model= (genes: %d, reactions: %d, metabolites: %d)" % (len(COBRA_MODEL.genes),
      len(COBRA_MODEL.reactions), len(COBRA_MODEL.metabolites)))
```
# Analyze MACs and generate results tables
```
##########################################
# 1. BIC of MACs
##########################################
def load_samples_assess_df(ENSEMBLE_MAP_ASSESS, pheno_list):
    ### -------------- LOAD 2 -----------------
    print("...loading SAMPLES_ASSESS_DF to identify minimum BIC or AIC MNCs")
    onlyfiles = [f for f in listdir(ENSEMBLE_MAP_ASSESS) if os.path.isfile(os.path.join(ENSEMBLE_MAP_ASSESS, f))]
    onlyfiles = [f for f in onlyfiles if f != ".DS_Store"]
    samplesAfter = [f for f in onlyfiles if "sample_" in f]
    wanted_keys = []
    ### Options for what we want in SAMPLES_ASSESS_DF
    for pheno_id in pheno_list:
        wanted_keys.extend(["AIC_"+pheno_id, "BIC_"+pheno_id])
    SAMPLES_ASSESS_DF = {}
    for landscape_sample_name in tqdm(samplesAfter):
        landscape_sample_num = landscape_sample_name.split("_")[1]
        sample_id = "sampled_map_"+str(landscape_sample_num)
        landscape_assess_sample_file = ENSEMBLE_MAP_ASSESS+landscape_sample_name
        if os.path.exists(landscape_assess_sample_file):
            landscape_assess_sample = load_json_obj(landscape_assess_sample_file)
            SAMPLES_ASSESS_DF[sample_id] = {}
            SAMPLES_ASSESS_DF[sample_id].update(dict((k, landscape_assess_sample[k]) for k in wanted_keys if k in landscape_assess_sample))
    # transform to pandas dataframe
    SAMPLES_ASSESS_DF = pd.DataFrame.from_dict(SAMPLES_ASSESS_DF, orient="index")
    print("\t... SAMPLES_ASSESS_DF shape: (samples: %d, assess_cols: %d)" % (SAMPLES_ASSESS_DF.shape[0], SAMPLES_ASSESS_DF.shape[1]))
    return SAMPLES_ASSESS_DF
SAMPLES_ASSESS_DF = load_samples_assess_df(ENSEMBLE_MAP_ASSESS, pheno_list)
for pheno_id in pheno_list:
    top_models = SAMPLES_ASSESS_DF["BIC_"+pheno_id].copy()
    top_models.sort_values(inplace=True)
    top_models.to_csv(ENSEMBLE_DIR+"/tables/best_mncs_"+pheno_id+".csv", header=True)
##########################################
# 2. Objective functions of MACs
##########################################
for pheno_id in pheno_list:
    ### Get sample ids for best antibiotic-specific MNCs
    top_models = pd.read_csv(ENSEMBLE_DIR+"/tables/best_mncs_"+pheno_id+".csv", index_col=0)
    top_models = top_models[top_models - top_models.min() < bic_thresh_val].dropna()
    sample_list = top_models.index.tolist()
    print("Number of %s MACs: %d" % (pheno_id, len(sample_list)))
    obj_allsamples_df = pd.DataFrame()
    obj_direction_minmax = "max"
    r_filt_allsamples_dict = {}
    for sample_id in tqdm(sample_list):
        obj_sample_fn = ENSEMBLE_DIR+"/mnc_objectives/"+"obj_"+sample_id+"__"+pheno_id+".json"
        r_obj_sample_dict = load_json_obj(obj_sample_fn)
        r_filt_allsamples_dict.update(r_obj_sample_dict)
    obj_allsamples_df = pd.DataFrame.from_dict(r_filt_allsamples_dict, orient="index")
    ### Align the objectives in high-quality models by having them all be maximizing.
    obj_hqsamples_max_df = obj_allsamples_df.copy()  # .fillna(0)
    for sampled_map_id, coef_row in obj_allsamples_df.iterrows():
        sample_id = "sample_"+sampled_map_id.split("_")[-1]
        obj_direct = get_mnc_direction(pheno_id, sample_id, ENSEMBLE_DIR)
        ### If the direction is a minimization, multiply all objective coefficients by -1.
        if obj_direct[0] == "min":
            obj_hqsamples_max_df.loc[sampled_map_id] = -1*obj_hqsamples_max_df.loc[sampled_map_id]
    obj_hqsamples_abs_df = abs(obj_hqsamples_max_df)
    obj_avg_df = obj_hqsamples_max_df.apply(lambda x: x.mean())
    obj_med_df = obj_hqsamples_max_df.apply(lambda x: x.median())
    obj_avg_abs_df = obj_hqsamples_abs_df.apply(lambda x: x.mean())
    obj_med_abs_df = obj_hqsamples_abs_df.apply(lambda x: x.median())
    if save_data==True:
        obj_hqsamples_max_df.to_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__MAX"+".csv", header=True)
        obj_hqsamples_abs_df.to_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__MAX-ABS"+".csv", header=True)
        obj_avg_df.to_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__avg"+".csv", header=True)
        obj_med_df.to_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__med"+".csv", header=True)
        obj_avg_abs_df.to_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__avg-abs"+".csv", header=True)
        obj_med_abs_df.to_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__med-abs"+".csv", header=True)
if TESTSET==False:
    print("...loading TRAINING data...")
    X_species_final = pd.read_csv(ENSEMBLE_DIR+"/X_train.csv", index_col=0)
    Y_pheno_final = pd.read_csv(ENSEMBLE_DIR+"/y_train.csv", index_col=0)
    file_outtag = "train"
elif TESTSET==True:
    print("...loading TEST data...")
    X_species_final = pd.read_csv(ENSEMBLE_DIR+"/X_test.csv", index_col=0)
    Y_pheno_final = pd.read_csv(ENSEMBLE_DIR+"/y_test.csv", index_col=0)
    file_outtag = "test"
print("input: (G)enetic variant matrix= (strains: %d, alleles: %d)" % (X_species_final.shape[0], X_species_final.shape[1]))
### Load in the genetic variant matrix and AMR phenotypes for each case.
pheno_to_data2d_dict = {}
pheno_to_Y_dict = {}
if TESTSET==False:
    ALLELE_PHENO_FILE = ENSEMBLE_DIR+"/allele_pheno_data/"
    for pheno_id in pheno_list:
        G_VARIANT_MATRIX_FILE = ALLELE_PHENO_FILE+"/allele_df_"+pheno_id+".csv"
        PHENO_MATRIX_FILE = ALLELE_PHENO_FILE+"/pheno_df_"+pheno_id+".csv"
        pheno_to_data2d_dict.update({pheno_id: pd.read_csv(G_VARIANT_MATRIX_FILE, index_col=0)})
        pheno_to_Y_dict.update({pheno_id: pd.read_csv(PHENO_MATRIX_FILE, index_col=0)[pheno_id]})  ### series
elif TESTSET==True:
    for pheno_id in pheno_list:
        X_filtered, Y_filtered = cs.filter_pheno_nan(X_species_final, Y_pheno_final, pheno_id)
        pheno_to_data2d_dict.update({pheno_id: X_filtered.loc[:, allele_list]})
        pheno_to_Y_dict.update({pheno_id: Y_filtered})
##########################################
# 3. Flux states of MACs (may take a while...)
##########################################
for pheno_id in pheno_list:
    top_models = pd.read_csv(ENSEMBLE_DIR+"/tables/best_mncs_"+pheno_id+".csv", index_col=0)
    top_models = top_models[top_models - top_models.min() < bic_thresh_val].dropna()
    sample_list = top_models.index.tolist()
    print("Number of %s MACs: %d" % (pheno_id, len(sample_list)))
    sig_flux_df, sol_dict = get_sig_feat_df(ENSEMBLE_DIR, pheno_id, sample_list, pheno_to_Y_dict,
                                            feat_type="pop_fluxes", med_or_mean=med_or_mean)
    sample_list = ["sample_"+x.split("_")[-1] for x in sample_list]
    popfva_longform_all_df = pd.DataFrame()
    flux_sol_all_df, sprice_sol_all_df, rcosts_sol_all_df = pd.DataFrame(), pd.DataFrame(), pd.DataFrame()
    for sample_id in sample_list[:]:
        obj_direction, _ = get_mnc_direction(pheno_id, sample_id, ENSEMBLE_DIR)
        flux_sol_df = sol_dict[sample_id][pheno_id]["sol"]["pop_fluxes"].copy()
        sprice_sol_df = sol_dict[sample_id][pheno_id]["sol"]["pop_sprices"].copy()
        rcosts_sol_df = sol_dict[sample_id][pheno_id]["sol"]["pop_rcosts"].copy()
        if obj_direction=="min" and obj_direct_norm==True:
            flux_sol_df = 1 - flux_sol_df
        sampled_map_num = "sampled_map_"+str(sample_id.split("_")[1])
        flux_sol_df["index"] = flux_sol_df.index
        flux_sol_df[pheno_id] = pheno_to_Y_dict[pheno_id]
        flux_sol_all_df = pd.concat([flux_sol_all_df, flux_sol_df], axis=0, ignore_index=True)
        sprice_sol_df["index"] = sprice_sol_df.index
        sprice_sol_df[pheno_id] = pheno_to_Y_dict[pheno_id]
        sprice_sol_all_df = pd.concat([sprice_sol_all_df, sprice_sol_df], axis=0, ignore_index=True)
        rcosts_sol_df["index"] = rcosts_sol_df.index
        rcosts_sol_df[pheno_id] = pheno_to_Y_dict[pheno_id]
        rcosts_sol_all_df = pd.concat([rcosts_sol_all_df, rcosts_sol_df], axis=0, ignore_index=True)
    ### Average the fluxes per strain and perform the MNC flux GWAS
    strain_flux_avg_df = flux_sol_df.copy()
    strain_flux_med_df = flux_sol_df.copy()
    for strain in tqdm(flux_sol_df.index.unique()[:]):
        strain_flux_avg_df.loc[strain] = flux_sol_all_df[(flux_sol_all_df["index"]==strain)].apply(lambda x: x.mean(), axis=0)
        strain_flux_med_df.loc[strain] = flux_sol_all_df[(flux_sol_all_df["index"]==strain)].apply(lambda x: x.median(), axis=0)
    strain_flux_avg_df["index"] = strain_flux_avg_df.index
    strain_flux_avg_df[pheno_id] = pheno_to_Y_dict[pheno_id]
    strain_flux_med_df["index"] = strain_flux_med_df.index
    strain_flux_med_df[pheno_id] = pheno_to_Y_dict[pheno_id]
    strain_flux_avg_df.to_csv(ENSEMBLE_DIR+"/tables/strain_flux_avg_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
    strain_flux_med_df.to_csv(ENSEMBLE_DIR+"/tables/strain_flux_med_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
    popdf_mean_flux_anova_df = get_flux_df_anova(strain_flux_avg_df.dropna(), pheno_id, sol_dict[sample_id][pheno_id]["anova"]["pop_fluxes"])
    popdf_median_flux_anova_df = get_flux_df_anova(strain_flux_med_df.dropna(), pheno_id, sol_dict[sample_id][pheno_id]["anova"]["pop_fluxes"])
    popdf_mean_flux_anova_df.to_csv(ENSEMBLE_DIR+"/tables/strain_flux_avg_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
    popdf_median_flux_anova_df.to_csv(ENSEMBLE_DIR+"/tables/strain_flux_med_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
    ### Average the metabolite shadow prices per strain and perform the MNC shadow-price GWAS
    if SPRICE_RCOST_BOOL==True:
        strain_sprice_avg_df = sprice_sol_df.copy()
        strain_sprice_med_df = sprice_sol_df.copy()
        for strain in tqdm(sprice_sol_df.index.unique()[:]):
            strain_sprice_avg_df.loc[strain] = sprice_sol_all_df[(sprice_sol_all_df["index"]==strain)].apply(lambda x: x.mean(), axis=0)
            strain_sprice_med_df.loc[strain] = sprice_sol_all_df[(sprice_sol_all_df["index"]==strain)].apply(lambda x: x.median(), axis=0)
        strain_sprice_avg_df["index"] = strain_sprice_avg_df.index
        strain_sprice_avg_df[pheno_id] = pheno_to_Y_dict[pheno_id]
        strain_sprice_med_df["index"] = strain_sprice_med_df.index
        strain_sprice_med_df[pheno_id] = pheno_to_Y_dict[pheno_id]
        strain_sprice_avg_df.to_csv(ENSEMBLE_DIR+"/tables/strain_sprice_avg_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
        strain_sprice_med_df.to_csv(ENSEMBLE_DIR+"/tables/strain_sprice_med_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
        popdf_mean_sprice_anova_df = get_sprice_df_anova(strain_sprice_avg_df.dropna(), pheno_id, sol_dict[sample_id][pheno_id]["anova"]["pop_sprices"])
        popdf_median_sprice_anova_df = get_sprice_df_anova(strain_sprice_med_df.dropna(), pheno_id, sol_dict[sample_id][pheno_id]["anova"]["pop_sprices"])
        popdf_mean_sprice_anova_df.to_csv(ENSEMBLE_DIR+"/tables/strain_sprice_avg_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
        popdf_median_sprice_anova_df.to_csv(ENSEMBLE_DIR+"/tables/strain_sprice_med_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv")
##########################################
# 4. Pathway enrichments of MACs
##########################################
if type(gene_to_pathways)==dict:
    all_subsys_phenos = pd.DataFrame()
    SUBSYS_NUM_THRESHOLD = 2
    add_small_pval = 1e-20
    save_fig = False
    kegg_or_biocyc = "biocyc"  # or "kegg"
    if kegg_or_biocyc=="kegg":
        g_to_pathways = kegg_gene_to_paths
    elif kegg_or_biocyc=="biocyc":
        g_to_pathways = gene_to_pathways
    ### Loop through each drug and output flux data and pathway enrichments
    for pheno_id in pheno_list:
        strain_flux_med_df = pd.read_csv(ENSEMBLE_DIR+"/tables/strain_flux_med_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv", index_col=0)
        strain_flux_avg_df = pd.read_csv(ENSEMBLE_DIR+"/tables/strain_flux_avg_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv", index_col=0)
        popdf_mean_flux_anova_df = pd.read_csv(ENSEMBLE_DIR+"/tables/strain_flux_avg_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv", index_col=0)
        popdf_median_flux_anova_df = pd.read_csv(ENSEMBLE_DIR+"/tables/strain_flux_med_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv", index_col=0)
        popdf_median_flux_anova_df_correct = correct_pvals(popdf_median_flux_anova_df, pval_col="pvalue",
                                                           method="bonferroni", correct_alpha=0.05)  # or "fdr_bh"
        print(popdf_median_flux_anova_df_correct.shape)
        if popdf_median_flux_anova_df_correct.shape[0]!=0:
            subsys_sig_median, react_to_subsys, subsys_to_sig_react = get_pathway_enrichments(ENSEMBLE_DIR,
                popdf_median_flux_anova_df, pheno_id, g_to_pathways, kegg_biocyc=kegg_or_biocyc, med_or_med="median",
                pval_thresh=popdf_median_flux_anova_df_correct["pvalue"].max()+add_small_pval,
                save_data=True)
            subsys_sig_median_filt = subsys_sig_median[subsys_sig_median["TOTAL_SUBSYS_NUM"]>SUBSYS_NUM_THRESHOLD]
            subsys_sig_median_FDR = correct_pvals(subsys_sig_median_filt, pval_col="subsys_pval_"+pheno_id, method="fdr_bh", correct_alpha=0.05)
            subsys_sig_median_FDR.to_csv(ENSEMBLE_DIR+"/tables/pathway_enriched_FDR_"+pheno_id+".csv")
        else:
            print("No significant fluxes according to the ANOVA F-test. You need to generate more MAC samples!")
##########################################
# 5. Allele-constraint analysis of MACs
##########################################
def get_constraint_df(sample_list, pheno_to_data2d_dict, ENSEMBLE_MAP_ASSESS, gene_to_name=None, gene_name=True):
    """ Takes in a list of MNC samples and returns their allele-constraint maps (alleles vs samples dataframe).
        The values are in categorical terms, i.e. "lb_0", "ub_2", etc...
    """
    pheno_id = list(pheno_to_data2d_dict.keys())[0]  # arbitrary, just need one key in the dict
    allele_col_ids = [x for x in pheno_to_data2d_dict[pheno_id].columns]
    SAMPLES_AC_DF = {}
    for sample_id in sample_list:
        landscape_sample_num = sample_id.split("_")[-1]
        landscape_sample_name = "sample_"+str(landscape_sample_num)+"_map_assess.json"
        landscape_assess_sample_file = ENSEMBLE_MAP_ASSESS+landscape_sample_name
        if os.path.exists(landscape_assess_sample_file):
            landscape_assess_sample = load_json_obj(landscape_assess_sample_file)
            SAMPLES_AC_DF[sample_id] = {}
            SAMPLES_AC_DF[sample_id].update(dict((k, landscape_assess_sample[k]) for k in allele_col_ids if k in landscape_assess_sample))
    SAMPLES_AC_DF = pd.DataFrame.from_dict(SAMPLES_AC_DF, orient="index")
    if gene_name==True and gene_to_name!=None:
        SAMPLES_AC_DF = func_convert_gene2name_df(SAMPLES_AC_DF.T, gene_to_name)
        SAMPLES_AC_DF = SAMPLES_AC_DF[sample_list]
    else:
        SAMPLES_AC_DF = SAMPLES_AC_DF.T
        SAMPLES_AC_DF = SAMPLES_AC_DF[sample_list]
    return SAMPLES_AC_DF
def get_allele_lor_dict(allele_list, pheno_id, x_allele_dict, y_pheno_dict, gene_to_name=None, gene_name=True):
    """ Takes in a list of alleles and a phenotype and returns a dictionary describing each allele's log odds ratio,
        number of strains with the allele, and number of strains with the resistant phenotype.
    """
    allele_df = pheno_to_data2d_dict[pheno_id]
    if gene_name==True:
        allele_df = func_convert_gene2name_df(allele_df.T, gene_to_name)
        allele_df = allele_df.T
    lor_dict = {}
    for allele in allele_list:
        num_strains_with_allele = len(allele_df[allele_df[allele]==1].index.tolist())
        LOR, num_R = ens.log_odds_ratio(allele, allele_df, y_pheno_dict[pheno_id], addval=0.5)
        lor_dict.update({allele: {"lor": LOR, "num_strains": num_strains_with_allele, "num_resist": num_R}})
    return lor_dict
def align_constraints(sample_list, samples_ac_df, ENSEMBLE_DIR, pheno_id, COBRA_MODEL, coef_thresh=0.0, verbose=False):
    """ Aligns the constraints according to the maximization direction.
        Requires ".../tables/mnc_objectives_"+pheno_id+"__MAX"+".csv"
    """
    sample_id_list = ["sample_"+x.split("_")[-1] for x in sample_list]
    samples_ac_df_align = samples_ac_df.copy()
    obj_hqsamples_max_df = pd.read_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__MAX"+".csv", index_col=0)
    for sampled_map_num in tqdm(sample_list[:]):
        sample_id = "sample_"+sampled_map_num.split("_")[-1]
        obj_direction, pvals = get_mnc_direction(pheno_id, sample_id, ENSEMBLE_DIR)
        if verbose==True:
            print(sample_id, obj_direction, pvals)
        alleles_changed = []
        # sort reaction variables by the absolute value of their aligned (max) objective coefficients
        rxnvar_order = abs(obj_hqsamples_max_df.loc[sampled_map_num][:]).sort_values(ascending=False).index.tolist()
        for obj_var, coef in obj_hqsamples_max_df.loc[sampled_map_num][rxnvar_order].items():
            rxn = "_".join(obj_var.split("_")[:-1])
            rxn_genes = [x.id for x in COBRA_MODEL.reactions.get_by_id(rxn).genes]
            rxn_alleles = [x for x in allele_list if x.split("_")[0] in rxn_genes]
            ## Don't change alleles that have already been changed...
            rxn_alleles = [x for x in rxn_alleles if x not in alleles_changed]
            if verbose==True:
                print("\t", obj_var, coef)
                print("\t", samples_ac_df.loc[rxn_alleles][sampled_map_num].to_dict())
            lb_alleles = samples_ac_df_align.loc[rxn_alleles][sampled_map_num][samples_ac_df_align.loc[rxn_alleles][sampled_map_num]<0].index.tolist()
            ub_alleles = samples_ac_df_align.loc[rxn_alleles][sampled_map_num][samples_ac_df_align.loc[rxn_alleles][sampled_map_num]>0].index.tolist()
            if obj_direction=="min":
                # if the objective direction is min but the aligned objective coefficient (max) for
                # rxn_min is positive, change lbs to ubs
                if "_min" in obj_var and coef>coef_thresh:
                    samples_ac_df_align.loc[lb_alleles, sampled_map_num] = samples_ac_df.loc[lb_alleles, sampled_map_num]*-1
                    alleles_changed.extend(lb_alleles)
                if "_max" in obj_var and coef>coef_thresh:
                    samples_ac_df_align.loc[ub_alleles, sampled_map_num] = samples_ac_df.loc[ub_alleles, sampled_map_num]*-1
                    alleles_changed.extend(ub_alleles)
            if obj_direction=="max":
                if "_min" in obj_var and coef<-coef_thresh:
                    samples_ac_df_align.loc[lb_alleles, sampled_map_num] = samples_ac_df.loc[lb_alleles, sampled_map_num]*-1
                    alleles_changed.extend(lb_alleles)
                if "_max" in obj_var and coef<-coef_thresh:
                    samples_ac_df_align.loc[ub_alleles, sampled_map_num] = samples_ac_df.loc[ub_alleles, sampled_map_num]*-1
                    alleles_changed.extend(ub_alleles)
    return samples_ac_df_align
def test_allele_constraint_chisquare(samples_ac_df_align, binary=True, ddof=0, double_freq=False):
    """ Performs a chi-square test for each allele's constraints across the high-quality models.
        If binary==True, tests whether the constraint is an upper or lower bound, not the specific value.
    """
    allele_chisquare_dict = {}
    for allele in samples_ac_df_align.index.tolist():
        f_obs_bin = [
            samples_ac_df_align.loc[allele][samples_ac_df_align.loc[allele]<0].shape[0],
            samples_ac_df_align.loc[allele][samples_ac_df_align.loc[allele]>0].shape[0]
        ]
        if double_freq==True:
            f_obs_bin = [x*2 for x in f_obs_bin]
        chisq, pval_bin = stats.chisquare(f_obs_bin, f_exp=None, ddof=ddof)
        f_obs = []
        for bnd in list(action_constraint_mapping.values()):
            # constraint categories, e.g. -2, -1, 1, 2
            f_obs.append(samples_ac_df_align.loc[allele].values.tolist().count(bnd))
        if double_freq==True:
            f_obs = [x*2 for x in f_obs]
        chisq, pval = stats.chisquare(f_obs, f_exp=None, ddof=ddof)
        # estimate the entropy of the observed constraint distribution
        h_obs = stats.entropy(f_obs)
        allele_chisquare_dict.update({allele: {"lb_ub": f_obs_bin, "pval_bin": pval_bin, "cons": f_obs, "pval_dis": pval, "h_dis": h_obs}})
    return allele_chisquare_dict
def get_allele_sigreacts(x, COBRA_MODEL, flux_gwas_df):
    """ Takes in an allele and the FDR-corrected flux GWAS and returns the significant allele-reaction relations. """
    allele_sig_reacts = []
    allele_reacts = [rxn.id for rxn in COBRA_MODEL.genes.get_by_id(x).reactions]
    for rxn in allele_reacts:
        if rxn in flux_gwas_df.index:
            if x in ast.literal_eval(flux_gwas_df.loc[rxn, "genes"]):
                allele_sig_reacts.append(rxn)
    return allele_sig_reacts
med_or_mean = 'median'  # or "mean"
bic_thresh_val = 10
strain_numcutoff = 3
allele_freqcutoff = 0.02
coef_thresh = 0.0
ddof= 0
obj_direct_norm = True
genename=False
save_data=True
action_constraint_mapping = ens.get_action_constraint_mapping(action_num, add_no_change=ADD_NA_BOUND)
gene_to_name = ens.load_json_obj("cobra_model/gene_to_name.json")
for pheno_id in pheno_list[:]:
top_models = pd.read_csv(ENSEMBLE_DIR+"/tables/best_mncs_"+pheno_id+".csv",index_col=0)
top_models = top_models[top_models - top_models.min() < bic_thresh_val].dropna()
sample_list = top_models.index.tolist()
# sample_list = ["sample_"+x.split("_")[-1] for x in sample_list]
print("Number of %s MACs: %d"%(pheno_id, len(sample_list)))
## Get constraint dataframe
samples_ac_df = get_constraint_df(sample_list,pheno_to_data2d_dict,ENSEMBLE_MAP_ASSESS,gene_to_name=gene_to_name,gene_name=genename)
samples_ac_df.replace(action_constraint_mapping,inplace=True)
## Get allele data
allele_list = samples_ac_df.index.tolist()
lor_dict = get_allele_lor_dict(allele_list,pheno_id, pheno_to_data2d_dict,pheno_to_Y_dict,gene_to_name=gene_to_name,gene_name=genename)
allele_data_df = pd.DataFrame.from_dict(lor_dict,orient="index")
samples_ac_df_align = align_constraints(sample_list,samples_ac_df,ENSEMBLE_DIR,pheno_id,COBRA_MODEL, coef_thresh=coef_thresh, verbose=False)
## test significance of allelic constraints
allele_chisquare_dict = test_allele_constraint_chisquare(samples_ac_df_align, binary=True,ddof=ddof,double_freq=True)
allele_chisquare_df = pd.DataFrame.from_dict(allele_chisquare_dict,orient="index")
## add significant pathway and reactions info to matrix
pathway_enrich_sheet = pd.read_csv(ENSEMBLE_DIR+"/tables/pathway_enriched_FDR_"+pheno_id+".csv", index_col=0)
flux_gwas_sheet = pd.read_csv(ENSEMBLE_DIR+"/tables/strain_flux_med_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv", index_col=0)
flux_gwas_sheet_correct = correct_pvals(
flux_gwas_sheet,pval_col="pvalue",method="bonferroni",correct_alpha=0.05
)
allele_chisquare_df["pathways"] = allele_chisquare_df.index.map(lambda x: [x for x in gene_to_pathways[x.split("_")[0]]])
allele_chisquare_df["enrich_pathways"] = allele_chisquare_df.index.map(lambda x: [x for x in gene_to_pathways[x.split("_")[0]] if x in pathway_enrich_sheet.index.tolist()])
# allele_chisquare_df["sig_reacts"] = allele_chisquare_df.index.map(lambda x: get_allele_sigreacts(x.split("_")[0], COBRA_MODEL, flux_gwas_sheet_correct))
allele_chisquare_df["sig_reacts"] = allele_chisquare_df.index.map(lambda x: get_allele_sigreacts(x.split("_")[0], COBRA_MODEL, flux_gwas_sheet_correct))
### Combine dataframes? Maybe wait to do this after the constraints have been organized according to min/max
allele_info_df = pd.concat([samples_ac_df_align, allele_data_df, allele_chisquare_df],axis=1,sort=True)
### Filter out alleles that appear in < # of strains
allele_info_df["allele_freq"] = allele_info_df["num_strains"]/float(pheno_to_data2d_dict[pheno_id].shape[0])
allele_info_df = allele_info_df[allele_info_df["num_strains"]>strain_numcutoff]
allele_info_df = allele_info_df[allele_info_df["allele_freq"]>allele_freqcutoff]
allele_info_df["allele_rv"] = allele_info_df.index
### Get names
allele_info_df = func_convert_gene2name_df(allele_info_df, gene_to_name)
allele_info_df["gene"] = allele_info_df.index.map(lambda x: x.split("_")[0])
allele_info_df.sort_values(["pval_bin", "lor"], ascending=[True,False],inplace=True)
allele_info_df["med_bnd"] = allele_info_df[sample_list].median(axis=1)
if save_data==True:
allele_info_df.to_csv(ENSEMBLE_DIR+"/tables/allelic_effects_"+pheno_id+"_ddof-"+str(ddof)+"_cutoff-"+str(np.round(coef_thresh, 4))+".csv")
```
# Generate supplement after running cells above.
```
def get_objective_sheet(ENSEMBLE_DIR, pheno_id):
obj_hqsamples_abs_df = pd.read_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__MAX-ABS"+".csv",index_col=0)
obj_med_abs_df = pd.read_csv(ENSEMBLE_DIR+"/tables/mnc_objectives_"+pheno_id+"__med-abs"+".csv",index_col=0,header=None,squeeze=True)
obj_med_abs_df.index.name = None
obj_med_abs_df.name = "med-abs"
obj_hqsamples_abs_df = obj_hqsamples_abs_df.T
obj_df = pd.concat([obj_med_abs_df, obj_hqsamples_abs_df],axis=1)
obj_df.sort_values(["med-abs"],ascending=False, inplace=True)
return obj_df
# bic_thresh_val = 10
coef_thresh = 0.0
obj_direct_norm = True
for pheno_id in pheno_list:
### characters are too long for excel sheet so shorten to PAS
if len(pheno_id) >= 21:
pheno_id_name = pheno_id[:10]
else:
pheno_id_name = pheno_id
### Sheet 1 - outline of other sheets
outline_df = pd.DataFrame.from_dict({
pheno_id_name+"_MAC_modelBIC": 'List of best MACs with delta BIC < 10',
pheno_id_name+"_MAC_objectives": 'Objective functions for best MACs (absolute). First column is median absolute coefficient.',
pheno_id_name+"_MAC_flux_GWAS": 'Univariate statistical tests between median strain-specific MAC reaction fluxes (minmax scaled) and AMR with Bonferroni-corrted p-value<0.05',
pheno_id_name+"_MAC_shprice_GWAS": 'Univariate statistical tests between median strain-specific MAC metabolite shadow prices (minmax scaled) and AMR with Bonferroni-corrected p-value<0.05',
pheno_id_name+"_MAC_pathway_enrich": 'Pathway enrichments of significant fluxes with FDR<0.05',
pheno_id_name+"_MAC_allele_params": 'Allele-specific information in best MACs such as constraints, mutations, etc. Ranked according to chi-squared goodness of fit. Constraints were aligned in direction of objective reaction variable coefficient.',
},orient="index")
outline_df.columns=["sheet description for "+pheno_id]
with pd.ExcelWriter(SUPP_FILE_LOC+'Supplementary file '+pheno_id+'.xlsx') as writer:
outline_df.to_excel(writer, sheet_name=pheno_id_name+" supp outline")
top_models = pd.read_csv(ENSEMBLE_DIR+"/tables/best_mncs_"+pheno_id+".csv",index_col=0)
top_models = top_models[top_models - top_models.min() < bic_thresh_val].dropna()
top_models.to_excel(writer, sheet_name=pheno_id_name+"_MAC_modelBIC")
objective_sheet = get_objective_sheet(ENSEMBLE_DIR, pheno_id)
objective_sheet.to_excel(writer, sheet_name=pheno_id_name+"_MAC_objectives")
flux_gwas_sheet = pd.read_csv(ENSEMBLE_DIR+"/tables/strain_flux_med_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv", index_col=0)
flux_gwas_sheet_correct = correct_pvals(
flux_gwas_sheet,pval_col="pvalue",method="bonferroni",correct_alpha=0.05
)
flux_gwas_sheet_correct.to_excel(writer, sheet_name=pheno_id_name+"_MAC_flux_GWAS")
if SPRICE_RCOST_BOOL==True:
sprice_gwas_sheet = pd.read_csv(ENSEMBLE_DIR+"/tables/strain_sprice_med_ANOVA_df_"+pheno_id+"__objnorm-"+str(obj_direct_norm)+".csv",index_col=0)
sprice_gwas_sheet_correct = correct_pvals(
sprice_gwas_sheet,pval_col="pvalue",method="bonferroni",correct_alpha=0.05
)
sprice_gwas_sheet_correct.to_excel(writer, sheet_name=pheno_id_name+"_MAC_shprice_GWAS")
pathway_enrich_sheet = pd.read_csv(ENSEMBLE_DIR+"/tables/pathway_enriched_FDR_"+pheno_id+".csv", index_col=0)
pathway_enrich_sheet.to_excel(writer, sheet_name=pheno_id_name+"_MAC_pathway_enrich")
allele_info_df = pd.read_csv(ENSEMBLE_DIR+"/tables/allelic_effects_"+pheno_id+"_ddof-"+str(ddof)+"_cutoff-"+str(np.round(coef_thresh, 4))+".csv",index_col=0)
allele_info_df.to_excel(writer, sheet_name=pheno_id_name+"_MAC_allele_params")
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Lists" data-toc-modified-id="Lists-1"><span class="toc-item-num">1 </span>Lists</a></span><ul class="toc-item"><li><span><a href="#Indexing" data-toc-modified-id="Indexing-1.1"><span class="toc-item-num">1.1 </span>Indexing</a></span></li><li><span><a href="#Slicing" data-toc-modified-id="Slicing-1.2"><span class="toc-item-num">1.2 </span>Slicing</a></span></li><li><span><a href="#Built-in-List-Functions" data-toc-modified-id="Built-in-List-Functions-1.3"><span class="toc-item-num">1.3 </span>Built in List Functions</a></span></li><li><span><a href="#Copying-a-list" data-toc-modified-id="Copying-a-list-1.4"><span class="toc-item-num">1.4 </span>Copying a list</a></span></li></ul></li><li><span><a href="#Tuples" data-toc-modified-id="Tuples-2"><span class="toc-item-num">2 </span>Tuples</a></span><ul class="toc-item"><li><span><a href="#Mapping-one-tuple-to-another" data-toc-modified-id="Mapping-one-tuple-to-another-2.1"><span class="toc-item-num">2.1 </span>Mapping one tuple to another</a></span></li><li><span><a href="#Built-In-Tuple-functions" data-toc-modified-id="Built-In-Tuple-functions-2.2"><span class="toc-item-num">2.2 </span>Built In Tuple functions</a></span></li></ul></li><li><span><a href="#Sets" data-toc-modified-id="Sets-3"><span class="toc-item-num">3 </span>Sets</a></span><ul class="toc-item"><li><span><a href="#Built-in-Functions" data-toc-modified-id="Built-in-Functions-3.1"><span class="toc-item-num">3.1 </span>Built-in Functions</a></span></li></ul></li></ul></div>
# Data Structures in Python
```
import os
from IPython.core.display import HTML
def load_style(directory = '../', name='customMac.css'):
styles = open(os.path.join(directory, name), 'r').read()
return HTML(styles)
load_style()
```
# Data Structures
In simple terms, a data structure is a collection or group of data organized in a particular way.
## Lists
Lists are the most commonly used data structure. Think of a list as a sequence of data enclosed in square brackets, with items separated by commas. Each item can be accessed by its index value.
Lists are declared by assigning a variable to '[ ]' or list( ).
```
a = []
print(type(a))
```
One can directly assign the sequence of data to a list x as shown.
```
x = ['apple', 'orange']
```
### Indexing
In Python, indexing starts from 0. Thus the list x, which has two elements, will have 'apple' at index 0 and 'orange' at index 1.
```
x[0]
```
Indexing can also be done in reverse order; that is, the last element can be accessed first. Reverse indexing starts from -1, so index -1 is 'orange' and index -2 is 'apple'.
```
x[-1]
```
As you might have already guessed, x[0] = x[-2] and x[1] = x[-1]. This concept extends to lists with many more elements.
```
y = ['carrot','potato']
```
Here we have declared two lists, x and y, each containing its own data. These two lists can in turn be put into another list, say z, whose two elements are the lists themselves. A list inside a list is called a nested list, and this is how a two-dimensional array would be declared, as we will see later.
```
z = [x,y]
print(z)
```
Indexing in nested lists can be quite confusing if you do not understand how indexing works in python. So let us break it down and then arrive at a conclusion.
Let us access the data 'apple' in the above nested list.
First, at index 0 there is a list ['apple','orange'] and at index 1 there is another list ['carrot','potato']. Hence z[0] should give us the first list which contains 'apple'.
```
z1 = z[0]
print(z1)
```
Now observe that z1 is not a nested list, so to access 'apple', z1 should be indexed at 0.
```
z1[0]
```
Instead of doing the above, in Python you can access 'apple' by simply writing the index values side by side.
```
z[0][0]
```
If there were a list inside a list inside a list, you could access the innermost value by executing z[ ][ ][ ].
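For instance, a short sketch with a three-level nested list (the name deep is just illustrative):

```python
# A list nested three levels deep
deep = [[['a', 'b'], ['c']], [['d']]]

# Chain one index per level of nesting, outermost level first
print(deep[0][0][1])  # 'b'
print(deep[1][0][0])  # 'd'
```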
### Slicing
Indexing is limited to accessing a single element. Slicing, on the other hand, accesses a sequence of data inside the list; in other words, it "slices" the list.
Slicing is written as parentlist[ a : b ], where a and b are index values from the parent list: the slice starts at index a and stops just before index b (the element at index b is not included). If a is omitted it defaults to the start of the list, and if b is omitted it defaults to the end.
```
num = [0,1,2,3,4,5,6,7,8,9]
print(num[0:4])
print(num[4:])
```
You can also slice a parent list with a fixed step length.
```
num[:9:3]
```
### Built in List Functions
To find the length of the list or the number of elements in a list, **len( )** is used.
```
len(num)
```
If the list consists of integer elements, then **min( )** and **max( )** give the minimum and maximum values in the list.
```
min(num)
max(num)
```
Lists can be concatenated with the '+' operator. The resultant list contains all the elements of the lists that were added; it is not a nested list.
```
x+y
[1,2,3] + [5,4,7]
```
You may need to check whether a particular element is present in a list. Consider the list below.
```
names = ['Earth','Air','Fire','Water']
```
Suppose we want to check whether 'Fire' and 'Rajath' are present in the list names. A conventional approach would be to iterate over the list with a for loop and an if condition. But in Python you can write a in b, which returns True if a is present in b and False if not.
```
'Fire' in names
'Rajath' in names
```
**max( )** and **min( )** also work on lists of strings. **max( )** returns the element that is largest in ASCII (lexicographic) order, and **min( )** the smallest. Note that the first character of each element is compared first; if those are equal, the second characters are compared, and so on.
```
mlist = ['bzaa','ds','nc','az','z','klm']
print(max(mlist))
print(min(mlist))
```
Here the first character of each element is compared: 'z' has the highest ASCII value, so it is returned, and 'az' begins with the lowest, 'a'. But what if numbers are declared as strings?
```
nlist = ['1','94','93','1000']
print(max(nlist))
print(min(nlist))
```
Even when numbers are written as strings, comparison is still character by character, so max(nlist) is '94' (because '9' beats '1') rather than '1000', and min(nlist) is '1'.
But if you want to find the **max( )** or **min( )** string element based on the length of the string, the parameter key=len is passed to the **max( )** and **min( )** function.
```
print(max(names, key=len))
print(min(names, key=len))
```
Note that 'Water' also has length 5; when two or more elements tie on the key, **max( )** and **min( )** return the first such element encountered.
Any other built-in function, or a lambda function (discussed later), can be used in place of len.
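For example, a sketch with a lambda as the key, ranking each name by its last character:

```python
names = ['Earth', 'Air', 'Fire', 'Water']

# key=lambda s: s[-1] compares only the final character of each string
print(max(names, key=lambda s: s[-1]))  # 'Air'  ('r' is the highest last character; ties go to the first element)
print(min(names, key=lambda s: s[-1]))  # 'Fire' ('e' is the lowest last character)
```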
A string can be converted into a list by using the **list()** function.
```
list('hello')
```
**append( )** is used to add an element at the end of the list.
```
lst = [1,1,4,8,7]
lst.append(1)
print(lst)
```
**count( )** returns the number of occurrences of a particular element in the list.
```
lst.count(1)
```
The **append( )** function can also be used to add an entire list at the end. Observe that the resultant list becomes a nested list.
```
lst1 = [5,4,2,8]
lst.append(lst1)
print(lst)
```
But if a nested list is not what is desired, the **extend( )** function can be used instead.
```
lst.extend(lst1)
print(lst)
```
**index( )** is used to find the index of a particular element. Note that if the element occurs multiple times, the index of its first occurrence is returned.
```
lst.index(1)
```
**insert(x,y)** inserts an element y at the specified index x, whereas **append( )** can only add at the end.
```
lst.insert(5, 'name')
print(lst)
```
**insert(x,y)** inserts without replacing an existing element. To replace an element with another, simply assign the new value to that index.
```
lst[5] = 'Python'
print(lst)
```
The **pop( )** function removes and returns the last element of the list. This is the same operation as popping a stack, so lists can be used as stacks.
```
lst.pop()
```
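A short sketch of the stack idea: **append( )** pushes onto the end and **pop( )** pops from the same end.

```python
stack = []
stack.append('first')   # push
stack.append('second')  # push
top = stack.pop()       # pop removes and returns the most recently pushed element
print(top)    # 'second'
print(stack)  # ['first']
```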
An index can be passed to pop the element at that position.
```
lst.pop(0)
```
**pop( )** removes an element by its index and returns it, so the result can be assigned to a variable. You can also remove an element by specifying its value with the **remove( )** function.
```
lst.remove('Python')
print(lst)
```
An index-based alternative to the **remove** function is **del**, which discards the element without returning it.
```
del lst[1]
print (lst)
```
The order of elements in the list can be reversed with the **reverse()** function.
```
lst.reverse()
print(lst)
```
Note that the nested list [5,4,2,8] is treated as a single element of the parent list lst, so the elements inside the nested list are not reversed.
Python offers the built-in method **sort( )** to arrange the elements in ascending order.
```
a = [3,4,5,1,3,2]
a.sort()
print(a)
```
The reverse parameter defaults to False; setting reverse=True arranges the elements in descending order.
```
a.sort(reverse=True)
print(a)
```
Similarly, for lists of strings, **sort( )** orders the elements by ASCII (lexicographic) value, ascending by default and descending when reverse=True is specified.
```
names.sort()
print(names)
names.sort(reverse=True)
print(names)
```
To sort by length, specify key=len as shown.
```
names.sort(key=len)
print(names)
names.sort(key=len,reverse=True)
print(names)
```
### Copying a list
Many new Python programmers make the following mistake. Consider:
```
lista= [2,1,4,3]
listb = lista
print(listb)
```
Here we have declared a list, lista = [2,1,4,3], and assigned it to listb, which appears to copy it. Now we perform some operations on lista.
```
lista.pop()
print(lista)
lista.append(9)
print(lista)
print(listb)
```
listb has also changed, even though no operation was performed on it. This is because listb was assigned the same object in memory as lista, so both names refer to one list. So how do we fix this?
Recall from slicing that parentlist[a:b] returns a new list containing the elements from index a up to (but not including) index b, and that omitting a or b defaults to the first and last positions. We use the same concept here: lista[:] builds a fresh list holding the data of lista, which we then assign to listb.
```
lista = [2,1,4,3]
listb = lista[:]
print(listb)
lista.pop()
print(lista)
lista.append(9)
print(lista)
print(listb)
```
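One caveat not shown above: slicing makes a shallow copy, so a nested list inside lista would still be shared between the two names. For that case the standard-library copy module provides deepcopy; a brief sketch:

```python
import copy

lista = [2, 1, [4, 3]]
listb = lista[:]              # shallow copy: the inner list is still shared
listc = copy.deepcopy(lista)  # deep copy: the inner list is duplicated as well

lista[2].append(9)
print(listb)  # [2, 1, [4, 3, 9]] -- the shared inner list changed
print(listc)  # [2, 1, [4, 3]]    -- the deep copy is unaffected
```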
## Tuples
Tuples are similar to lists, with one big difference: the elements inside a list can be changed, but the elements of a tuple cannot. Think of a tuple as a fixed record of values that belong together and must not change independently. For a concrete example, recall the **divmod()** function. In short, a 'tuple' object does not support item assignment.
```
xyz = divmod(10,3)
print(xyz)
print(type(xyz))
```
Here the quotient has to be 3 and the remainder has to be 1. These values cannot be changed whatsoever when 10 is divided by 3. Hence divmod returns these values in a tuple.
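A quick sketch of what "does not support item assignment" means in practice:

```python
xyz = divmod(10, 3)  # the tuple (3, 1)
try:
    xyz[0] = 5  # tuples are immutable, so item assignment raises an error
except TypeError as err:
    print(err)  # 'tuple' object does not support item assignment
```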
To define a tuple, assign a variable to parentheses ( ) or tuple( ).
```
tup = ()
tup2 = tuple()
```
A single-element tuple can be declared directly by placing a comma after the value.
```
27,
```
The number 27 multiplied by 2 yields 54, but multiplying the tuple (27,) by 2 repeats its data twice.
```
2*(27,)
```
Values can be assigned when declaring a tuple: tuple( ) takes a list or a string as input and converts it into a tuple.
```
tup3 = tuple([1,2,3])
print(tup3)
tup4 = tuple('Hello')
print(tup4)
```
Tuples follow the same indexing and slicing rules as lists.
```
print(tup3[1])
tup5 = tup4[:3]
print(tup5)
```
### Mapping one tuple to another
```
(a,b,c)= ('alpha','beta','gamma')
print(a,b,c)
d = tuple('RajathKumarMP')
print(d)
```
### Built In Tuple functions
The **count()** function counts the occurrences of the specified element in the tuple.
```
d.count('a')
```
The **index()** function returns the index of the specified element. If the element occurs more than once, the index of its first occurrence is returned.
```
d.index('a')
```
## Sets
Sets are mainly used to eliminate repeated values in a sequence/list and to perform standard set operations.
A set is declared as set( ), which initializes an empty set; set([sequence]) declares a set with elements.
```
set1 = set()
print(type(set1))
set0 = set([1,2,2,3,3,4])
print(set0)
```
The elements 2 and 3, which appear twice in the input, are seen only once: each element of a set is distinct.
### Built-in Functions
```
set1 = set([1,2,3])
set2 = set([2,3,4,5])
```
The **union( )** function returns a set containing all the elements of both sets, without repetition.
```
set1.union(set2)
```
**add( )** adds a particular element to the set. Note that sets are unordered, so the newly added element may appear at any position, not necessarily at the end.
```
set1.add(0)
set1
```
**intersection( )** function outputs a set which contains all the elements that are in both sets.
```
set1.intersection(set2)
```
The **difference( )** function outputs a set containing the elements that are in set1 but not in set2.
```
set1.difference(set2)
```
The **symmetric_difference( )** function outputs a set containing the elements that are in exactly one of the two sets.
```
set2.symmetric_difference(set1)
```
**issubset( ), isdisjoint( ), issuperset( )** are used to check whether set1/set2 is a subset, disjoint, or superset of set2/set1, respectively.
```
set1.issubset(set2)
set2.isdisjoint(set1)
set2.issuperset(set1)
```
**pop( )** removes and returns an arbitrary element from the set.
```
set1.pop()
print(set1)
```
**remove( )** function deletes the specified element from the set.
```
set1.remove(2)
set1
```
**clear( )** removes all the elements, leaving an empty set.
```
set1.clear()
set1
```
```
%pylab inline
import pandas as pd
import logging
import imp
from rpy2.robjects import numpy2ri
RANDOM_SEED = 0
numpy2ri.activate()
import fairtest.utils.log as fairtest_log
imp.reload(fairtest_log)
fairtest_log.set_params(filename='fairtest.log', level=logging.INFO)
from fairtest import DataSource
import fairtest.investigation as inv
import fairtest.testing as testing
import fairtest.discovery as discovery
import fairtest.error_profiling as error_profiling
import fairtest.modules.metrics as metrics
import fairtest.modules.metrics.correlation as correlation
import fairtest.modules.metrics.regression as regression
import fairtest.modules.metrics.binary_metrics as binary_metrics
import fairtest.modules.statistics.confidence_interval as intervals
import ast
import sklearn.preprocessing as preprocessing
# adult, berkeley, staples, caffe
EXPERIMENT = 'staples'
if EXPERIMENT == 'adult':
dataname = 'adult'
data = pd.read_csv(
'../data/adult/adult.csv',
header=0,
sep=r'\s*,\s*',
engine='python',
na_values="?")
data = data.drop('fnlwgt', axis=1)
TARGET = 'income'
SENS = ['sex', 'race']
EXPL = []
metrics = {}
inv_type = 'testing'
elif EXPERIMENT == 'berkeley':
dataname = 'berkeley'
data = pd.read_csv(
'../data/berkeley/berkeley.csv',
header=0,
sep=r'\s*,\s*',
engine='python',
na_values="?")
TARGET = 'accepted'
SENS = ['gender']
EXPL = ['department']
metrics = {'gender': 'Diff'}
inv_type = 'testing'
elif EXPERIMENT == 'staples':
dataname = 'staples'
data = pd.read_csv(
'../data/staples/staples.csv',
header=0,
sep=r'\s*,\s*',
engine='python',
na_values="?")
data = data.drop('zipcode', axis=1)
data = data.drop('distance', axis=1)
data = data.drop('city', axis=1)
TARGET = 'price'
SENS = ['race', 'income']
EXPL = []
metrics = {}
inv_type = 'testing'
elif EXPERIMENT == 'caffe':
dataname = 'caffe'
data = pd.read_csv(
'../data/images/caffe.txt',
header=0,
sep=r'\s*\t\s*',
engine='python',
na_values="?")
TARGET = 'Labels'
SENS = ['Race']
EXPL = []
labeled_data = [ast.literal_eval(s) for s in data[TARGET]]  # a list, not a lazy map object, so it can be iterated more than once
for l in labeled_data:
assert len(l) == 5
label_encoder = preprocessing.MultiLabelBinarizer()
labeled_data = label_encoder.fit_transform(labeled_data)
labels = label_encoder.classes_
df_labels = pd.DataFrame(labeled_data, columns=labels)
data = pd.concat([data.drop(TARGET, axis=1), df_labels], axis=1)
TARGET = labels.tolist()
metrics = {}
inv_type = 'discovery'
else:
raise ValueError('Unknown dataset %s' % EXPERIMENT)
budget=1
data_source = DataSource(data, random_state=RANDOM_SEED, budget=budget, train_size=0.5)
if inv_type == 'testing':
exp = testing.Testing(data_source, SENS, TARGET, EXPL, metrics=metrics, random_state=RANDOM_SEED)
elif inv_type == 'error_profiling':
exp = error_profiling.ErrorProfiling(data_source, SENS, TARGET, GROUND_TRUTH, EXPL, metrics=metrics, random_state=RANDOM_SEED)
elif inv_type == 'discovery':
exp = discovery.Discovery(data_source, SENS, TARGET, EXPL, metrics=metrics, topk=35, random_state=RANDOM_SEED)
else:
raise ValueError('Unknown investigation type %s' % inv_type)
print('Training set: {}/{}'.format(len(exp.train_set), len(data)))
exp.train_set.tail()
import fairtest.modules.context_discovery.guided_tree as guided_tree
inv.train([exp], score_aggregation='avg', min_leaf_size=100, max_depth=5)
# print(exp.trained_trees['Age'])
import fairtest.modules.statistics.hypothesis_test as tests
import fairtest.modules.statistics.multiple_testing as multitest
import fairtest.modules.context_discovery.tree_parser as tree_parser
import fairtest.modules.statistics.confidence_interval as intervals
inv.test([exp], exact=True, prune_insignificant=True)
import fairtest.modules.bug_report.report as rep
import fairtest.modules.bug_report.filter_rank as fr
output_dir = None
inv.report([exp], dataname, output_dir=output_dir, filter_conf=0.95, node_filter=fr.FILTER_BETTER_THAN_ANCESTORS)
```
```
import json, sys, os, requests
import altair as alt
from altair import expr, datum
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
eco_git_home = (
"https://raw.githubusercontent.com/EconomicsObservatory/ECOvisualisations/main/"
)
vega_embed = requests.get(eco_git_home + "guidelines/html/vega-embed.html").text
colors = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-colors.json").content
)
category_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-category-color.json").content
)
hue_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-single-hue-color.json").content
)
mhue_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-multi-hue-color.json").content
)
div_color = json.loads(
requests.get(eco_git_home + "guidelines/colors/eco-diverging-color.json").content
)
config = json.loads(
requests.get(eco_git_home + "guidelines/charts/eco-global-config.json").content
)
height = config["height"]
width = config["width"]
height, width
def get_ons_data_ts(dataset, series, save=True, api_path="https://api.ons.gov.uk"):
r = requests.get(
api_path
+ "/timeseries/"
+ series.lower()
+ "/dataset/"
+ dataset.lower()
+ "/data"
)
content = json.loads(r.content)
if save:
print("Saving DATASET", dataset, "SERIES", series, "to local mirror...")
open("../../datasets/ons/" + dataset + "/" + series + ".json", "w").write(
json.dumps(content)
)
print("OK!")
return content
```
Employment
```
dataset = "lms"
data = {}
series = ["mgsx", "lf24", "kac3", "lf2s"]
series_colors = {
"mgsx": "eco-pink",
"lf24": "eco-turquiose",
"kac3": "eco-yellow",
"lf2s": "eco-mid-blue",
}
series_offset = {"mgsx": 0.8, "lf24": 0.5, "kac3": 1, "lf2s": 0.9}
series_domain = {"mgsx": [2, 12], "lf24": [65, 80], "kac3": [-2, 6], "lf2s": [20, 26]}
series_domain2 = {
"mgsx": [3.5, 5.5],
"lf24": [74, 77],
"kac3": [-2, 6],
"lf2s": [20, 21.5],
}
series_titles = {
"mgsx": "Unemployment",
"lf24": "Employment",
"kac3": "Wage growth",
"lf2s": "Economic inactivity rate",
}
series_subtitles = {
"mgsx": "Rate (%), age 16+ seasonally adjusted",
"lf24": "Rate (%), age 16+ seasonally adjusted",
"kac3": "Weekly earnings, three month average, y-o-y growth (%)",
"lf2s": "UK, Aged 16-64 (%)",
}
series_text3 = {
"mgsx": "year(datum.date)>=2021",
"lf24": "year(datum.date)>=2021",
"kac3": "year(datum.date)>=2021&month(datum.date)>=1",
"lf2s": "year(datum.date)>=2021&month(datum.date)>=1",
}
for serie in series:
d = get_ons_data_ts(dataset, serie)
data[serie] = {
"title": d["description"]["title"].split(")")[0] + ")",
"data": d["months"],
}
```
Now push datasets to GitHub.
```
for l in ["live", "eco", "local"]:
layers = []
for serie in series:
if l == "live":
url = (
"https://api.allorigins.win/raw?url=https://api.ons.gov.uk/timeseries/"
+ serie
+ "/dataset/"
+ dataset
+ "/data"
)
else:
url = (
"https://raw.githubusercontent.com/EconomicsObservatory/ECOdataHUB/main/datasets/ons/"
+ dataset
+ "/"
+ serie
+ ".json"
)
if l == "local":
data = alt.Data(
url=url,
format=alt.DataFormat(type="json", property="months"),
)
else:
data = pd.DataFrame(json.loads(requests.get(url).content)["months"])
data["date"] = pd.to_datetime(data["date"])
print("Processing", serie, "for", l, "...")
base = (
alt.Chart(data)
.transform_calculate(serie=repr(serie))
.transform_calculate(label="datum.year+' '+datum.month")
.transform_calculate(date="toDate(datum.year+' '+datum.month+' '+1)")
.encode(x=alt.X("date:T", axis=alt.Axis(grid=True, title="")))
)
area = base.mark_area(
interpolate="monotone",
fillOpacity=0.7,
stroke=colors["eco-gray"],
strokeWidth=0.5,
color=alt.Gradient(
gradient="linear",
stops=[
alt.GradientStop(color="white", offset=0),
alt.GradientStop(
color=colors[series_colors[serie]], offset=series_offset[serie]
),
],
x1=0.8,
x2=1,
y1=1,
y2=0,
),
).encode(
y=alt.Y(
"value:Q",
axis=alt.Axis(grid=True, title=""),
# scale=alt.Scale(domain=series_domain[serie]),
)
)
line = base.mark_line(color=colors[series_colors[serie]]).encode(
y=alt.Y(
"value:Q",
axis=alt.Axis(grid=True, title=""),
scale=alt.Scale(domain=series_domain2[serie]),
)
)
# Create a selection that chooses the nearest point & selects based on x-value
nearest = alt.selection(
type="single", nearest=True, on="mouseover", fields=["date"], empty="none"
)
# Transparent selectors across the chart. This is what tells us
# the x-value of the cursor
selectors = (
base.mark_point()
.encode(
opacity=alt.value(0),
)
.add_selection(nearest)
)
# Draw points on the line, and highlight based on selection
points = area.mark_point().encode(
opacity=alt.condition(nearest, alt.value(1), alt.value(0))
)
# Draw text labels near the points, and highlight based on selection
text = area.mark_text(align="left", dx=-25, dy=35).encode(
text=alt.condition(nearest, "value:N", alt.value(" "))
)
# Draw text labels near the points, and highlight based on selection
text2 = area.mark_text(align="right", dx=5, dy=10,).encode(
text=alt.condition(nearest, "label:N", alt.value(" ")),
x=alt.value(390),
y=alt.value(0),
)
# Draw text labels near the points, and highlight based on selection
text3 = line.mark_text(align="right", dx=5, dy=10, fontSize=13).encode(
text=alt.condition(series_text3[serie], "value:Q", alt.value(" ")),
x=alt.value(92),
y=alt.value(3),
)
# Draw a rule at the location of the selection
rules = base.mark_rule(color=colors["eco-gray"]).transform_filter(nearest)
# Put the six layers into a chart and bind the data
layer1 = alt.layer(area, selectors, points, rules, text, text2,).properties(
width=400,
height=150,
title={"text": series_titles[serie], "subtitle": series_subtitles[serie]},
)
layer2 = (
alt.layer(
line,
selectors,
points,
rules,
text,
text3,
)
.properties(width=100, height=150)
.transform_filter("datum.date>='2020'")
)
layers.append(alt.hconcat(layer1, layer2, spacing=20))
vlayers = alt.vconcat(
layers[0], layers[1], layers[2], layers[3]
) # .configure_view(stroke=None)
vlayers.save(dataset + "_" + l + ".json")
print("OK!")
vlayers
dataset = "lms"
data = {}
series = ["lf2s", "ybtm", "lf2t"]
series_colors = {"lf2s": "eco-pink", "ybtm": "eco-turquiose", "lf2t": "eco-yellow"}
series_offset = {"lf2s": 0.8, "ybtm": 0.5, "lf2t": 1}
series_domain = {"lf2s": [2, 12], "ybtm": [65, 80], "lf2t": [-2, 6]}
series_domain2 = {"lf2s": [20, 22], "ybtm": [15, 18], "lf2t": [24, 25]}
series_titles = {
"lf2s": "Inactivity Rate",
"ybtm": "Inactivity Rate",
"lf2t": "Inactivity Rate",
}
series_subtitles = {
"lf2s": "(%), All",
"ybtm": "(%), Men",
"lf2t": "(%), Women",
}
series_text3 = {
"lf2s": "year(datum.date)>=2021",
"ybtm": "year(datum.date)>=2021",
"lf2t": "year(datum.date)>=2021&month(datum.date)>=1",
}
for serie in series:
d = get_ons_data_ts(dataset, serie)
data[serie] = {
"title": d["description"]["title"].split(")")[0] + ")",
"data": d["months"],
}
```
Economic inactivity rate
```
for l in ["live", "eco", "local"]:
layers = []
for serie in series:
if l == "live":
url = (
"https://api.allorigins.win/raw?url=https://api.ons.gov.uk/timeseries/"
+ serie
+ "/dataset/"
+ dataset
+ "/data"
)
else:
url = (
"https://raw.githubusercontent.com/EconomicsObservatory/ECOdataHUB/main/datasets/ons/"
+ dataset
+ "/"
+ serie
+ ".json"
)
if l == "local":
data = alt.Data(
url=url,
format=alt.DataFormat(type="json", property="months"),
)
else:
data = pd.DataFrame(json.loads(requests.get(url).content)["months"])
data["date"] = pd.to_datetime(data["date"])
print("Processing", serie, "for", l, "...")
base = (
alt.Chart(data)
.transform_calculate(serie=repr(serie))
.transform_calculate(label="datum.year+' '+datum.month")
.transform_calculate(date="toDate(datum.year+' '+datum.month+' '+1)")
.encode(x=alt.X("date:T", axis=alt.Axis(grid=True, title="")))
)
area = base.mark_area(
interpolate="monotone",
fillOpacity=0.7,
stroke=colors["eco-gray"],
strokeWidth=0.5,
color=alt.Gradient(
gradient="linear",
stops=[
alt.GradientStop(color="white", offset=0),
alt.GradientStop(
color=colors[series_colors[serie]], offset=series_offset[serie]
),
],
x1=0.8,
x2=1,
y1=1,
y2=0,
),
).encode(
y=alt.Y(
"value:Q",
axis=alt.Axis(grid=True, title=""),
# scale=alt.Scale(domain=series_domain[serie]),
)
)
line = base.mark_line(color=colors[series_colors[serie]]).encode(
y=alt.Y(
"value:Q",
axis=alt.Axis(grid=True, title=""),
scale=alt.Scale(domain=series_domain2[serie]),
)
)
# Create a selection that chooses the nearest point & selects based on x-value
nearest = alt.selection(
type="single", nearest=True, on="mouseover", fields=["date"], empty="none"
)
# Transparent selectors across the chart. This is what tells us
# the x-value of the cursor
selectors = (
base.mark_point()
.encode(
opacity=alt.value(0),
)
.add_selection(nearest)
)
# Draw points on the line, and highlight based on selection
points = area.mark_point().encode(
opacity=alt.condition(nearest, alt.value(1), alt.value(0))
)
# Draw text labels near the points, and highlight based on selection
text = area.mark_text(align="left", dx=-25, dy=35).encode(
text=alt.condition(nearest, "value:N", alt.value(" "))
)
# Draw text labels near the points, and highlight based on selection
text2 = area.mark_text(align="right", dx=5, dy=10,).encode(
text=alt.condition(nearest, "label:N", alt.value(" ")),
x=alt.value(390),
y=alt.value(0),
)
# Draw text labels near the points, and highlight based on selection
text3 = line.mark_text(align="right", dx=5, dy=10, fontSize=13).encode(
text=alt.condition(series_text3[serie], "value:Q", alt.value(" ")),
x=alt.value(92),
y=alt.value(3),
)
# Draw a rule at the location of the selection
rules = base.mark_rule(color=colors["eco-gray"]).transform_filter(nearest)
# Put the five layers into a chart and bind the data
layer1 = alt.layer(area, selectors, points, rules, text, text2,).properties(
width=400,
height=150,
title={"text": series_titles[serie], "subtitle": series_subtitles[serie]},
)
layer2 = (
alt.layer(
line,
selectors,
points,
rules,
text,
text3,
)
.properties(width=100, height=150)
.transform_filter("datum.date>='2020'")
)
layers.append(alt.hconcat(layer1, layer2, spacing=20))
layers = alt.vconcat(
layers[0], layers[1], layers[2]
) # .configure_view(stroke=None)
layers.save(dataset + "2_" + l + ".json")
print("OK!")
layers
ages = {
"mgvf": "16-24",
"ycgm": "25-34",
"ycgs": "35-49",
"lf28": "50-64",
"k5hu": "65+",
}
dfs = []
for s in ages:
print(s)
df = pd.DataFrame(get_ons_data_ts("lms", s)["months"])
df["series"] = s
df["age"] = ages[s]
df["date"] = pd.to_datetime(df["date"])
dfs.append(df)
df = pd.concat(dfs)
df = (
df.set_index("date")
.sort_index()
.loc["2019-10-01":]
.reset_index()
.set_index(["age", "date"])["value"]
.unstack()
.astype(int)
)
for c in df.columns[1:]:
df[c] = df[df.columns[0]] - df[c]
df[df.columns[0]] = 0
df = df.stack().reset_index()
df.columns = ["age", "date", "value"]
f = "unemployment_by_age"
df.to_csv("../../datasets/eco/unemployment/" + f + ".csv")
base = alt.Chart(df).encode(
y=alt.Y(
"value:Q",
axis=alt.Axis(
grid=True,
title="thousand",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
ticks=False,
labelAlign="left",
labelBaseline="middle",
labelPadding=-5,
labelOffset=-10,
titleX=30,
titleY=295,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=6,
format=".0f",
),
),
x=alt.X(
"date:T",
axis=alt.Axis(
grid=False,
title="",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
labelExpr="datum.value<toDate('2019-11-01')?'':month(datum.value)==0?timeFormat(datum.value,'%Y'):timeFormat(datum.value,'%B')",
labelAngle=0,
),
),
color=alt.Color("age:N", legend=None, scale=alt.Scale(range=color_scale)),
)
lines = base.mark_line()
labels = (
base.mark_text(align="left", dx=5)
.encode(text=alt.Text("age:N"))
.transform_filter("datum.date>toDate('2020-12-01')")
)
layer = lines + labels
layer.configure_view(stroke=None).properties(title="").properties(height=300, width=400)
layer.save("../../panels/unemployment/" + f + ".json")
layer
ages = {
"mgwy": "16-24",
"ycgp": "25-34",
"ycgv": "35-49",
"lf2e": "50-64",
"k5hw": "65+",
}
dfs = []
for s in ages:
print(s)
df = pd.DataFrame(get_ons_data_ts("lms", s)["months"])
df["series"] = s
df["age"] = ages[s]
df["date"] = pd.to_datetime(df["date"])
dfs.append(df)
df = pd.concat(dfs)
df = (
df.set_index("date")
.sort_index()
.loc["2019-10-01":]
.reset_index()
.set_index(["age", "date"])["value"]
.unstack()
.astype(float)
)
for c in df.columns[1:]:
df[c] = df[df.columns[0]] - df[c]
df[df.columns[0]] = 0
df = df.stack().reset_index()
df.columns = ["age", "date", "value"]
f = "unemployment_rate_by_age_nov2019"
df.to_csv("../../datasets/eco/unemployment/" + f + ".csv")
base = alt.Chart(df).encode(
y=alt.Y(
"value:Q",
axis=alt.Axis(
grid=True,
title="change in percentage points",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
ticks=False,
labelAlign="left",
labelBaseline="middle",
labelPadding=-5,
labelOffset=-10,
titleX=22,
titleY=262,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=6,
format=".0f",
),
),
x=alt.X(
"date:T",
axis=alt.Axis(
grid=False,
title="",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
labelExpr="datum.value<toDate('2019-11-01')?'':month(datum.value)==0?timeFormat(datum.value,'%Y'):timeFormat(datum.value,'%B')",
labelAngle=0,
),
),
color=alt.Color("age:N", legend=None, scale=alt.Scale(range=color_scale)),
)
lines = base.mark_line()
labels = (
base.mark_text(align="left", dx=5)
.encode(text=alt.Text("age:N"))
.transform_filter("datum.date>toDate('2020-12-01')")
)
layer = lines + labels
layer.configure_view(stroke=None).properties(title="").properties(height=300, width=400)
layer.save("../../panels/unemployment/" + f + ".json")
layer
df = pd.concat(dfs)
df = (
df.set_index("date")
.sort_index()
.loc["2019-10-01":]
.reset_index()
.set_index(["age", "date"])["value"]
.unstack()
.astype(float)
)
df = df.stack().reset_index()
df.columns = ["age", "date", "value"]
f = "unemployment_rate_by_age"
df.to_csv("../../datasets/eco/unemployment/" + f + ".csv")
base = alt.Chart(df).encode(
y=alt.Y(
"value:Q",
axis=alt.Axis(
grid=True,
title="%",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
ticks=False,
labelAlign="left",
labelBaseline="middle",
labelPadding=-5,
labelOffset=-10,
titleX=22,
titleY=-5,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
tickCount=6,
format=".0f",
),
),
x=alt.X(
"date:T",
axis=alt.Axis(
grid=False,
title="",
titleAnchor="start",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
titleFontSize=10,
titleFontWeight="normal",
labelExpr="datum.value<toDate('2019-11-01')?'':month(datum.value)==0?timeFormat(datum.value,'%Y'):timeFormat(datum.value,'%B')",
labelAngle=0,
),
),
color=alt.Color("age:N", legend=None, scale=alt.Scale(range=color_scale)),
)
lines = base.mark_line()
labels = (
base.mark_text(align="left", dx=5)
.encode(text=alt.Text("age:N"))
.transform_filter("datum.date>toDate('2020-12-01')")
)
layer = lines + labels
layer.configure_view(stroke=None).properties(title="").properties(height=300, width=400)
layer.save("../../panels/unemployment/" + f + ".json")
layer
```
# Deep Learning Explained
# Module 3 - Lab - Introduction to Deep Neural Networks
## 1.0 Overview
This lesson introduces you to the basics of neural network architecture in the form of deep forward networks. This architecture is the quintessential deep neural net architecture. In this lab you will learn the following:
- Why deep learning is important and how it relates to representation, learning and inference.
- How a basic perceptron works.
- How to apply different types of loss functions.
- Understand why nonlinear activation is important and why rectified linear units are a good choice.
- How backpropagation works, and how to apply the chain rule of calculus to determine gradients.
- Understand the architectural trade-off between depth and width in deep networks.
### 1.1 Why is deep learning important?
Deep learning methods are a form of **artificial intelligence (AI)** or **machine intelligence**. More specifically, deep learning algorithms are a type of **machine learning**.
What properties does machine intelligence require? There have been many answers to this question over the history of computing. In this case, we will take a practical view, sometimes known as **weak AI**. There are three key properties an intelligent machine must have; a **representation** for the AI model, **inference** for the output of the model, and **learning** to train the model. The figure below shows a highly abstracted view of machine intelligence, showing the relationship between representation, learning and inference.
<img src="img/MachineIntelligence.jpg" alt="Drawing" style="width:600px; height:300px"/>
<center>**Schematic view of machine intelligence**</center>
**That's it!** The entire rest of this course will focus on just these three points: representation, learning and inference!
## 2.0 Forward propagation: The representation problem
To create a useful neural network we need a **representation** that has two important properties.
First, there needs to be a way to represent complex functions of the input. Without this property, nothing is gained, since there are numerous machine learning algorithms that work with simple representations. We will spend the rest of this section exploring this problem.
Second, the representation needs to be **learnable**. Quite obviously, no machine intelligence representation is useful if there is not a practical algorithm to learn it. We will take up this problem in another section.
### 2.1 Linear networks
Let's start with the simplest possible network. It has inputs and an output. The output is an **affine transformation** of the input values. We say this network performs an affine transformation since there is a bias term $b$.
<img src="img/LinearNetwork.jpg" alt="Drawing" style="width:400px; height:250px"/>
<center>**Figure 2.1**
**A simple affine network**</center>
The output $y$ of this network is just:
$$y = f(x) = \sum_i w_i \cdot x_i + b$$
This network performs linear regression. Being able to perform only affine transformations, it can't do anything else.
This representation is certainly learnable. However, it does not gain us anything over familiar linear regression methods.
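As a quick sketch in NumPy (with made-up illustrative weights), the affine network above is just a dot product plus a bias:

```
import numpy as np

# Illustrative weights and bias for a 3-input affine network
w = np.array([0.5, -1.0, 2.0])
b = 0.1

def affine(x, w, b):
    """y = sum_i w_i * x_i + b -- exactly the linear-regression form above."""
    return np.dot(w, x) + b

x = np.array([1.0, 2.0, 3.0])
print(affine(x, w, b))  # 0.5*1 - 1*2 + 2*3 + 0.1 = 4.6
```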
### 2.2 The perceptron
To get started, let's have a look at a simple **perceptron** model. The perceptron was proposed by Rosenblatt (1962). He built on earlier attempts at neural network models by McCulloch and Pitts (1943) and Hebb (1949). The perceptron adds **nonlinear activation** to the affine network.
<img src="img/Preceptron.jpg" alt="Drawing" style="width:350px; height:250px"/>
<center>**Figure 2.2 Schematic of perceptron with nonlinear activation**</center>
The output $y$ of the perceptron is given by the following:
$$y = f(x) = \sigma \Big( \sum_i w_i \cdot x_i + b \Big)$$
The output of the network is now nonlinear, given the **activation function** $\sigma(x)$.
But, the perceptron is nothing more than a logistic regression classifier. The fact that the perceptron could only solve linearly separable problems was pointed out by Minsky and Papert (1969). The failure of the perceptron to learn an **exclusive or (XOR)** function is well known. See, for example, Section 6.1 in GBC.
Again, this representation is certainly learnable. However, as before, it does not gain us anything over well known logistic regression models.
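A minimal perceptron sketch (with hand-picked, not learned, weights) illustrates both points: a single threshold unit handles the linearly separable AND function, but a brute-force search over a grid of weights finds no single unit that computes XOR:

```
import numpy as np

def perceptron(x, w, b):
    """y = step(sum_i w_i * x_i + b) -- a single unit with threshold activation."""
    return 1 if np.dot(w, x) + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND is linearly separable: w = (1, 1), b = -1.5 works
and_out = [perceptron(np.array(x), np.array([1.0, 1.0]), -1.5) for x in inputs]
print(and_out)  # [0, 0, 0, 1]

# XOR is not: no weight/bias combination on this grid reproduces it
xor_target = [0, 1, 1, 0]
grid = np.linspace(-2, 2, 9)
solutions = [
    (w1, w2, b)
    for w1 in grid for w2 in grid for b in grid
    if [perceptron(np.array(x), np.array([w1, w2]), b) for x in inputs] == xor_target
]
print(len(solutions))  # 0 -- no single perceptron computes XOR
```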
### 2.3 Forward networks - A better representation!
The problem with the perceptron is one of representation. There is no way that this simple network can represent anything but a linearly separable function. To represent more complex functions, we need a more complex network. In more technical terms we need a network with greater **model capacity**.
What we need is a network with layers of **hidden nodes** with nonlinear activation. The figure below shows a simple example of a neural network with one **hidden layer** with two nodes. Since every node (including inputs) is connected to every other node we call this architecture a **fully connected neural network**.
<img src="img/Hidden.jpg" alt="Drawing" style="width:600px; height:350px"/>
<center>**Figure 2.3
Fully connected neural network with single hidden layer**</center>
Let's walk through some aspects of these diagrams.
1. The neural network is divided into three layers. The input layer, the hidden layer and the output layer.
2. The values in the input layer are multiplied by a weight matrix, $W^1$.
3. The nodes in the hidden layer sum their inputs and add a bias term, $b^1$.
4. The outputs of the hidden layer nodes are multiplied by a weight vector, $W^2$.
5. The output layer sums the inputs and adds another bias term, $b^2$.
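The five steps above can be sketched directly in NumPy; the weight and bias values here are arbitrary illustrative choices, not learned parameters:

```
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Illustrative parameters: 2 inputs -> 2 hidden units -> 1 output
W1 = np.array([[1.0, -1.0],
               [0.5,  0.5]])   # step 2: input-to-hidden weight matrix
b1 = np.array([0.0, -0.5])     # step 3: hidden-layer bias
W2 = np.array([1.0, 2.0])      # step 4: hidden-to-output weight vector
b2 = 0.1                       # step 5: output bias

def forward(x):
    h = relu(W1 @ x + b1)      # hidden layer with nonlinear activation
    return W2 @ h + b2         # output layer (linear activation)

print(forward(np.array([2.0, 1.0])))  # 3.1
```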
### 2.4 Neural network architectures - Finding representations
The representations achievable by a neural network with just a single hidden layer are quite powerful. In fact, Cybenko (1989) showed that such a network with an infinite number of hidden units using sigmoidal activation can approximate any arbitrary function. Hornik (1991) generalized this to apply to any activation function. We call this theorem the **universal approximation theorem**.
A universal approximation theorem may seem like a really exciting development; especially if you are a machine intelligence nerd. However, one must be circumspect when viewing such a result. A representation with an infinite number of nodes cannot be learned in any practical sense. Still it is comforting to know that, at least in principle, a representation can be learned for arbitrarily complex problems.
While infinitely wide networks with a single layer are unrealistic, we are not limited to one layer. In fact, depth is typically more effective than width at creating complex representations in neural networks. Depth is measured by the number of hidden layers stacked one on top of the other in the network. Hence the term deep neural networks.
Figure 2.4 below shows the results of an empirical study by Goodfellow, Shlens and Szegedy (2014) of the accuracy of the network vs. depth. Notice that accuracy increases rapidly with depth until about 8 layers, after which the effect is reduced.
<img src="img/Accuracy-Layers.jpg" alt="Drawing" style="width:600px; height:350px"/>
<center>**Figure 2.4 Empirical results of accuracy vs. number of layers**
Diagram from Goodfellow et al. 2014</center>
Another view of the empirical study by Goodfellow et al. is shown in Figure 2.5 below. In this case accuracy versus number of model parameters is compared for three different network architectures. The deeper network (11 layers) makes more efficient use of the parameters in terms of improved accuracy. The number of parameters in a layer is approximately the total number of parameters divided by the number of layers. Notice that for the particular case tested convolutional neural networks are more efficient than fully connected networks. We will discuss convolutional neural networks in a subsequent lesson.
Of particular interest is the fact that the fully-connected network and the shallow convolutional neural network appear to be over-fitting as the test accuracy actually decreases as the number of parameters increases. We will discuss the significant problems of over-fitting in neural networks in a subsequent lesson.
<img src="img/Accuracy-Parameters.jpg" alt="Drawing" style="width:600px; height:350px"/>
<center>**Figure 2.5 Empirical results of accuracy for different network architectures**
Diagram from Goodfellow et al. 2014</center>
**Summary:** Deep networks tend to produce better models, with less tendency to over-fit, for a given level of complexity.
### 2.6 Activation functions
Without a nonlinear activation function, a neural net is just an affine transformation. Affine transformations limit representation to only linearly separable functions. To create more general representations **nonlinear activation functions** are required.
In present practice, five types of activation functions are generally used for fully connected networks.
1. **Linear** activation is used for the output layer of regression neural networks.
2. The **rectilinear** activation function is used for most hidden units. The rectilinear activation function is often referred to as **ReLU**.
3. A **leaky rectilinear** activation acts like a ReLU function for positive inputs, but has a small negative slope (leakage) for negative input values. The leaky ReLU activation function can improve training for some deep neural networks, as its derivative is not zero below 0.
4. The **logistic** or **sigmoid** activation function is used for binary classifiers.
5. The **softmax** activation function is used for multi-class classifiers.
Rectilinear functions are typically used as the activation function for hidden units in neural networks. The rectilinear function is defined as:
$$f(x) = max(0, x)$$
The rectilinear function is linear for positive responses and zero for responses less than 0.0. Notice that the derivative of the rectilinear function is not continuous at 0. While this might seem to be a problem, in practice even gradient-based optimization functions work well with this activation function.
The rectilinear function is plotted in the cell below:
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
## Compute the rectilinear function
def reclu(x): return(max(0,x))
x = [float(x)/100 for x in range(-100, 100)]
y = [reclu(y) for y in x]
## Plot the result
def plot_figs(x,y,title, figsize = (4, 3)):
plt.figure(figsize=figsize).gca() # define axis
sns.set_style("darkgrid")
plt.plot(x, y)
plt.ylim((-0.1,1.0))
plt.title(title)
plt.xlabel('X')
plt.ylabel('Y')
plot_figs(x,y,'The Rectilinear Function')
```
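The leaky rectilinear function described above can be sketched the same way; the 0.01 negative-side slope used here is a common default, not a fixed requirement:

```
def leaky_relu(x, alpha=0.01):
    """ReLU for x > 0; a small negative slope (alpha) below 0, so the
    derivative is alpha rather than zero for negative inputs."""
    return x if x > 0 else alpha * x

print(leaky_relu(2.0))    # 2.0
print(leaky_relu(-2.0))   # -0.02
```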
Another widely used activation function is the **logistic** or **sigmoid**. The sigmoid is used as the activation for the output layer of a binary classifier. The general sigmoid function can be written as:
$$\sigma(x) = \frac{L}{1 + e^{-k(x-x_0)}}\\
where\\
L = max\ value\\
k = slope\\
x_0 = sigmoid\ midpoint$$
With $L=1$, $k=1$, and $x_0 = 0$, the logistic function becomes:
$$\sigma(x) = \frac{1}{1 + e^{-x}} = \frac{e^x}{1+e^x}$$
The sigmoid function can asymptotically approach $0$ or $1$, but will never reach these extreme values. However, because of the rapid decrease in the derivative away from $0$ the sigmoid can **saturate** when using gradient-based training. For this reason, the sigmoid is typically not used for hidden layers in neural networks.
When used in a binary classifier a threshold is set to determine if the result is $0$ or $1$. The threshold can be adjusted to bias the result as desired.
The code in the cell below plots the sigmoid function.
```
from math import exp
def sigmoid(x): return exp(x)/(1 + exp(x))
x = [float(x)/100 for x in range(-700, 700)]
y = [sigmoid(y) for y in x]
plot_figs(x,y,'The Logistic Function', figsize = (5,3))
```
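The saturation effect mentioned above is easy to see numerically: the derivative of the sigmoid is $\sigma(x)(1 - \sigma(x))$, which peaks at $x = 0$ and becomes vanishingly small a few units away:

```
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def sigmoid_grad(x):
    """Derivative of the sigmoid: sigma(x) * (1 - sigma(x))."""
    return sigmoid(x) * (1.0 - sigmoid(x))

for x in [0.0, 2.0, 5.0, 10.0]:
    print(x, round(sigmoid_grad(x), 6))
# gradient: 0.25 at x=0, ~0.105 at x=2, ~0.0066 at x=5, ~0.000045 at x=10
```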
The **softmax** function or **normalized exponential function** is used as the output activation function of a multi-class classifier. The softmax function is the multinomial generalization of the sigmoid or logistic function. The probability of each class $j$ is written as:
$$\sigma(z_j) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
The normalization $\sum_{k=1}^K e^{z_k}$ ensures that the probabilities for all classes sum to $1.0$. The class selected by the classifier is the class with the largest value of $\sigma(z_j)$.
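A small NumPy sketch of the softmax (the input scores are arbitrary illustrative values; subtracting the maximum before exponentiating is a standard numerical-stability trick that does not change the result):

```
import numpy as np

def softmax(z):
    """Normalized exponential; subtracting max(z) first avoids overflow."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())        # three probabilities summing to 1
print(np.argmax(p))      # class 0 has the largest probability
```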
### 2.7 Computational example
Now that we have gone through some basic theory for feed-forward networks, let's try a simple example. We will construct a fully connected network to compute this simple function:
$$y = x_1 - x_2$$
****
**Comment.** You have likely noticed that this function is linear and can be computed easily without a neural network. Of course, that is not the point. We use a simple function to make the results easy to understand.
****
****
**Note.** The neural network for this example does not require any bias terms.
****
As a first step, we will create test data for 3 cases: $x_1 > x_2$, $x_1 = x_2$, and $x_1 < x_2$
```
x = [(2,1), (1,1), (1,2)]
for x_in in x:
print(x_in[0] - x_in[1])
```
Now that we have our test data and test cases we can move to the next step. We will create the first weight tensor.
```
import numpy as np
W_1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(W_1)
```
We also need the weight tensor for the second set of weights.
```
W_2 = np.array([1, -1])
print(W_2)
```
We have the input test cases and the network weights. Now, it is time to compute the results and check them. The code in the cell below performs the feed forward network computation. The first function computes the matrix product of the weights with the input tensor and applies a rectilinear activation function. This function computes the output of the hidden layer given the input tensor.
$$h = \sigma(W^1 \cdot x)$$
The second function computes the vector product of the weight tensor with the output tensor of the hidden layer which is the output of the network:
$$o = W^2 \cdot h$$
```
def hidden(x, W):
"""Computes the output of the hidden layer"""
h = np.dot(W, x) # product of weights and input vector
return np.array([reclu(x) for x in h]) # apply activation function and return
def output(h, W):
"""Computes the result for the hidden layer"""
return np.dot(W, h) # dot product of weight vector and input vector
## Run the test cases and check the results
for y in x:
h = hidden(y, W_1)
print(output(h, W_2))
```
Our first fully connected neural network passes all the tests!
Notice that even a network to compute a simple function requires 6 weights. You can see that for more complex functions any practical algorithm must learn a large number of weights. The limitations of NumPy would quickly become evident for large-scale problems involving hundreds of millions of weights.
*******
**Exercise 1:** The first simple neural network worked as intended. But, what happens if you change the activation of the hidden units? In the cell below, modify the code from the previous example to use sigmoidal activation. Use an expanded set of test cases `[(3,1), (2,1), (1,1), (1,2), (1,3)]`.
*******
```
x = [(3,1), (2,1), (1,1), (1,2), (1,3)]
```
How have the output values changed? Did you need to update the output weight tensor values to get a better approximation of the function?
## 3.0 Learning in neural networks: Backpropagation
Now that we have a promising representation, we need to determine if it is trainable. The answer is not only yes, but that we can do so in a computationally efficient manner, using a clever algorithm known as **backpropagation**.
The backpropagation algorithm was developed independently multiple times. The earliest work on this algorithm was by Kelly (1960) in the context of control theory and Bryson (1961) in the context of dynamic programming. Rumelhart, Hinton and Williams (1986) demonstrated empirically that backpropagation can be used to train neural networks. Their paper marks the start of the modern history of neural networks, and set off the first wave of enthusiasm.
The backpropagation algorithm requires several components. First, we need a **loss function** to measure how well our representation matches the function we are trying to learn. Second, we need a way to propagate changes in the representation through the complex network. For this we will use the **chain rule of calculus** to compute **gradients** of the representation. In the general case, this process requires using automatic differentiation methods.
The point of backpropagation is to learn the optimal weights for the neural network. The algorithm proceeds iteratively through a series of small steps. Once we have the gradient of the loss function we can update the tensor of weights.
$$W_{t+1} = W_t + \alpha \nabla_{W} J(W_t) $$
where
$W_t = $ the tensor of weights or model parameters at step $t$.
$\alpha\ = $ step size or learning rate.
$J(W) = $ loss function given the weights.
$\nabla_{W} J(W) = $ gradient of $J$ with respect to the weights $W$.
It should be evident that the backpropagation algorithm is a form of gradient descent: the weights are updated in small steps, following the gradient of $J(W)$ downhill. (The $+$ sign in the update above matches the sign convention used later in this lesson, where the loss carries a leading negative sign; for a conventionally positive loss the update is $W_{t+1} = W_t - \alpha \nabla_{W} J(W_t)$.)
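A single update of this kind is easy to sketch on a toy loss. Here we use the conventional minimizing form $W \leftarrow W - \alpha \nabla_W J(W)$ on the illustrative loss $J(W) = \sum_i W_i^2$ (not the network loss), whose gradient is $2W$:

```
import numpy as np

def grad_J(W):
    """Gradient of the toy loss J(W) = sum(W**2)."""
    return 2.0 * W

W = np.array([1.0, -2.0])   # arbitrary starting weights
alpha = 0.1                  # learning rate / step size
for _ in range(50):
    W = W - alpha * grad_J(W)   # one descent step per iteration
print(W)  # both weights shrink toward the minimum at 0
```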
Finally, we need a way to evaluate the performance of the model. Without evaluation metrics we have no way to assess the performance of a given model, or compare the performance of several models.
In the next sections, we will address each of loss functions, gradient computation and performance measurement.
### 3.1 Loss functions
To train a neural network we must have a **loss function**, also known as a **cost function**. In simple terms, the loss function measures the fit of a model to the training data. The lower the loss, the better the fit.
To train deep learning models **cross entropy** is often used as a loss function. This is an information theoretic measure of model fit. We can understand cross entropy as follows.
First define **Shannon entropy** as:
$$\mathbb{H}(I) = E[I(X)] = E[-ln_b(P(X))] = - \sum_{i=1}^n P(x_i)\ ln_b(P(x_i))$$
Where:
$E[X] = $ the expectation of $X$.
$I(X) = $ the information content of $X$.
$P(X) = $ probability of $X$.
$b = $ base of the logarithm.
This rather abstract formula gives us a way to compute the expected information content of a set of values $X$. The more likely (higher probability) a value of $X$ is, the less informative it is.
To create a loss function from the definition of Shannon entropy we start with the **Kullback-Leibler divergence (KL divergence)** or **relative entropy**. The KL divergence is an information theoretic measure of the difference between two distributions, $P(X)$ and $Q(X)$.
$$\mathbb{D}_{KL}(P \parallel Q) = \sum_{i=1}^n p(x_i)\ ln_b \frac{p(x_i)}{q(x_i)}$$
Ideally, in the case of training a machine learning model we want a distribution $Q(X)$, which is identical to the actual data distribution $P(X)$.
But, you may say, if we could know $P(X)$ why compute $Q(X)$ at all? Fortunately, we do not have to. We can rewrite the KL divergence as:
$$\mathbb{D}_{KL}(P \parallel Q) = \sum_{i=1}^n p(x_i)\ ln_b p(x_i) - \sum_{i=1}^n p(x_i)\ ln_b q(x_i)$$
Since $P(X)$ is fixed and we wish to find $Q(X)$ when we train our model, we can minimize the term on the right, which is the **cross entropy** defined as:
$$\mathbb{H}(P,Q) = - \sum_{i=1}^n p(x_i)\ ln_b q(x_i)$$
From the formulation of KL divergence above you can see the following.
$$\mathbb{D}_{KL}(P \parallel Q) = \mathbb{H}(P,Q) - \mathbb{H}(P)\\
\mathbb{D}_{KL}(P \parallel Q) = Cross\ Entropy(P,Q) - Entropy(P)$$
Thus, we can minimize divergence by minimizing cross entropy. This idea is both intuitive and computationally attractive. The closer the estimated distribution $q(X)$ is to the distribution of the true underlying process $p(X)$, the lower the cross entropy and the lower the KL divergence.
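These quantities can be checked numerically for two small, arbitrarily chosen discrete distributions; the identity $\mathbb{D}_{KL}(P \parallel Q) = \mathbb{H}(P,Q) - \mathbb{H}(P)$ holds, and the cross entropy is smallest when the model distribution matches the true one:

```
import numpy as np

P = np.array([0.7, 0.2, 0.1])   # "true" distribution (illustrative)
Q = np.array([0.5, 0.3, 0.2])   # model distribution (illustrative)

def entropy(p):
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))

def kl(p, q):
    return np.sum(p * np.log(p / q))

print(kl(P, Q))                                       # > 0 since P != Q
print(kl(P, Q) - (cross_entropy(P, Q) - entropy(P)))  # ~0: identity holds
print(cross_entropy(P, P) <= cross_entropy(P, Q))     # True: minimized at Q = P
```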
In general we will not know $p(X)$. In fact, if we did, why would we need to solve a training problem? So, we can use the following approximation.
$$\mathbb{H}(P,Q) \approx - \frac{1}{N} \sum_{i=1}^N ln_b q(x_i)$$
You may notice that this approximation, using the average log likelihood, is equivalent to a maximum likelihood estimator (MLE).
### 3.2 Cross Entropy for Gaussian Likelihood
Let's look at a specific case of a model with Gaussian likelihood. What is the cross entropy? We can start by thinking about the definition of likelihood.
$$p(data|model) = p(data|f(\theta)) = p(x_i|f(\hat{\mu},\sigma))= \frac{1}{\sqrt{2 \pi \sigma^2}} e^{\frac{-(x_i - \hat{\mu})^2}{2 \sigma^2}}$$
We take the negative logarithm of this likelihood model.
$$-log\big(p(data|model) \big) = \frac{1}{2} log( 2 \pi \sigma^2) + \frac{(x_i - \hat{\mu})^2}{2 \sigma^2}$$
Now, the first term on the right is a constant, as is the denominator of the second term if we assume known variance. Since our goal is to minimize cross entropy, we can eliminate these quantities and be left with just the following.
$$(x_i - \hat{\mu})^2$$
This is one issue we need to deal with. Our formulation of cross entropy involves the unknown true distribution of the underlying process $p(X)$. However, since $p(x_i)$ is fixed but unknown we can just write the following.
$$min \big( \mathbb{H}(P,Q) \big) \propto argmin_{\hat{\mu}} \big( \sum_{i=1}^n (x_i - \hat{\mu})^2 \big)$$
This is just the definition of a Maximum Likelihood Estimator (MLE) for the least squares problem! In fact, since the cross entropy is computed using the negative log likelihood, it will always be minimized by the MLE.
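A quick numerical check, with arbitrary illustrative data: minimizing the sum of squared deviations over candidate values of $\hat{\mu}$ recovers the sample mean, which is the Gaussian MLE:

```
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0])   # illustrative data

def sse(mu):
    """Sum of squared errors -- proportional to the Gaussian negative log likelihood."""
    return np.sum((x - mu) ** 2)

# Evaluate the loss on a fine grid of candidate means and take the minimizer
grid = np.linspace(0, 6, 601)
mu_hat = grid[np.argmin([sse(m) for m in grid])]
print(mu_hat, x.mean())  # both equal the sample mean 3.0
```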
### 3.3 Chain rule of calculus
Key to the backpropagation algorithm is the chain rule of calculus; not to be confused with the chain rule of probability. The chain rule allows us to back propagate gradients through an arbitrarily complex graph of functions.
Now, suppose there is a function $y = g(x)$, and another function $z = f(y) = f(g(x))$. How do we compute the derivative of $z$ with respect to $x$? Applying the chain rule we get:
$$\frac{dz}{dx} = \frac{dz}{dy}\frac{dy}{dx}$$
Consider $x \in \mathbb{R}^n$, $y = g(x) \in \mathbb{R}^M$ and $z = f(y) \in \mathbb{R}$. The chain rule becomes:
$$\frac{\partial z}{\partial x_i} = \sum_{j \in M} \frac{\partial z}{\partial y_j}\frac{\partial y_j}{\partial x_i}$$
Which we can rewrite as
$$\nabla_{x}z = \Big( \frac{\partial y}{\partial x} \Big)^T \nabla_{y}z$$
Here, $\frac{\partial y}{\partial x}$ is the $M \times n$ **Jacobian matrix** of partial derivatives. The transposed Jacobian is multiplied by the gradient with respect to $y$, $\nabla_{y}z$. You can think of the Jacobian as transforming the gradient with respect to $y$ into what we really want, the gradient with respect to $x$.
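The vector chain rule can be verified numerically with an arbitrary illustrative pair of functions $g: \mathbb{R}^2 \to \mathbb{R}^2$ and $f: \mathbb{R}^2 \to \mathbb{R}$, comparing the Jacobian formula against a finite-difference approximation of $\nabla_x z$:

```
import numpy as np

def g(x):                         # R^2 -> R^2 (illustrative)
    return np.array([x[0] * x[1], x[0] + 3.0 * x[1]])

def f(y):                         # R^2 -> R (illustrative)
    return y[0] ** 2 + y[1]

def jacobian_g(x):                # analytic Jacobian dy/dx (2 x 2)
    return np.array([[x[1], x[0]],
                     [1.0,  3.0]])

def grad_f(y):                    # analytic gradient of z with respect to y
    return np.array([2.0 * y[0], 1.0])

x = np.array([1.0, 2.0])
chain = jacobian_g(x).T @ grad_f(g(x))   # chain rule: (dy/dx)^T * grad_y z

# central-difference check of dz/dx, component by component
eps = 1e-6
fd = np.array([
    (f(g(x + eps * np.eye(2)[i])) - f(g(x - eps * np.eye(2)[i]))) / (2 * eps)
    for i in range(2)
])
print(chain, fd)  # the two agree to within finite-difference error
```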
### 3.4 Example of finding a gradient.
Let's work out backpropagation for a very simple neural network with a single hidden layer with two units. This neural network, including the loss function, is shown in Figure 3.1 below. This network has been highly simplified. There are only three layers: an input layer, a two-unit hidden layer with no bias terms, and a single-unit output layer. There are only two weight tensors for this network. Further, the hidden units use rectilinear activation and the output unit uses linear activation. These activation functions have simple partial derivatives.
<img src="img/LossGraph.jpg" alt="Drawing" style="width:600px; height:300px"/>
<center>**Figure 3.1: Simple single-hidden-layer neural network with loss function**</center>
First, we need to work out the forward propagation relationships. We can compute the outputs of the hidden layer as follows.
$$S_{\{1,2\}} = \sigma_h \big( W^1 \cdot X_{\{1,2\}} \big) = \sigma \big( \sum_j W^1_{i,j} x_j \big)$$
In the same way, the result from the output layer can be computed as follows, since the activation function for this layer is linear.
$$S_3 = W^2 \cdot S_{\{1,2\}} = \sum_i W^2_i \sigma \big( \sum_j W^1_{i,j} x_j \big)$$
To perform backpropagation, we need to fill out the gradient vector by computing $\frac{\partial J(W)}{\partial W}$ for each weight in the model.
$$\frac{\partial J(W)}{\partial W} =
\begin{bmatrix}
\frac{\partial J(W)}{\partial W^1_{11}} \\
\frac{\partial J(W)}{\partial W^1_{12}} \\
\frac{\partial J(W)}{\partial W^1_{21}} \\
\frac{\partial J(W)}{\partial W^1_{22}} \\
\frac{\partial J(W)}{\partial W^2_{1}} \\
\frac{\partial J(W)}{\partial W^2_{2}}
\end{bmatrix}$$
To keep things simple in this example we will use an unnormalized Gaussian log-likelihood as the objective; maximizing it is equivalent to minimizing a squared error loss.
$$J(W) = - \frac{1}{2} \sum_{k=1}^n (y_k - S_{3,k})^2 $$
Where:
$y_k = $ the label for the $k$th case.
$\hat{y}_k = S_{3,k} =$ the output of the network for the $k$th case.
We want to compute the gradients with respect to the input and output weight tensors:
$$\frac{\partial J(W)}{\partial W^1}, \ \frac{\partial J(W)}{\partial W^2}$$
Let's start with the easier case of the partial derivatives with respect to the output tensor. We can apply the chain rule as follows:
$$\frac{\partial J(W)}{\partial W^2_j} = \frac{\partial J(W)}{\partial S_{3,k}} \frac{\partial S_{3,k}}{\partial W^2_j}$$
The first partial derivative of the chain is:
$$\frac{\partial J(W)}{\partial S_{3,k}} = \frac{\partial - \frac{1}{2} (y_k - S_{3,k})^2} {\partial S_{3,k}} = y_k - S_{3,k} $$
And, the second partial derivative in the chain, given the linear activation of the output unit, becomes:
$$\frac{\partial S_{3,k}}{\partial W^2_j} = \frac{\partial W^2_j S_{j,k}}{\partial W^2_j} = S_{j,k}, \ j \in \{1,2\}$$
Multiplying the two components of the chain gives us:
$$\frac{\partial J(W)}{\partial W^2_j} = S_{j,k} (y_k - S_{3,k}), \ j \in \{1,2\} $$
The partial derivatives with respect to the input tensor are a bit more complicated. To apply the chain rule we must work backwards from the loss function. This gives the following chain:
$$\frac{\partial J(W)}{\partial W^1_{i,j}} = \frac{\partial J(W)}{\partial S_{3}} \frac{\partial S_{3}}{\partial S_{j}} \frac{\partial S_{j}}{\partial W^1_{i,j}}$$
First, we find the right most partial derivative in our chain:
\begin{equation}
\frac{\partial S_j}{\partial W^1_{i,j}} =
\begin{cases}
\frac {\partial W^1_{i,j} x_{i,k}}{\partial W^1_{i,j}}, & \text{if $S_j>0$} \\
0, & \text{otherwise}
\end{cases}
\end{equation}
Which, given the ReLU activation (whose derivative is 1 wherever the unit is active), results in:
\begin{equation}
\frac{\partial S_j}{\partial W^1_{i,j}} =
\begin{cases}
x_{i,k}, & \text{if $S_j>0$} \\
0, & \text{otherwise}
\end{cases}
\end{equation}
The middle partial derivative is simple, since the output unit has a linear activation:
$$\frac{\partial S_{3}}{\partial S_{j}} = W^2_j$$
We have already computed $\frac{\partial J(W)}{\partial S_{3}}$. Multiplying all three partial derivatives we find:
\begin{equation}
\frac{\partial J(W)}{\partial W^1_{i,j}} =
\begin{cases}
(y_k - S_{3,k}) \, W^2_j \, x_{i,k}, & \text{if $S_j>0$} \\
0, & \text{otherwise}
\end{cases}
\end{equation}
Where $S_3$ and $S_{\{1,2\}}$ are computed using the relationships given above.
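As an independent sanity check on gradients for this kind of two-unit ReLU network, the sketch below compares analytic gradients against central finite differences. The weights and input are made up, and it minimizes the loss $L = \frac{1}{2}(y - S_3)^2$, so its gradients carry the opposite sign of $\partial J / \partial W$ above.

```python
import numpy as np

# W1[i, j] connects input j to hidden unit i; no bias terms; linear output.
relu = lambda s: np.maximum(s, 0.0)

def forward(W1, W2, x):
    S = relu(W1 @ x)          # hidden layer, ReLU activation
    return S, W2 @ S          # output unit, linear activation

W1 = np.array([[0.5, -0.3], [0.8, 0.2]])
W2 = np.array([1.0, -0.5])
x = np.array([1.0, 2.0])
y = 1.0

S, S3 = forward(W1, W2, x)
err = y - S3
grad_W2 = -err * S                          # dL/dW2_i = -(y - S3) * S_i
grad_W1 = -err * np.outer(W2 * (S > 0), x)  # dL/dW1_ij = -(y - S3) * W2_i * x_j if S_i > 0

# numeric check of one W1 entry via central differences
eps = 1e-6
def loss_at(W1_):
    return 0.5 * (y - forward(W1_, W2, x)[1]) ** 2
W1p, W1m = W1.copy(), W1.copy()
W1p[1, 0] += eps
W1m[1, 0] -= eps
fd = (loss_at(W1p) - loss_at(W1m)) / (2 * eps)
print(grad_W1[1, 0], fd)
```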
## 4.0 Network for XOR Function
In an earlier section the work of Minsky and Papert was mentioned. A key failure of early neural network architectures, such as the perceptron, was their inability to learn functions that are not linearly separable. The **exclusive OR** or **XOR** is just such a function. The truth table of the XOR looks like this:
| Input 1 | Input 2 | Output |
|---------|---------|--------|
| 0 | 0 | 0|
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
In words, the XOR function outputs 0 if both inputs are the same and 1 if they differ; hence the name exclusive OR.
****************************
**Exercise 2:** In the cell below, you will create code for a neural network which performs the XOR operation. The following elements are required:
1. An input weight tensor
2. A hidden layer with two units using ReLU activation
3. An output weight tensor
4. An output unit with linear activation
There are 4 possible test (input) cases. Test your code for all cases.
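If you want to verify that such a network exists before writing your own, here is one hand-picked weight assignment (an assumption, not the unique solution) that meets the required structure:

```python
import numpy as np

# With W1 = [[1,-1],[-1,1]] the hidden layer computes
# h = (ReLU(x1 - x2), ReLU(x2 - x1)), and the linear output w2 = [1, 1]
# sums them, giving |x1 - x2|, which equals XOR on binary inputs.
W1 = np.array([[1.0, -1.0], [-1.0, 1.0]])   # input weight tensor
w2 = np.array([1.0, 1.0])                   # output weight tensor

def xor_net(x):
    h = np.maximum(W1 @ np.asarray(x, dtype=float), 0.0)  # hidden ReLU units
    return w2 @ h                                         # linear output unit

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(x))   # -> 0.0, 1.0, 1.0, 0.0
```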
## 5.0 Performance Metrics
Now that we have the components for training a basic neural network in place, we need a way to evaluate performance. It turns out there is nothing special about evaluating neural network models compared to other machine learning models.
For regression models, one typically uses standard metrics such as root mean squared error (RMSE) and mean absolute error (MAE).
For classification models, one typically uses standard metrics including the confusion matrix, accuracy, precision, and recall.
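As a minimal sketch, these metrics can be computed directly with numpy (the predictions below are made up):

```python
import numpy as np

# Regression metrics
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
mae = np.mean(np.abs(y_true - y_pred))

# Binary classification metrics from confusion-matrix counts
yc_true = np.array([0, 1, 1, 0, 1])
yc_pred = np.array([0, 1, 0, 0, 1])
tp = np.sum((yc_true == 1) & (yc_pred == 1))
fp = np.sum((yc_true == 0) & (yc_pred == 1))
fn = np.sum((yc_true == 1) & (yc_pred == 0))
tn = np.sum((yc_true == 0) & (yc_pred == 0))
accuracy = (tp + tn) / len(yc_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(rmse, mae, accuracy, precision, recall)
```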
```
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import time
import seaborn as sns
sns.set()
dataset = pd.read_csv('HistoricalQuotes.csv')
del dataset['date']
del dataset['volume']
dataset.head()
# scale prices down by the smallest power of ten that brings the first value below 10
count = 0; temp = dataset.iloc[0, 0]
while temp > 10:
    temp /= 10.0; count += 1
count = 10.0 ** count
stockvals = dataset.iloc[:, :].values / count
stockvals
class Model:
def __init__(self, learning_rate, num_layers, size, size_layer):
def lstm_cell():
return tf.nn.rnn_cell.LSTMCell(size_layer, state_is_tuple = False)
self.rnn_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple = False)
self.X = tf.placeholder(tf.float32, (None, None, size))
self.Y = tf.placeholder(tf.float32, (None, None, size))
self.hidden_layer = tf.placeholder(tf.float32, (None, num_layers * 2 * size_layer))
self.outputs, self.last_state = tf.nn.dynamic_rnn(self.rnn_cells, self.X, initial_state = self.hidden_layer, dtype = tf.float32)
self.rnn_W = tf.Variable(tf.random_normal((size_layer, size)))
self.rnn_B = tf.Variable(tf.random_normal((size,)))
outputs_reshaped = tf.reshape(self.outputs, [-1, size_layer])
y_batch_long = tf.reshape(self.Y, [-1, size])
self.logits = (tf.matmul(outputs_reshaped, self.rnn_W) + self.rnn_B)
self.cost = tf.square(y_batch_long - self.logits)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
def visualize_stock(stockvals, modelnn, num_layers, size_layer):
current_palette = sns.color_palette("Paired", 8)
day, day_future, data = [], [], []
data.append(stockvals[0, :]); day.append(0); day_future.append(0)
init_value = np.zeros((1, num_layers * 2 * size_layer))
# predict 10 days forward
for x in range(stockvals.shape[0] + 10):
batch = np.zeros((1, 1, stockvals.shape[1]))
try:
batch[0, :, :] = stockvals[x: x + 1, :]
day_future.append(x + 1)
day.append(x + 1)
except:
batch[0, :, :] = probs[0, :]
day_future.append(x + 1)
last_state, probs = sess.run([modelnn.last_state, modelnn.logits], feed_dict={modelnn.X: batch, modelnn.hidden_layer: init_value})
init_value = last_state
data.append(probs[0, :].tolist())
fig = plt.figure(figsize = (10,10))
ax = plt.subplot(111)
x_true = stockvals
x_predict = np.array(data)
ax.plot(day[:-1], x_true[:, 0], label = 'true close', color = current_palette[0])
ax.plot(day_future, x_predict[:, 0], label = 'predict close', color = current_palette[1])
ax.plot(day[:-1], x_true[:, 1], label = 'true open', color = current_palette[2])
ax.plot(day_future, x_predict[:, 1], label = 'predict close', color = current_palette[3])
ax.plot(day[:-1], x_true[:, 2], label = 'true high', color = current_palette[4])
ax.plot(day_future, x_predict[:, 2], label = 'predict close', color = current_palette[5])
ax.plot(day[:-1],x_true[:, 3], label = 'true low', color = current_palette[6])
ax.plot(day_future, x_predict[:, 3], label = 'predict close', color = current_palette[7])
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])
ax.legend(loc = 'upper center', bbox_to_anchor= (0.5, -0.05), fancybox = True, shadow = True, ncol = 5)
plt.title('stock market')
plt.xlabel('day')
plt.ylabel('value')
plt.show()
num_layers = 2
size_layer = 128
epoch = 1000
sess = tf.InteractiveSession()
modelnn = Model(0.001, num_layers, stockvals.shape[1], size_layer)
sess.run(tf.global_variables_initializer())
for i in range(epoch):
last_time = time.time()
init_value = np.zeros((1, num_layers * 2 * size_layer))
for x in range(stockvals.shape[0] - 1):
batch = np.zeros((1, 1, stockvals.shape[1]))
batch_y = np.zeros((1, 1, stockvals.shape[1]))
batch[0, :, :] = stockvals[x: x + 1, :]
batch_y[0, :, :] = stockvals[x + 1: x + 2, :]
last_state, _, loss = sess.run([modelnn.last_state, modelnn.optimizer, modelnn.cost], feed_dict = {modelnn.X: batch, modelnn.Y: batch_y, modelnn.hidden_layer: init_value})
init_value = last_state
diff = time.time() - last_time
if (i + 1) % 100 == 0:
print("epoch: " + str(i + 1) + ", loss: " + str(np.mean(loss)) + ", speed: " + str(diff) + " s / epoch")
visualize_stock(stockvals, modelnn, num_layers, size_layer)
```
# Summarize results at cell-type level for the purpose of contrasting results across species
1. Proportion of FDR < 10% at various parameter for scDRS.
2. Cell type level p-value for scDRS
3. Geary's C statistics for 10kb, 1000 genes default settings.
4. LDSC-SEG p-value
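As a minimal illustration of the p-value aggregation used below (combining per-tissue p-values with Fisher's method, then converting to Benjamini-Hochberg FDR), with made-up numbers:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# hypothetical per-tissue p-values for one cell type / trait pair
per_tissue_pvals = [0.01, 0.2, 0.04]

# combine across tissues with Fisher's method (scipy's default)
_, combined_p = stats.combine_pvalues(per_tissue_pvals)

# convert a set of combined p-values to Benjamini-Hochberg FDRs
fdr = multipletests([combined_p, 0.5, 0.03], method="fdr_bh")[1]
print(combined_p, fdr)
```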
```
%load_ext lab_black
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import os, sys
import scTRS.data_loader as dl
import pandas as pd
import numpy as np
from os.path import join
from statsmodels.stats.multitest import multipletests
import yaml
import scanpy as sc
sys.path.append("/n/home12/khou/holystore/")
import paper_utils
import glob
from scipy import stats
from scipy.stats import rankdata
DATA_PATH = "/n/holystore01/LABS/price_lab/Users/mjzhang/scDRS_data"
dict_dset = {
"tms_facs": join(
DATA_PATH, "tabula_muris_senis/tabula-muris-senis-facs-official-raw-obj.h5ad"
),
}
dict_df_obs = {k: sc.read_h5ad(dict_dset[k]).obs for k in dict_dset}
URL_SUPP_TABLE = "supp_tables.xlsx"
df_trait_info = pd.read_excel(
URL_SUPP_TABLE,
sheet_name=0,
)
df_celltype_info = pd.read_excel(
URL_SUPP_TABLE,
sheet_name=1,
)
def calc_mc_pval(method, list_trait, stats="percentile_95"):
assert method in ["top1000-zscore.vs", "top1000-uniform.vs", "top2000-zscore.vs"]
dset = "tms_facs"
SCORE_PATH = f"00_calc_score/tms_facs/score_file/{method}"
dict_dset = {
"tms_facs": join(
DATA_PATH,
"tabula_muris_senis/tabula-muris-senis-facs-official-raw-obj.h5ad",
),
"tms_droplet": join(
DATA_PATH,
"tabula_muris_senis/tabula-muris-senis-droplet-official-raw-obj.h5ad",
),
"ts_facs": join(
DATA_PATH, "single_cell_data/tabula_sapiens/obj_smartseq2_raw.h5ad"
),
}
df_obs = sc.read_h5ad(dict_dset[dset]).obs
df_obs.cell_ontology_class = [
c.replace(" ", "_").replace(",", "") for c in df_obs.cell_ontology_class
]
assert stats in ["max", "percentile_99", "percentile_95", "percentile_90"]
col_ctrl_norm_score = [f"ctrl_norm_score_{i}" for i in range(1000)]
dict_mc_pval = {}
for trait in list_trait:
df_tmp = pd.read_csv(
join(SCORE_PATH, f"{trait}.full_score.gz"), sep="\t", index_col=0
)
df_tmp = pd.merge(
df_obs[["cell_ontology_class"]], df_tmp, left_index=True, right_index=True
)
if stats == "max":
stats_func = lambda x: np.max(x)
elif stats == "percentile_99":
stats_func = lambda x: np.percentile(x, 99)
elif stats == "percentile_95":
stats_func = lambda x: np.percentile(x, 95)
elif stats == "percentile_90":
stats_func = lambda x: np.percentile(x, 90)
else:
raise NotImplementedError
df_tmp = df_tmp.groupby("cell_ontology_class").agg(
{
trait: lambda x: stats_func(x)
for trait in ["norm_score"] + col_ctrl_norm_score
}
)
rank_score = rankdata(df_tmp.values, axis=1)
mc_pval = pd.Series(
1 - (rank_score[:, 0] - 1) / rank_score.shape[1], index=df_tmp.index
)
dict_mc_pval[trait] = mc_pval
df_mc_pval = pd.DataFrame(dict_mc_pval).T
df_mc_pval.columns.name = None
return df_mc_pval
import submitit
executor = submitit.AutoExecutor(folder="~/submitit/")
executor.update_parameters(timeout_min=45, mem_gb=12, slurm_partition="shared")
dict_jobs = {}
for method in ["top1000-zscore.vs", "top1000-uniform.vs", "top2000-zscore.vs"]:
jobs = executor.map_array(
lambda x: calc_mc_pval(method, x, stats="percentile_95"),
np.array_split(df_trait_info["Trait_Identifier"].values, 30),
)
df_pval = pd.concat(j.result() for j in jobs)
df_pval.to_csv(f"data/summary_ct/df_pval.{method}.csv")
```
# Summarize FDR proportion
```
for method in ["top1000-zscore.vs", "top1000-uniform.vs", "top2000-zscore.vs"]:
dset = "tms_facs"
SCORE_PATH = f"00_calc_score/tms_facs/score_file/{method}"
dict_df = dict()
for trait in df_trait_info["Trait_Identifier"].values:
df_temp = pd.read_csv(
join(SCORE_PATH, f"{trait}.score.gz"), sep="\t", index_col=0
)
dict_df[trait] = df_temp["pval"]
df_trs_pval = pd.DataFrame(dict_df)
df_obs = dict_df_obs[dset]
df_obs.cell_ontology_class = [
c.replace(" ", "_").replace(",", "") for c in df_obs.cell_ontology_class
]
df_fdr_prop = paper_utils.agg_trs_pval(
df_obs=df_obs,
df_pval=df_trs_pval,
stats="fdr_prop",
groupby="cell_ontology_class",
fdr_prop_threshold=0.1,
)
# adjust order
df_fdr_prop.to_csv(f"data/summary_ct/drs_fdr_prop.{method}.csv")
```
# Summarize Geary's C
```
trait_list = df_trait_info.Trait_Identifier.values
def summarize_gearysc(trait_list, tissue_list, rls_dir):
dict_df_gearysc = dict()
for trait in trait_list:
df_trait = []
for tissue in tissue_list:
df_tmp = pd.read_csv(join(rls_dir, f"{trait}.{tissue}.csv"), index_col=0)
df_tmp.index = [c.replace(" ", "_").replace(",", "") for c in df_tmp.index]
df_tmp.index = f"{tissue}" + "." + df_tmp.index
df_trait.append(df_tmp)
df_trait = pd.concat(df_trait)
dict_df_gearysc[trait] = df_trait
# use p-values to assign significance
df_tmp = pd.concat(
[dict_df_gearysc[trait]["pval"] for trait in dict_df_gearysc], axis=1
).dropna()
df_tmp.columns = [trait for trait in dict_df_gearysc]
df_tmp["tissue"] = [i.split(".")[0] for i in df_tmp.index]
df_tmp["ct"] = [i.split(".")[1] for i in df_tmp.index]
# Geary's C significance: p-values are first computed within each tissue,
# then combined across tissues for each cell type (Fisher's method),
# and finally converted to an FDR for each cell-type / trait pair
df_gearysc_meta_fdr = {}
# calculate p-value
for ct, df_ct in df_tmp.groupby("ct"):
df_gearysc_meta_fdr[ct] = [
stats.combine_pvalues(df_ct[col])[1] for col in trait_list
]
df_gearysc_meta_fdr = pd.DataFrame(df_gearysc_meta_fdr, index=trait_list).T
# convert to FDR
df_gearysc_meta_fdr = pd.DataFrame(
multipletests(df_gearysc_meta_fdr.values.flatten(), method="fdr_bh")[1].reshape(
df_gearysc_meta_fdr.shape
),
index=df_gearysc_meta_fdr.index,
columns=df_gearysc_meta_fdr.columns,
).T
return df_gearysc_meta_fdr
for dset in ["tms_facs", "ts_facs", "tms_droplet"]:
df_gearysc = summarize_gearysc(
trait_list,
dict_df_obs[dset].tissue.unique(),
f"02_calc_gearysc/gearysc/{dset}/",
)
df_gearysc.to_csv(f"data/summary_ct/df_gearysc_fdr.{dset}.csv")
```
# LDSC-SEG
```
ldsc_result_dir = "01_calc_ldsc/out/cts_result/"
df_ldsc_pval = pd.DataFrame(index=df_fdr_prop.index, columns=df_fdr_prop.columns)
for trait in df_fdr_prop.index:
rls = pd.read_csv(
join(ldsc_result_dir, f"{trait}.cell_type_results.txt"),
sep="\t",
)
df_ldsc_pval.loc[trait, :] = rls.set_index("Name").loc[
df_ldsc_pval.columns, "Coefficient_P_value"
]
df_ldsc_pval = df_ldsc_pval.astype(float)
df_ldsc_pval.to_csv("data/summary_ct/ldsc_seg_pval.csv")
```
# Exp 95 analysis
See `./informercial/Makefile` for experimental
details.
```
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
def plot_meta(env_name, result):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_E = result["scores_E"]
scores_R = result["scores_R"]
values_R = result["values_R"]
values_E = result["values_E"]
ties = result["ties"]
policies = result["policies"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# Policy
policies = np.asarray(policies)
episodes = np.asarray(episodes)
plt.subplot(grid[1, 0])
m = policies == 0
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_E$", color="purple")
m = policies == 1
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_R$", color="grey")
plt.ylim(-.1, 1+.1)
plt.ylabel("Controlling\npolicy")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# score
plt.subplot(grid[2, 0])
plt.scatter(episodes, scores_E, color="purple", alpha=0.4, s=2, label="E")
plt.plot(episodes, scores_E, color="purple", alpha=0.4)
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[3, 0])
plt.scatter(episodes, values_E, color="purple", alpha=0.4, s=2, label="$Q_E$")
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Ties
plt.subplot(grid[4, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Ties
plt.subplot(grid[5, 0])
plt.scatter(episodes, ties, color="black", alpha=.5, s=2, label="$\pi_{tie}$ : 1\n $\pi_\pi$ : 0")
plt.ylim(-.1, 1+.1)
plt.ylabel("Ties index")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_epsilon(env_name, result):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_R = result["scores_R"]
values_R = result["values_R"]
epsilons = result["epsilons"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Decay
plt.subplot(grid[4, 0])
plt.scatter(episodes, epsilons, color="black", alpha=.5, s=2)
plt.ylabel("$\epsilon_R$")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_critic(critic_name, env_name, result):
# -
env = gym.make(env_name)
best = env.best
# Data
critic = result[critic_name]
arms = list(critic.keys())
values = list(critic.values())
# Plotz
fig = plt.figure(figsize=(8, 3))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0])
plt.scatter(arms, values, color="black", alpha=.5, s=30)
plt.plot([best]*10, np.linspace(min(values), max(values), 10), color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Arm")
```
# Load and process data
```
data_path ="/Users/qualia/Code/infomercial/data/"
exp_name = "exp95"
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))
# print(sorted_params.keys())
best_params = sorted_params[0]
tie_threshold=best_params['tie_threshold']
sorted_params
```
# Performance
of best parameters
```
env_name = 'BanditHardAndSparse121-v0'
num_episodes = 605000
# Run w/ best params
result = meta_bandit(
env_name=env_name,
num_episodes=num_episodes,
lr_R=best_params["lr_R"],
tie_threshold=best_params["tie_threshold"],
seed_value=2,
)
print(best_params)
plot_meta(env_name, result=result)
plot_critic('critic_R', env_name, result)
plot_critic('critic_E', env_name, result)
```
# Sensitivity
to parameter choices
```
total_Rs = []
ties = []
lrs_R = []
lrs_E = []
trials = list(sorted_params.keys())
for t in trials:
total_Rs.append(sorted_params[t]['total_R'])
ties.append(sorted_params[t]['tie_threshold'])
lrs_R.append(sorted_params[t]['lr_R'])
# Init plot
fig = plt.figure(figsize=(5, 18))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(trials, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("total R")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.scatter(trials, ties, color="black", alpha=1, s=6, label="total R")
# plt.yscale('log')
plt.xlabel("Sorted params")
plt.ylabel("Tie threshold")
_ = sns.despine()
plt.subplot(grid[2, 0])
plt.scatter(trials, lrs_R, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("lr_R")
_ = sns.despine()
plt.subplot(grid[3, 0])
plt.scatter(lrs_R, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("lrs_R")
plt.ylabel("total_Rs")
_ = sns.despine()
plt.subplot(grid[4, 0])
plt.scatter(ties, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("$\epsilon$")
plt.ylabel("total_Rs")
# plt.xlim(0, 0.002)
_ = sns.despine()
```
# Parameter correlations
```
from scipy.stats import spearmanr
spearmanr(ties, lrs_R)
```
# Distributions
of parameters
```
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(3, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(ties, color="black")
plt.xlabel("tie threshold")
plt.ylabel("Count")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.hist(lrs_R, color="black")
plt.xlabel("lr_R")
plt.ylabel("Count")
_ = sns.despine()
```
of total reward
```
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
# plt.xlim(0, 10)
_ = sns.despine()
```
# Numpy Practice
- Author: Alireza Dirafzoon
- Contributions are welcome :)
```
import numpy as np
### array()
a = [1, 2, 3]
x = np.array(a)
x = np.asarray(a)
x
x.tolist()
x.astype(np.float32)
### arange()
np.arange(3)
np.arange(0,7,2)
np.arange(3, -1, -1)
### zeros, ones, eye, linspace
np.zeros(3)
np.zeros((3,3))
np.ones(3)
np.ones((3,3))
np.eye(3)
np.linspace(0,10,3) # 3 points, from 0 to 10, inclusive
np.linspace(0,7,3)
### np.random
# random.rand(): uniform distr over [0, 1)
np.random.rand()
np.random.rand(2)
np.random.rand(2,3)
# random.randn(): normal distr.
np.random.randn(2,3)
# random.randint: int in [low,high) / [0, high)
np.random.randint(1,4)
np.random.randint(1,4, (2,2))
np.random.randint(4)
## array methods
### reshape
a = np.arange(1,7)
a = a.reshape(2,3)
a
### max, min, argmax, argmin
a.max(axis = 0)
a.argmax(axis=0)
a.max(axis = 1)
a.max()
a.argmax()
### shape and dtype
a.shape
a.dtype
a.nbytes
### 2D array/matrix
m = np.arange(12).reshape(3,4)
m
m.mean(axis=0)
m.std(axis=1)
m.T # or m.transpose
m.reshape((-1,12))
m.reshape(-1)
m.ravel()
## indexing and selection
m[1][2]
m[1,2]
m[1,:]
## broadcasting
a = np.zeros(10)
a[5:8] = 5 # note we can't do this with list
a
suba = a[:3]
suba[:] = 2
suba
a # note that suba is not a copy, just points to a slice of a
suba = np.copy(a[:3])
suba
m = np.zeros((4,4))
m[1] = 2
m
# fancy indexing
m[[1,3]]
## selection
a = np.arange(4)
a > 2 # note we can't do this with list
(a == 2).astype(np.int16).sum()
a.nonzero()
## Operations
a, b = np.arange(0,3), np.arange(3,6)
a + b
a - b
a * b # element-wise
a/b # element-wise
np.multiply(a,b) # element-wise
# dot product of arrays
np.dot(a,b)
# cross product
np.cross(a,b)
# matrix multiplication
np.matmul(a,b.T)
a = np.arange(9).reshape((3,3)) #2D
b = np.array([0,1,0]) # 1D
print(a,b)
np.matmul(a,b) # 2D * 1D -> broadcasts the 1D array, treating it as a col
np.power(a,2) # element-wise
np.power(a,b) # element-wise
np.mod(a,b)
np.sqrt(a)
np.exp(a)
np.sin(a)
np.log(a)
## Kmeans
def assign_clusters(X, mu):
    # assign each point to the index of its nearest center
    d = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
    return d.argmin(axis=1)
def calculate_centers(X, clusters, k):
    # recompute each center as the mean of its assigned points
    return np.array([X[clusters == j].mean(axis=0) for j in range(k)])
def kmeans(X, k, max_it=100):
    # initialize centers with k distinct data points
    mu = X[np.random.choice(len(X), k, replace=False)]
    for _ in range(max_it):
        # assign clusters to centers
        clusters = assign_clusters(X, mu)
        # calculate new centers
        mu = calculate_centers(X, clusters, k)
    return mu, clusters
x1 = np.add(np.random.randn(10,2), 5)
x2 = np.add(np.random.randn(10,2), -5)
X = np.concatenate([x1,x2], axis=0)
mu, clusters = kmeans(X,2)
mu
# nearest center to a point x: (index, distance)
mu = np.random.rand(5,2)
x = np.random.rand(2)
min(((i, np.linalg.norm(x - mu_i)) for i, mu_i in enumerate(mu)), key=lambda t: t[1])
```
# Tutorial 2: Training a spiking neural network on a simple vision dataset
Friedemann Zenke (https://fzenke.net)
> For more details on surrogate gradient learning, please see:
> Neftci, E.O., Mostafa, H., and Zenke, F. (2019). Surrogate Gradient Learning in Spiking Neural Networks.
> https://arxiv.org/abs/1901.09948
In Tutorial 1 we have seen how to train a simple multi-layer spiking neural network on a small synthetic dataset. In this tutorial we will apply what we have learned so far to a slightly larger dataset.
Specifically, we will use the [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist).
```
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import seaborn as sns
import torch
import torch.nn as nn
import torchvision
# The coarse network structure is dictated by the Fashion MNIST dataset.
nb_inputs = 28*28
nb_hidden = 100
nb_outputs = 10
time_step = 1e-3
nb_steps = 100
batch_size = 256
dtype = torch.float
device = torch.device("cpu")
# Uncomment the line below to run on GPU
# device = torch.device("cuda:0")
# Here we load the Dataset
root = os.path.expanduser("~/data/datasets/torch/fashion-mnist")
train_dataset = torchvision.datasets.FashionMNIST(root, train=True, transform=None, target_transform=None, download=True)
test_dataset = torchvision.datasets.FashionMNIST(root, train=False, transform=None, target_transform=None, download=True)
# Standardize data
# x_train = torch.tensor(train_dataset.train_data, device=device, dtype=dtype)
x_train = np.array(train_dataset.train_data, dtype=float)  # np.float was removed in newer NumPy
x_train = x_train.reshape(x_train.shape[0],-1)/255
# x_test = torch.tensor(test_dataset.test_data, device=device, dtype=dtype)
x_test = np.array(test_dataset.test_data, dtype=float)
x_test = x_test.reshape(x_test.shape[0],-1)/255
# y_train = torch.tensor(train_dataset.train_labels, device=device, dtype=dtype)
# y_test = torch.tensor(test_dataset.test_labels, device=device, dtype=dtype)
y_train = np.array(train_dataset.train_labels, dtype=int)
y_test = np.array(test_dataset.test_labels, dtype=int)
# Here we plot one of the raw data points as an example
data_id = 1
plt.imshow(x_train[data_id].reshape(28,28), cmap=plt.cm.gray_r)
plt.axis("off")
```
Since we are working with spiking neural networks we ideally want to use a temporal code to make use of spike timing. To that end we will use a spike latency code to feed spikes to our network.
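To see what a latency code does, here is a small standalone sketch of the LIF charge-time formula used in `current2firing_time` below; stronger currents cross threshold sooner:

```python
import numpy as np

# Under a current-based LIF charging model, the time to reach threshold is
# T = tau * log(x / (x - thr)), so larger inputs produce earlier first spikes.
tau, thr = 20.0, 0.2
currents = np.array([0.25, 0.5, 1.0])
T = tau * np.log(currents / (currents - thr))
print(T)  # decreasing: larger current -> earlier spike
```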
```
def current2firing_time(x, tau=20, thr=0.2, tmax=1.0, epsilon=1e-7):
""" Computes first firing time latency for a current input x assuming the charge time of a current based LIF neuron.
Args:
x -- The "current" values
Keyword args:
tau -- The membrane time constant of the LIF neuron to be charged
thr -- The firing threshold value
tmax -- The maximum time returned
epsilon -- A generic (small) epsilon > 0
Returns:
Time to first spike for each "current" x
"""
idx = x<thr
x = np.clip(x,thr+epsilon,1e9)
T = tau*np.log(x/(x-thr))
T[idx] = tmax
return T
def sparse_data_generator(X, y, batch_size, nb_steps, nb_units, shuffle=True ):
""" This generator takes a dataset in analog (current) format and generates spiking network input as sparse tensors.
Args:
    X: The analog data ( sample x input units )
    y: The labels
"""
labels_ = np.array(y, dtype=int)
number_of_batches = len(X)//batch_size
sample_index = np.arange(len(X))
# compute discrete firing times
tau_eff = 20e-3/time_step
firing_times = np.array(current2firing_time(X, tau=tau_eff, tmax=nb_steps), dtype=int)
unit_numbers = np.arange(nb_units)
if shuffle:
np.random.shuffle(sample_index)
total_batch_count = 0
counter = 0
while counter<number_of_batches:
batch_index = sample_index[batch_size*counter:batch_size*(counter+1)]
coo = [ [] for i in range(3) ]
for bc,idx in enumerate(batch_index):
c = firing_times[idx]<nb_steps
times, units = firing_times[idx][c], unit_numbers[c]
batch = [bc for _ in range(len(times))]
coo[0].extend(batch)
coo[1].extend(times)
coo[2].extend(units)
# legacy tensor constructors do not take a device kwarg; move to device at yield time
i = torch.LongTensor(coo)
v = torch.FloatTensor(np.ones(len(coo[0])))
X_batch = torch.sparse.FloatTensor(i, v, torch.Size([batch_size,nb_steps,nb_units]))
y_batch = torch.tensor(labels_[batch_index],device=device)
yield X_batch.to(device=device), y_batch.to(device=device)
counter += 1
```
### Setup of the spiking network model
```
tau_mem = 10e-3
tau_syn = 5e-3
alpha = float(np.exp(-time_step/tau_syn))
beta = float(np.exp(-time_step/tau_mem))
weight_scale = 7*(1.0-beta) # this should give us some spikes to begin with
w1 = torch.empty((nb_inputs, nb_hidden), device=device, dtype=dtype, requires_grad=True)
torch.nn.init.normal_(w1, mean=0.0, std=weight_scale/np.sqrt(nb_inputs))
w2 = torch.empty((nb_hidden, nb_outputs), device=device, dtype=dtype, requires_grad=True)
torch.nn.init.normal_(w2, mean=0.0, std=weight_scale/np.sqrt(nb_hidden))
print("init done")
def plot_voltage_traces(mem, spk=None, dim=(3,5), spike_height=5):
gs=GridSpec(*dim)
if spk is not None:
dat = (mem+spike_height*spk).detach().cpu().numpy()
else:
dat = mem.detach().cpu().numpy()
for i in range(np.prod(dim)):
if i==0: a0=ax=plt.subplot(gs[i])
else: ax=plt.subplot(gs[i],sharey=a0)
ax.plot(dat[i])
ax.axis("off")
```
We can now run this code and plot the output layer "membrane potentials" below. As desired these potentials do not have spikes riding on them.
## Training the network
```
class SuperSpike(torch.autograd.Function):
"""
Here we implement our spiking nonlinearity which also implements
the surrogate gradient. By subclassing torch.autograd.Function,
we will be able to use all of PyTorch's autograd functionality.
Here we use the normalized negative part of a fast sigmoid
as this was done in Zenke & Ganguli (2018).
"""
scale = 100.0 # controls steepness of surrogate gradient
@staticmethod
def forward(ctx, input):
"""
In the forward pass we compute a step function of the input Tensor
and return it. ctx is a context object that we use to stash information which
we need to later backpropagate our error signals. To achieve this we use the
ctx.save_for_backward method.
"""
ctx.save_for_backward(input)
out = torch.zeros_like(input)
out[input > 0] = 1.0
return out
@staticmethod
def backward(ctx, grad_output):
"""
In the backward pass we receive a Tensor we need to compute the
surrogate gradient of the loss with respect to the input.
Here we use the normalized negative part of a fast sigmoid
as this was done in Zenke & Ganguli (2018).
"""
input, = ctx.saved_tensors
grad_input = grad_output.clone()
grad = grad_input/(SuperSpike.scale*torch.abs(input)+1.0)**2
return grad
# here we overwrite our naive spike function by the "SuperSpike" nonlinearity which implements a surrogate gradient
spike_fn = SuperSpike.apply
def run_snn(inputs):
h1 = torch.einsum("abc,cd->abd", (inputs, w1))
syn = torch.zeros((batch_size,nb_hidden), device=device, dtype=dtype)
mem = torch.zeros((batch_size,nb_hidden), device=device, dtype=dtype)
mem_rec = [mem]
spk_rec = [mem]
# Compute hidden layer activity
for t in range(nb_steps):
mthr = mem-1.0
out = spike_fn(mthr)
rst = torch.zeros_like(mem)
c = (mthr > 0)
rst[c] = torch.ones_like(mem)[c]
new_syn = alpha*syn +h1[:,t]
new_mem = beta*mem +syn -rst
mem = new_mem
syn = new_syn
mem_rec.append(mem)
spk_rec.append(out)
mem_rec = torch.stack(mem_rec,dim=1)
spk_rec = torch.stack(spk_rec,dim=1)
# Readout layer
h2= torch.einsum("abc,cd->abd", (spk_rec, w2))
flt = torch.zeros((batch_size,nb_outputs), device=device, dtype=dtype)
out = torch.zeros((batch_size,nb_outputs), device=device, dtype=dtype)
out_rec = [out]
for t in range(nb_steps):
new_flt = alpha*flt +h2[:,t]
new_out = beta*out +flt
flt = new_flt
out = new_out
out_rec.append(out)
out_rec = torch.stack(out_rec,dim=1)
other_recs = [mem_rec, spk_rec]
return out_rec, other_recs
def train(x_data, y_data, lr=2e-3, nb_epochs=10):
params = [w1,w2]
optimizer = torch.optim.Adam(params, lr=lr, betas=(0.9,0.999))
log_softmax_fn = nn.LogSoftmax(dim=1)
loss_fn = nn.NLLLoss()
loss_hist = []
for e in range(nb_epochs):
local_loss = []
for x_local, y_local in sparse_data_generator(x_data, y_data, batch_size, nb_steps, nb_inputs):
output,_ = run_snn(x_local.to_dense())
m,_=torch.max(output,1)
log_p_y = log_softmax_fn(m)
loss_val = loss_fn(log_p_y, y_local)
optimizer.zero_grad()
loss_val.backward()
optimizer.step()
local_loss.append(loss_val.item())
mean_loss = np.mean(local_loss)
print("Epoch %i: loss=%.5f"%(e+1,mean_loss))
loss_hist.append(mean_loss)
return loss_hist
def compute_classification_accuracy(x_data, y_data):
""" Computes classification accuracy on supplied data in batches. """
accs = []
for x_local, y_local in sparse_data_generator(x_data, y_data, batch_size, nb_steps, nb_inputs, shuffle=False):
output,_ = run_snn(x_local.to_dense())
m,_= torch.max(output,1) # max over time
_,am=torch.max(m,1) # argmax over output units
tmp = np.mean((y_local==am).detach().cpu().numpy()) # compare to labels
accs.append(tmp)
return np.mean(accs)
loss_hist = train(x_train, y_train, lr=2e-4, nb_epochs=30)
plt.figure(figsize=(3.3,2),dpi=150)
plt.plot(loss_hist)
plt.xlabel("Epoch")
plt.ylabel("Loss")
sns.despine()
print("Training accuracy: %.3f"%(compute_classification_accuracy(x_train,y_train)))
print("Test accuracy: %.3f"%(compute_classification_accuracy(x_test,y_test)))
def get_mini_batch(x_data, y_data, shuffle=False):
for ret in sparse_data_generator(x_data, y_data, batch_size, nb_steps, nb_inputs, shuffle=shuffle):
return ret
x_batch, y_batch = get_mini_batch(x_test, y_test)
output, other_recordings = run_snn(x_batch.to_dense())
mem_rec, spk_rec = other_recordings
fig=plt.figure(dpi=100)
plot_voltage_traces(mem_rec, spk_rec)
fig=plt.figure(dpi=100)
plot_voltage_traces(output)
# Let's plot the hidden layer spiking activity for some input stimuli
nb_plt = 4
gs = GridSpec(1,nb_plt)
fig= plt.figure(figsize=(7,3),dpi=150)
for i in range(nb_plt):
plt.subplot(gs[i])
plt.imshow(spk_rec[i].detach().cpu().numpy().T,cmap=plt.cm.gray_r, origin="lower" )
if i==0:
plt.xlabel("Time")
plt.ylabel("Units")
sns.despine()
```
In conclusion, we see that even this simple spiking network solves the classification problem with ~85% accuracy, and there is plenty of room left for tweaking. However, the hidden layer activities do not look very biological. Although the network displays population sparseness, in that only a subset of neurons is active at any given time, the individual neurons' firing rates are pathologically high. This is not too surprising, since we have not incentivised low activity levels in any way. We will address this issue by introducing activity regularization in one of the next tutorials.
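As a rough illustration of what such an activity incentive might look like (an assumption about the follow-up tutorial, not its actual code), a common choice is to add L1/L2 penalties on the hidden layer's spike counts to the loss; sketched here in NumPy:

```python
import numpy as np

# Hypothetical spike record with shape (batch, time, hidden units), entries in {0, 1}
rng = np.random.default_rng(0)
spk_rec = (rng.random((8, 100, 50)) < 0.2).astype(float)

spike_counts = spk_rec.sum(axis=(0, 1))    # total spikes per hidden neuron
l1_reg = 1e-5 * spike_counts.sum()         # pushes overall activity down
l2_reg = 1e-5 * (spike_counts ** 2).sum()  # punishes the most active neurons hardest

reg_loss = l1_reg + l2_reg  # would simply be added to the classification loss
print(reg_loss)
```

In the actual training loop, the equivalent `torch` expression would be computed on `spk_rec` and added to `loss_val` before `backward()`.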
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## Calculate the GPS Distance with the Haversine Formula
* Dan Couture [@MathYourLife](https://twitter.com/MathYourLife), [github](https://github.com/MathYourLife)
* 2015-03-05
### Problem
I've got the start and end GPS locations from an excursion across town and need to determine the travel distance.
start: 43.059535, -71.013171
end: 43.083620, -70.892085
You could always use [google maps](https://www.google.com/maps/dir/43.059535,+-71.013171/%2743.08361,-70.89202%27/), but that would just be cheating.
### Haversine Formula
The goal of this formula is to calculate the shortest great-circle distance between two points on the globe designated by latitude and longitude. An added benefit of the Haversine equation is that it also yields the central angle, where $s = r\theta$.

source: https://software.intel.com/sites/default/files/great%20circle.png
The Haversine formula is mainly based on calculation of the central angle, $\theta$, between two gps coordinates. Using the formula for arc length on a sphere
$$
s = r \theta
$$
where $r$ is the Earth's radius, and $\theta$ is the central angle calculated as
$$
\theta = 2 \arcsin\left( \sqrt{\sin^2 \left(\frac{\phi_2-\phi_1}{2}\right) + \cos(\phi_1)\cos(\phi_2)\sin^2 \left( \frac{\lambda_2-\lambda_1}{2} \right) } \right)
$$
with:
$$
\begin{align}
\phi &= \text{latitude}\\
\lambda &= \text{longitude}\\
\end{align}
$$
```
import numpy as np
import math
# Mean radius of the earth
EARTH_RADIUS = 6371.009
def haversine(lat1, lon1, lat2, lon2):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
Return (central angle, distance between points in km)
"""
# convert decimal degrees to radians
lat1, lon1, lat2, lon2 = [math.radians(x) for x in [lat1, lon1, lat2, lon2]]
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2
central_angle = 2 * math.asin(math.sqrt(a))
# s = r * theta
km = EARTH_RADIUS * central_angle
return (central_angle, km)
```
### Calculate Excursion Distance
```
start = (43.059535, -71.013171)
end = (43.083620, -70.892085)
central_angle, km = haversine(*(start + end))
print("Central Angle of %g radians" % central_angle)
print("Arc length distance of %g km" % km)
```
### Earth is not a sphere
The Haversine is a straightforward formula that provides an approximation for the distance between GPS coordinates. The Earth, of course, is not spherical, and elevation changes including terrain profiles will increase the actual distance traveled.
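To get a feel for how small the formula-choice error is compared with such physical effects, one can compare the Haversine result against an even cruder planar ("equirectangular") approximation; for the short excursion above the two agree closely. A quick sketch reusing the coordinates from this notebook:

```python
import math

EARTH_RADIUS = 6371.009  # mean Earth radius in km

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return EARTH_RADIUS * 2 * math.asin(math.sqrt(a))

def equirectangular_km(lat1, lon1, lat2, lon2):
    # Planar approximation: treat this small patch of the sphere as flat
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return EARTH_RADIUS * math.hypot(x, y)

start, end = (43.059535, -71.013171), (43.083620, -70.892085)
hav = haversine_km(*start, *end)
eqr = equirectangular_km(*start, *end)
print(hav, eqr)  # both roughly 10 km for this short trip
```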
## References:
* http://www.gcmap.com/faq/gccalc#ellipsoid
* http://stackoverflow.com/questions/4913349/haversine-formula-in-python-bearing-and-distance-between-two-gps-points/4913653#4913653
* http://www.gcmap.com/mapui?P=DXB-SFO%2CBINP&PM=b%3Adisc7%2B%25U%2Cp%3Adisc7%2B%25N&MS=wls&PW=2&DU=km
# About this file
Data normalization takes a CSV file and outputs a set of single-column public CSVs together with private ordering files.
Each public file is associated with a corresponding private file.
The public file consists of a shuffled column: the first line is the column name, and the rest of the file contains every entry in shuffled order.
The private file is a JSON that maps each row entry to its correct position. The JSON also contains other metadata about the public CSV file.
There is a corresponding file that reverses this process and writes the changes back to the raw CSV file.
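The reversal can be sketched as follows (a minimal illustration with made-up data; the real private file also carries extra metadata):

```python
import json

# A toy private JSON following the schema used below: 'order' maps
# original row i to its shuffled position order[i].
private = json.loads(
    '{"column": "age", "public_file": "age.csv", "order": [2, 0, 1]}'
)
shuffled_rows = [31, 58, 24]  # contents of the public file (header stripped)

# Invert the shuffle: original row i was written at position order[i]
restored = [None] * len(shuffled_rows)
for i, new_index in enumerate(private['order']):
    restored[i] = shuffled_rows[new_index]

print(restored)  # [24, 31, 58] -> rows back in their original order
```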
```
from functools import partial
import datetime
import numpy as np
import pandas as pd
import re
import random
import json
import os
import constants
# Configure any settings
pd.set_option('display.max_columns', None)
# Declare any constants
raw_file = '../data/raw.csv' # REPLACE THIS WITH MOST RECENT DATASET
rows_output_directory = '../private/rows/'
indices_output_directory = '../private/indices/'
df = pd.read_csv(raw_file)
df.columns = constants.columns
# Creates an array with length `length` that contains unique integers from 0 to length-1
def generate_shuffled_indices(length):
array = [i for i in range(0, length)]
random.shuffle(array)
return array
def get_indices_id():
# Creates a new directory in ./private/indices/. Generated by increments starting from 0.
contents = os.listdir(indices_output_directory)
dirs = []
for item in contents:
if os.path.isdir(indices_output_directory + item) and item.isdigit():
dirs.append(int(item))
if len(dirs) == 0:
return '0/'
dirs.sort()
indices_id = str(max(dirs) + 1) + '/'
os.makedirs(indices_output_directory + indices_id)
return indices_id
# Takes the rows, and shuffles them using the shuffled_indices
def shuffle_rows(row_values, indices):
rows = [None] * len(indices)
for i, new_index in enumerate(indices):
rows[new_index] = row_values[i]
return rows
# Takes the shuffled rows, and writes them to a public folder
def write_public_column(col_name, rows, filename):
# build from a dict; pd.DataFrame.from_items was removed in pandas 1.0
df = pd.DataFrame({col_name: rows})
df.to_csv(rows_output_directory + filename)
# Writes the shuffled indices to a file
def write_private_indices(col_name, shuffled_indices, filename):
output = {
'column': col_name,
'public_file': col_name + '.csv',
'order': shuffled_indices
}
with open(indices_output_directory + filename, 'w') as f:
f.write(json.dumps(output))
if not os.path.exists('../private'):
os.makedirs('../private')
if not os.path.exists('../private/rows'):
os.makedirs('../private/rows')
if not os.path.exists('../private/indices'):
os.makedirs('../private/indices')
row_count = df.shape[0]
indices_dir = get_indices_id()
# This is where the main work gets done.
for col in constants.columns_to_normalize:
# Extract the column
row_values = df[col].tolist()
# Create shuffled indices
shuffled_indices = generate_shuffled_indices(row_count)
# Shuffle the column
shuffled_rows = shuffle_rows(row_values, shuffled_indices)
# Write the column to a public csv file
write_public_column(col, shuffled_rows, col + '.csv')
# Write the shuffled indices to a private file
write_private_indices(col, shuffled_indices, indices_dir + col + '.json')
```
# Challenge #9
### Installing the required libs
```
!pip install opencv-python imutils pandas matplotlib
# Supporting libs
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image, display
import os, random, json, requests
import types
# Libs for building the deep learning model
# NOTE - Do not use the Keras bundled with TensorFlow; it is incompatible with WML
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import InputLayer
from keras.preprocessing import image as Kimage
from keras.preprocessing.image import img_to_array
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from imutils import paths
# Libs for communicating with the IBM Watson APIs
from ibm_botocore.client import Config
import ibm_boto3
from watson_machine_learning_client import WatsonMachineLearningAPIClient
```
### Downloading the competition dataset (images)
```
!wget -O dataset102.zip https://www.dropbox.com/s/trcr1k2dfmk8b44/DatasetBHTC.zip?dl=0
!unzip -o ./dataset102.zip
!mkdir DATASET
!mv CLEAN DIRTY DATASET
```
### Model definition - do not change the input_shape
```
random.seed(1)
model = Sequential()
model.add(Conv2D(16, (3,3), input_shape=(96,96,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(16, (3,3), padding='same', activation='relu'))
model.add(Conv2D(32, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
```
### Loading the images, with preprocessing, for model training
```
imagePaths = sorted(list(paths.list_images('DATASET/')))
random.seed(1)
random.shuffle(imagePaths)
data = []
labels = []
for imagePath in imagePaths:
image = Kimage.load_img(imagePath,target_size=(96,96))
image = img_to_array(image)
data.append(image)
label = imagePath.split(os.path.sep)[-2]
labels.append(label)
```
### Splitting the dataset into train and test
```
classes = ['CLEAN', 'DIRTY']
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
data = np.array(data, dtype="float") / 255.0
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.2, random_state=1)
print("train X shape :",trainX.shape)
print("train Y shape :",trainY.shape)
print("test X shape :",testX.shape)
print("test Y shape :",testY.shape)
```
### Training the model - using a DataGenerator
```
aug = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
random.seed(1)
model.fit_generator(
aug.flow(trainX, trainY, batch_size=16),
validation_data=(testX, testY),
steps_per_epoch=len(trainX) // 16,
epochs=40)
```
### Plots for model evaluation
```
H=model.history
plt.rcParams['figure.figsize'] = 16, 8
plt.figure()
N = H.epoch[-1]+1
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
```
### Saving the model as model_WSTUDIO.h5
```
model.save("model_WSTUDIO.h5")
```
### Uploading the model to IBM Cloud Object Storage
### Uploading the model to IBM Watson Machine Learning - to expose it as an API
```
# the model must be packaged as .tar.gz before uploading to WML
!tar -zcvf model_WSTUDIO.tar.gz model_WSTUDIO.h5
# Watson Machine Learning credentials
wml_credentials = {
"apikey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"iam_apikey_description": "Auto-generated for key XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"iam_apikey_name": "wdp-writer",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
"iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX::serviceid:ServiceId-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"instance_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"url": "https://us-south.ml.cloud.ibm.com"
}
client = WatsonMachineLearningAPIClient( wml_credentials )
# Model metadata definition (Python version, framework, libs, etc.)
sample_saved_model_filename = 'model_WSTUDIO.tar.gz'
metadata = {
client.repository.ModelMetaNames.NAME : 'MY_FIRST_SUBMIT',
client.repository.ModelMetaNames.FRAMEWORK_NAME : 'tensorflow',
client.repository.ModelMetaNames.FRAMEWORK_VERSION : '1.11',
client.repository.ModelMetaNames.RUNTIME_NAME : 'python',
client.repository.ModelMetaNames.RUNTIME_VERSION : '3.6',
client.repository.ModelMetaNames.FRAMEWORK_LIBRARIES : [{"name": "keras", "version": "2.2.4"}]
}
# Storing the model in the WML repository
model_details = client.repository.store_model( sample_saved_model_filename, meta_props=metadata, training_data=None )
# Deploying the model
model_id = model_details["metadata"]["guid"]
model_deployment_details = client.deployments.create( artifact_uid=model_id, name="MY FIRST SUBMIT D9 Behind The Code" )
# Retrieving the API URL for scoring calls
model_endpoint_url = client.deployments.get_scoring_url( model_deployment_details )
print("The scoring URL of your API is : ",model_endpoint_url)
```
### Calling the API for a test
```
# Downloading a test image
!wget -O teste2.jpg https://www.dropbox.com/s/73fofwe566749sl/teste123.jpg?dl=0
# Building the JSON structure for the WML API call
ai_parms = { "wml_credentials" : wml_credentials, "model_endpoint_url" : model_endpoint_url }
# Loading and preprocessing the test image - as input to the convolutional neural network
image = Kimage.load_img("teste2.jpg")
plt.imshow(image)
image = image.resize(size=(96,96))
image = img_to_array(image)
image = np.array(image, dtype="float") / 255.0
image = np.expand_dims(image,axis=0)
image = image.tolist()
# Calling the model's SCORE function (inference)
model_payload = { "values" : image }
model_result = client.deployments.score( ai_parms["model_endpoint_url"], model_payload )
print(model_result)
print("\nImage classified as : ", classes[model_result['values'][0][1][0]])
print("\nProbabilities : \n\t",
classes[model_result['values'][0][1][0]]," : %.2f" %(model_result['values'][0][0][0]*100),"%\n\t",
)
```
# GOOD LUCK
# Building an image classifier using the Sequential API for TensorFlow
## Getting started with Fashion MNIST
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
import random
print(tf.__version__)
print(keras.__version__)
#Building Image classifier using Sequential API
#Load the data from the keras datasets
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full,y_train_full),(X_test,y_test) = fashion_mnist.load_data()
print(X_train_full.shape)
print(X_train_full.dtype)
print(X_test.shape)
print(X_test.dtype)
# # print("range(20) output: {}".format(range(20)))
# # print("np.arange(20) output: {}".format(np.arange(20)))
# a2 = np.arange(20)
# b = np.array(random.sample(range(20), 10))
# print(a2 )
# print(np.array_equal(a2 == b)
# # print(type(np.array(b)))
# # print(type(a2))
# # print(a2)
# # print(b)
# # print(c)
# Create validation dataset
#Note: there is a way to specify the validation fraction directly (the validation_split argument of model.fit), but that takes only the last n% of samples,
# so it won't work well if the data is not shuffled
percent_samples = 20
total_samples = X_train_full.shape[0]
num_validation_samples = int(total_samples* percent_samples / 100)
print("Num validation samples to be used: {}".format(num_validation_samples))
print("Total number of samples : {}".format(total_samples))
#Get the shuffled indices
a = np.arange(total_samples)
shuffled_idx = shuffle(a)
#Create the validation indices
idx_val = shuffled_idx[:num_validation_samples]
#Create the training indices
idx_train = shuffled_idx[num_validation_samples:]
#Slice the dataset
X_valid, X_train = X_train_full[idx_val] / 255.0, X_train_full[idx_train] / 255.0
y_valid, y_train = y_train_full[idx_val], y_train_full[idx_train]
# X_valid, X_train = X_train_full[:5000] / 255.0, X_train_full[5000:] / 255.0
# y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
#Specify the class names:
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
#Print an example to see if the code works
print(class_names[y_train[0]])
```
## Model creation and training
```
#Show some of the dataset
import random
num_show_samples = 10
idx_show = random.sample(range(X_train.shape[0]),num_show_samples)
#print(idx_show)
# X_train[0]
# plt.subplot(2,2,1)
# plt.imshow(X_train[0])
rows = 2
cols = int(np.ceil(num_show_samples / rows))
print("Showing images on grid of rows={}, and cols={}".format(rows,cols))
plt.figure(figsize=(20,10))
for i in range(num_show_samples):
ax =plt.subplot(rows,cols,i+1)
ax.imshow(X_train[idx_show[i],:])
plt.title("{}".format(class_names[y_train[idx_show[i]]]))
#Create the model using Sequential data API
# 1)
# model = keras.models.Sequential()
# model.add(keras.layers.Flatten(input_shape=[28,28]))
# model.add(keras.layers.Dense(300,activation="relu"))
# model.add(keras.layers.Dense(100,activation="relu"))
# model.add(keras.layers.Dense(10,activation="softmax"))
# 2)
# #OR create a model this way
# model = keras.Sequential([
# keras.layers.Flatten(input_shape=[28,28]),
# keras.layers.Dense(300, activation="relu"),
# keras.layers.Dense(100,activation="relu"),
# keras.layers.Dense(10,activation="softmax")
# ], name="mnist_mlp")
# 3)
#Note, you can also specify the input layer as
#keras.layers.InputLayer(input_shape=[28,28])
model = keras.Sequential([
keras.layers.InputLayer(input_shape=[28,28]),
keras.layers.Flatten(),
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100,activation="relu"),
keras.layers.Dense(10,activation="softmax")
], name="mnist_mlp")
#Printing the model summary
model.summary()
# tf.keras.utils.plot_model(
# model, to_file='../images/model.png', show_shapes=True, show_layer_names=True,
# rankdir='TB', expand_nested=False, dpi=96
# )
#tf.keras.utils.plot_model(model, show_shapes=True)
# Print and access model layers
print("Model layers: {}".format(model.layers))
#Fetch by index
print("Layer idx 0: {}".format(model.layers[0]))
print("Layer idx 2: {}".format(model.layers[2]))
hidden1 = model.layers[1]
print(hidden1.name)
#Fetch by name
# temp_layer = model.get_layer("dense_4")
# print(temp_layer.name)
#Access parameters of a given layer
weights,biases = hidden1.get_weights()
weights.shape
#Compile the model
#Using sparse_categorical_crossentropy because the target values are just target class indices 0-9,
#not one hot encoded vectors. For one hot encoded y_train, use categorical_crossentropy
#For binary classification, use binary_crossentropy, but also change the activation of output layer to be
# sigmoid instead of the softmax activation
#keras.utils.to_categorical()
#print(keras.utils.to_categorical(y_train))
model.compile(loss="sparse_categorical_crossentropy",
optimizer = keras.optimizers.SGD(lr=0.2),
metrics = ["accuracy"])
```
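The loss-function note in the cell above (sparse class indices vs. one-hot targets) can be illustrated without Keras; this NumPy sketch mimics what `keras.utils.to_categorical` produces for these labels:

```python
import numpy as np

y_sparse = np.array([2, 0, 1])  # targets as class indices (use sparse_categorical_crossentropy)

# One-hot encode by indexing into an identity matrix
# (equivalent to keras.utils.to_categorical for these labels)
num_classes = 3
y_onehot = np.eye(num_classes)[y_sparse]  # targets as one-hot rows (use categorical_crossentropy)
print(y_onehot)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]

# Either encoding recovers the same classes:
assert (y_onehot.argmax(axis=1) == y_sparse).all()
```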
## Callbacks for the model
```
# Use the ModelCheckpoint callback to keep intermediate versions of the trained model before training has completed.
#By default, this saves the model at the end of each epoch, but callbacks can be called at
# start/end of epoch, start/end of each batch, start/end of training
#Use EarlyStopping to reduce resource usage.
import os
model_folder_name = "mlp_basic"
model_name = model_folder_name + ".h5"
#ModelCheckpoint - Save only the best model
save_model_filepath = os.path.join(os.pardir,"models",model_folder_name,model_name)
print("ModelCheckpoint Callback saves the model in: {}".format(save_model_filepath))
checkpoint_cb = keras.callbacks.ModelCheckpoint(save_model_filepath,
save_best_only=True)
# Early stopping.
# ModelCheckpoint will save the best model, but won't stop training, which might go on for a long time and consume resources.
# The early stopping callback helps with this
# early_stopping_cb = keras.callbacks.EarlyStopping(monitor='val_loss',
# patience=10,
# restore_best_weights=True)
early_stopping_cb = keras.callbacks.EarlyStopping(patience=5,
restore_best_weights=True)
#Create a custom callback if needed. For example, display the ratio of validation loss / training_loss to detect overfitting
class PrintValTrainRatioCallback(keras.callbacks.Callback):
def on_epoch_end(self,epoch,logs):
print("\n Custom Callback - Epoch: {}, val/train loss ratio: {:.2f}".format(epoch, logs["val_loss"] / logs["loss"]))
#Instance of the custom callback
print_ratio_cb = PrintValTrainRatioCallback()
#Tensorboard callback
#Note:
#For Windows, forward and back slashes are painful to get right for tensorboard usage.
# What works is to create a Windows path and replace the \ with \\
#Save data in models folder.
root_logdir = os.path.join(os.pardir, "models", model_folder_name, "tensorboard", "my_logs")
print("Tensorboard parent logs folder: {}".format(root_logdir))
def get_run_logdir():
import time
run_id = time.strftime("run_%Y_%m_%d_%H_%M_%S")
return os.path.join(root_logdir,run_id)
run_logdir = get_run_logdir()
print("Current logdir: {}".format(run_logdir))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
# tensorboard_cb = keras.callbacks.TensorBoard("C:/Users/abhat/research/Tensorflow_2.0_tutorials/models/mlp_basic/tensorboard/my_logs")
print("Note: Unable to launch tensorboard with the right directory from the notebook on Windows. "
"Instead, just start tensorboard from the command line externally")
# print("Launching Tensorboard for visualization...")
# %load_ext tensorboard
# %tensorboard --logdir=run_logdir --port=6006
# %tensorboard --logdir="C:\Users\abhat\research\Tensorflow_2.0_tutorials\models\mlp_basic\tensorboard\my_logs" --port=6006
```
### Tensorboard debugging notes
#### 1) Replace `\` with `/`. A Windows path works only if the path string uses `\\` (e.g. `source\\data`), not a single backslash (e.g. `source\data`), or if a Unix-style path is used
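One way to sidestep the backslash pain entirely (a general Python tip, not TensorBoard-specific) is to build the path with `pathlib` and emit its forward-slash form:

```python
from pathlib import PureWindowsPath

# A raw string keeps single backslashes intact in source code...
win = PureWindowsPath(r"C:\Users\abhat\models\my_logs")
# ...and as_posix() converts to the forward-slash form TensorBoard accepts
print(win.as_posix())  # C:/Users/abhat/models/my_logs
```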
#### 2) In some cases TensorBoard doesn't work and the error shows duplicate entries.
The following link helps remove this error:
https://github.com/pytorch/pytorch/issues/22676
Suggestion: Fix conflicting installations
Conflicting package installations found. Depending on the order of
installations and uninstallations, behavior may be undefined. Please
uninstall ALL versions of TensorFlow and TensorBoard, then reinstall
ONLY the desired version of TensorFlow, which will transitively pull
in the proper version of TensorBoard. (If you use TensorBoard without
TensorFlow, just reinstall the appropriate version of TensorBoard
directly.)
Namely:
pip uninstall tb-nightly tensorboard tensorflow-estimator tensorflow-gpu tf-estimator-nightly
pip install tensorflow # or `tensorflow-gpu`, or `tf-nightly`, ...
```
#"!kill 15840"
#Launch tensorboard from the model
# %load_ext tensorboard
# %tensorboard --logdir=run_logdir --port=6006
#Train the model
history = model.fit(X_train,y_train,epochs=100,
validation_data=(X_valid,y_valid),
callbacks=[checkpoint_cb, early_stopping_cb, print_ratio_cb, tensorboard_cb])
#Plot the history
import pandas as pd
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # Set the vertical range to [0 -1]
# Note that the training and validation accuracy are still improving,
# so we can probably increase the number of training epochs.
```
## Evaluate Model
```
#Test
model.evaluate(X_test,y_test)
# Evaluate on new instances
#Get random set of indices
import random
num_show_samples = 10
idx_vals = random.sample(range(100),num_show_samples)
#idx_vals = np.arange(X_test.shape[0])
print("Indices to be tested: {}".format(idx_vals))
X_new = X_test[idx_vals]
y_new = y_test[idx_vals]
#Predict probabilities
y_probab = model.predict(X_new)
#print(y_probab.round(2))
print(y_probab)
#y_pred_class = np.argmax(y_pred,1)
#Predict class
y_pred_class = model.predict_classes(X_new)
print(y_pred_class)
print("Class Names: {}".format(class_names))
rows = 4
cols = int(np.ceil(num_show_samples / rows))
print("Showing images on grid of rows={}, and cols={}".format(rows,cols))
plt.figure(figsize=(30,20))
for i in range(num_show_samples):
print("Actual Class: {}, Pred class: {}".format(class_names[y_new[i]],class_names[y_pred_class[i]]))
ax =plt.subplot(rows,cols,i+1)
# ax.imshow(X_new[i,:],cmap='gray')
ax.imshow(X_new[i,:])
plt.title("Actual Class: {}, Pred class: {}".format(class_names[y_new[i]],class_names[y_pred_class[i]]))
#Save the model
# Might not be needed if the callbacks are used to save models
# # model_folder_name = "mlp_basic"
# # full_filepath_model_file = "../models/" + model_folder_name + ".h5"
# print(" Saving model to: {}".format(save_model_filepath))
# model.save(save_model_filepath)
# Loading a saved model
#model2 = keras.models.load_model("../models/mlp_basic.h5")
```
## Tensorboard
```
from collections import defaultdict
from pathlib import Path
import re
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
import toml
import tqdm
def logdir2df(logdir):
"""convert tf.events files in a logs directory into a pandas DataFrame
tf.events files are created by SummaryWriter from PyTorch or Tensorflow
Parameters
----------
logdir : str, Path
path to directory containing tfevents file(s) saved by a SummaryWriter
Returns
-------
df : pandas.Dataframe
with columns 'step', 'wall_time', and all Scalars from the tfevents file
"""
if issubclass(type(logdir), Path):
logdir = str(logdir)
ea = EventAccumulator(path=logdir)
ea.Reload() # load all data written so far
scalar_tags = ea.Tags()['scalars'] # list of tags for values written to scalar
dfs = {}
for scalar_tag in scalar_tags:
dfs[scalar_tag] = pd.DataFrame(ea.Scalars(scalar_tag),
columns=["wall_time",
"step",
scalar_tag.replace('val/', '')])
dfs[scalar_tag] = dfs[scalar_tag].set_index("step")
dfs[scalar_tag].drop("wall_time", axis=1, inplace=True)
return pd.concat([v for k, v in dfs.items()], axis=1)
def logdir2csv(logdir):
"""convert tf.events files in a logs directory into a pandas DataFrame
Parameters
----------
logdir
Returns
-------
"""
df = logdir2df(logdir)
name = list(logdir.glob('*tfevents*'))[0].name
csv_fname = name + '.csv'
df.to_csv(logdir.joinpath(csv_fname))
re_int = re.compile(r'[0-9]+')
def int_from_dir_path(dir_path):
name = dir_path.name
return int(re_int.search(name)[0])
BR_RESULTS_ROOT = Path('~/Documents/repos/coding/birdsong/tweetynet/results/BirdsongRecognition').expanduser().resolve()
BIRD_RESULTS_ROOT = BR_RESULTS_ROOT.joinpath('Bird6')
sorted(BIRD_RESULTS_ROOT.iterdir())
RESULTS_ROOT = BIRD_RESULTS_ROOT.joinpath('results_200504_004551')
train_dur_dirs = sorted(RESULTS_ROOT.glob('train_dur_*'), key=int_from_dir_path)
train_dur_dirs
train_history_dfs = {}
for train_dur_dir in train_dur_dirs:
train_dur = int_from_dir_path(train_dur_dir)
print(f'getting tf.events files for training duration: {train_dur}')
train_history_dfs[train_dur] = {}
replicate_dirs = sorted(train_dur_dir.glob('replicate_*'), key=int_from_dir_path)
for replicate_dir in replicate_dirs:
replicate_num = int_from_dir_path(replicate_dir)
print(f'\treplicate: {replicate_num}')
events_file = sorted(replicate_dir.glob('**/events*'))
assert len(events_file) == 1
events_file = events_file[0]
logdir = events_file.parent
log_df = logdir2df(logdir)
train_history_dfs[train_dur][replicate_num] = log_df
for train_dur, replicate_df_dict in train_history_dfs.items():
for replicate, df in replicate_df_dict.items():
df['avg_error/val'] = 1 - df['avg_acc/val']
n_train_durs = len(train_history_dfs)
a_train_dur = list(train_history_dfs)[0]
n_replicates = len(train_history_dfs[a_train_dur])
fig, ax = plt.subplots(n_train_durs, 5, figsize=(25, 20))
train_durs = sorted(train_history_dfs.keys())
for row_ind, train_dur in enumerate(train_durs):
    replicate_df_dict = train_history_dfs[train_dur]
    replicate_nums = sorted(replicate_df_dict.keys())
    for replicate_num, df in replicate_df_dict.items():
        sns.lineplot(x=df.index, y='loss/train', data=df, ax=ax[row_ind, 0], alpha=0.5)
        sns.lineplot(x=df.index, y='avg_loss/val', data=df, ax=ax[row_ind, 1], alpha=0.5)
        ax[row_ind, 1].set_ylim([0.0, 0.4])
        sns.lineplot(x=df.index, y='avg_error/val', data=df, ax=ax[row_ind, 2], alpha=0.5)
        ax[row_ind, 2].set_ylim([0.0, 0.2])
        sns.lineplot(x=df.index, y='avg_levenshtein/val', data=df, ax=ax[row_ind, 3], alpha=0.5)
        ax[row_ind, 3].set_ylim([100, 250])
        sns.lineplot(x=df.index, y='avg_segment_error_rate/val', data=df, ax=ax[row_ind, 4], alpha=0.5)
        ax[row_ind, 4].set_ylim([0.0, 0.1])
learncurve_df = pd.read_csv(RESULTS_ROOT.joinpath('learning_curve.csv'))
learncurve_df['avg_error'] = 1 - learncurve_df['avg_acc']
fig, ax = plt.subplots(1, 4, figsize=(20, 4))
ax = ax.ravel()
sns.stripplot(x='train_set_dur', y='avg_loss', data=learncurve_df, ax=ax[0])
sns.boxplot(x='train_set_dur', y='avg_loss', data=learncurve_df, ax=ax[0])
sns.pointplot(x='train_set_dur', y='avg_loss', data=learncurve_df, ax=ax[0]);
sns.stripplot(x='train_set_dur', y='avg_error', data=learncurve_df, ax=ax[1])
sns.boxplot(x='train_set_dur', y='avg_error', data=learncurve_df, ax=ax[1])
sns.pointplot(x='train_set_dur', y='avg_error', data=learncurve_df, ax=ax[1]);
sns.stripplot(x='train_set_dur', y='avg_levenshtein', data=learncurve_df, ax=ax[2])
sns.boxplot(x='train_set_dur', y='avg_levenshtein', data=learncurve_df, ax=ax[2])
sns.pointplot(x='train_set_dur', y='avg_levenshtein', data=learncurve_df, ax=ax[2]);
sns.stripplot(x='train_set_dur', y='avg_segment_error_rate', data=learncurve_df, ax=ax[3])
sns.boxplot(x='train_set_dur', y='avg_segment_error_rate', data=learncurve_df, ax=ax[3])
sns.pointplot(x='train_set_dur', y='avg_segment_error_rate', data=learncurve_df, ax=ax[3]);
```
# Tabu Search
The **Pyristic** library includes a class called `TabuSearch` that makes it easy to implement a Tabu Search metaheuristic for solving minimization problems. In order to use this class it is necessary to:
1. Define:
    * The objective function $f$.
    * The list of constraints.
    * A data structure (optional).
2. Create a class that inherits from `TabuSearch`.
3. Override the following functions of the `TabuSearch` class:
    * get_neighbors (required)
    * encode_change (required)
The libraries and elements that need to be imported are shown below. Afterwards, two combinatorial optimization problems will be solved using the `TabuSearch` class.
```
import sys
import os
# library_path is the path where the pyristic library is located.
library_path = "/home/dell/Documentos/Git_proejcts/pyristic/"
#library_path = "/Users/adrianamenchacamendez/Documentos/enes_morelia/papime/optimizacion-con-metaheuristicas/"
sys.path.append(os.path.abspath(library_path))
from pyristic.heuristic.Tabu_search import TabuSearch
from pyristic.utils.helpers import *
from pprint import pprint
import numpy as np
import copy
```
## Knapsack problem
\begin{equation}
\label{eq:KP}
\begin{array}{rll}
\text{maximize:} & f(\vec{x}) = \sum_{i=1}^{n} p_i \cdot x_{i} & \\
\text{subject to: } & g_1(\vec{x}) = \sum_{i=1}^{n} w_i \cdot x_{i} \leq c & \\
& x_i \in \{0,1\} & i\in\{1,\ldots,n\}\\
\end{array}
\end{equation}
Consider the following input:
- $n = 5$
- $p = \{5, 14, 7, 2, 23\}$
- $w = \{2, 3, 7, 5, 10\}$
- $c = 15$
The best solution is:
$x = [1, 1, 0, 0, 1]$ , $f(x) = 42$ and $g_{1}(x) = 15$
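Since $n = 5$ here, the claimed optimum is easy to verify by brute force over all $2^5$ candidate vectors; this check is independent of Pyristic:

```python
from itertools import product

p = [5, 14, 7, 2, 23]  # profits
w = [2, 3, 7, 5, 10]   # weights
c = 15                 # capacity

# enumerate every 0/1 vector, keep the feasible ones, take the most profitable
best = max(
    (x for x in product([0, 1], repeat=5)
     if sum(wi * xi for wi, xi in zip(w, x)) <= c),
    key=lambda x: sum(pi * xi for pi, xi in zip(p, x)),
)
print(best, sum(pi * xi for pi, xi in zip(p, best)))  # (1, 1, 0, 0, 1) 42
```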
### Objective function
Since the `TabuSearch` class assumes minimization problems, it is necessary to turn the knapsack problem into a minimization problem. To do this, the value of the objective function is multiplied by -1.
```
def f(x : np.ndarray) -> float:
    p = np.array([5,14,7,2,23])
    return -1 * np.dot(x, p)
```
### Constraints
Each constraint is defined in a separate function, and the functions are collected in a list.
```
def g1(x : np.ndarray) -> bool:
    w = [2,3,7,5,10]
    return np.dot(x, w) <= 15

constraints_list = [g1]
```
In the knapsack problem we only want to check that the weight capacity is not exceeded.
### Using `TabuSearch`
In order to use the Tabu Search metaheuristic implemented in the **Pyristic** library, it is necessary to create a class that inherits from the `TabuSearch` class.
```
class Knapsack_solver(TabuSearch):
    def __init__(self, f_ : function_type, constraints_: list):
        super().__init__(f_, constraints_)

    def get_neighbors(self, x : np.ndarray, **kwargs) -> list:
        neighbors_list = []
        for i in range(len(x)):
            x[i] ^= 1  # flip bit i: include/exclude object i
            neighbors_list += [copy.deepcopy(x)]
            x[i] ^= 1  # flip it back, restoring x
        return neighbors_list

    def encode_change(self, neighbor : (list, np.ndarray), x : (list, np.ndarray), **kwargs) -> list:
        # return the first position where the neighbor differs from x, with its new value
        x_ = [None, None]
        for i in range(len(x)):
            if x[i] != neighbor[i]:
                return [i, neighbor[i]]
        return x_
```
The new class is called *Knapsack_solver*; in it, the functions `get_neighbors` and `encode_change` have been overridden. If we do not implement these functions, the algorithm will not work.
### Running the metaheuristic
Once the *Knapsack_solver* class is defined, we create an object of type *Knapsack_solver*, passing the objective function and the constraints of the problem as parameters. In this case we call the created object *Knapsack*.
```
Knapsack = Knapsack_solver(f, [g1])
```
Finally, we call the `optimize` function. This function receives three parameters:
* An initial solution, or a function that generates initial solutions.
* The number of iterations.
* The time during which we avoid making a change at a certain position (the tabu time).
For this example we use an empty knapsack ($x_0 = [0,0,0,0,0]$), $30$ iterations and a tabu time equal to $3$.
```
init_backpack_solution = np.zeros(5,dtype=int)
'''Parameters:
Initial solution
Number of iterations
Tabu time
'''
Knapsack.optimize(init_backpack_solution,30,3)
print(Knapsack)
```
Next, we will solve the same problem for a larger instance.
We have to redefine the objective function and the constraint so that they work for any instance of the problem.
We define the following global variables:
* n is a number indicating the size of our instance.
* p is an array with the profit provided by each of the objects.
* w is an array with the weight of each of the objects.
* c is the maximum weight our knapsack can carry.
```
n = 50
p = [60, 52, 90, 57, 45, 64, 60, 45, 63, 94, 44, 90, 66, 64, 32, 39, 91, 40, 73, 61, 82, 94, 39, 68, 94, 98, 80, 79, 73, 99, 49, 56, 69, 49, 82, 99, 65, 34, 31, 85, 67, 62, 56, 38, 54, 81, 98, 63, 48, 83]
w = [38, 20, 21, 21, 37, 28, 32, 30, 33, 35, 29, 32, 35, 24, 28, 29, 22, 34, 31, 36, 36, 28, 38, 25, 38, 37, 20, 23, 39, 31, 27, 20, 38, 38, 36, 28, 39, 22, 23, 22, 21, 24, 23, 33, 31, 30, 32, 30, 22, 37]
c = 870
def f(x : np.ndarray) -> float:
    global p
    return -1 * np.dot(x, p)

def g1(x : np.ndarray) -> bool:
    global w, c
    result = np.dot(x, w)
    g1.__doc__ = "{} <= {}".format(result, c)
    return result <= c

constraints_list = [g1]
```
### Initial solution
In the previous example the initial solution was an empty knapsack. Now we will create a knapsack that inserts objects in random order, as long as the capacity of the knapsack is not exceeded.
```
def getInitialSolution(NumObjects=5):
    global n, p, w, c
    # Empty backpack
    x = [0 for i in range(n)]
    weight_x = 0
    # Random order to insert objects.
    objects = list(range(n))
    np.random.shuffle(objects)
    for o in objects[:NumObjects]:
        # Check the constraint about capacity.
        if weight_x + w[o] <= c:
            x[o] = 1
            weight_x += w[o]
    return np.array(x)
```
We define our object of type *Knapsack_solver* and call the `optimize` method with the following parameters:
* The function that creates the initial solution.
* $100$ iterations.
* A tabu time of $\frac{n}{2}$.
```
Knapsack_2 = Knapsack_solver(f, [g1])
Knapsack_2.optimize(getInitialSolution,100,n//2)
print(Knapsack_2)
```
To review the behavior of the metaheuristic on a given problem, the **Pyristic** library has a function called `get_stats`. This function is located in **utils.helpers** and receives as parameters:
* The object created to run the metaheuristic.
* The number of times we want to run the metaheuristic.
* The arguments received by the `optimize` function (must be a tuple).
The `get_stats` function returns a dictionary with some statistics of the runs.
```
args = (getInitialSolution,500,n//2)
statistics = get_stats(Knapsack_2, 21, args)
pprint(statistics)
```
## Traveling salesman problem
\begin{equation}
\label{eq:TSP}
\begin{array}{rll}
\text{minimize:} & f(x) = d(x_n, x_1) + \sum_{i=1}^{n-1} d(x_i, x_{i+1}) & \\
\text{subject to: } & x_i \in \{1,2,\cdots,n\} & \\
\end{array}
\end{equation}
Where:
* $d(x_i,x_j)$ is the distance from city $x_i$ to city $x_j$.
* $n$ is the number of cities.
* $x$ is a permutation of the $n$ cities.
```
import random
num_cities = 10
iterations = 100
dist_matrix = \
[\
[0,49,30,53,72,19,76,87,45,48],\
[49,0,19,38,32,31,75,69,61,25],\
[30,19,0,41,98,56,6,6,45,53],\
[53,38,41,0,52,29,46,90,23,98],\
[72,32,98,52,0,63,90,69,50,82],\
[19,31,56,29,63,0,60,88,41,95],\
[76,75,6,46,90,60,0,61,92,10],\
[87,69,6,90,69,88,61,0,82,73],\
[45,61,45,23,50,41,92,82,0,5],\
[48,25,53,98,82,95,10,73,5,0],\
]
def f_salesman(x : np.ndarray) -> float:
    global dist_matrix
    total_dist = 0
    for i in range(1, len(x)):
        u, v = x[i], x[i-1]
        total_dist += dist_matrix[u][v]
    total_dist += dist_matrix[x[-1]][0]
    return total_dist

def g_salesman(x : np.ndarray) -> bool:
    """
    Xi in {1,2, ... , N}
    """
    size = len(x)
    size_ = len(np.unique(x))
    return size == size_
```
In this example we show how to define our own tabu list for the traveling salesman problem and use it in our `TabuSearch` search. The tabu list must provide the following methods:
- `reset`
- `update`
- `push`
- `find`
```
class Tabu_Salesman_list:
    def __init__(self, timer):
        self.__TB = {}
        self.timer = timer

    def reset(self, timer) -> None:
        self.__TB = {}
        self.timer = timer

    def update(self) -> None:
        to_pop = []
        for key in self.__TB:
            if self.__TB[key] - 1 == 0:
                to_pop.append(key)
            else:
                self.__TB[key] -= 1
        for key in to_pop:
            self.__TB.pop(key)

    @checkargs
    def push(self, x : list) -> None:
        # x has [p, v, step]; we are only interested in v (value)
        self.__TB[x[1]] = self.timer

    @checkargs
    def find(self, x : list) -> bool:
        return x[1] in self.__TB

class TravellingSalesman_solver(TabuSearch):
    def __init__(self, f_ : function_type, constraints_: list, TabuStorage):
        super().__init__(f_, constraints_, TabuStorage)

    @checkargs
    def get_neighbors(self, x : np.ndarray, **kwargs) -> list:
        neighbors_list = []
        ind = random.randint(1, len(x) - 1)
        while self.TL.find([-1, x[ind]]):
            ind = random.randint(1, len(x) - 1)
        v = x[ind]
        x_tmp = list(x[v != x])
        for i in range(1, len(x)):
            if ind == i:
                continue
            neighbors_list += [x_tmp[:i] + [v] + x_tmp[i:]]
        return neighbors_list

    @checkargs
    def encode_change(self, neighbor : (list, np.ndarray), x : (list, np.ndarray), **kwargs) -> list:
        x_p = {x[i]: i for i in range(len(x))}
        n_p = {neighbor[i]: i for i in range(len(x))}
        ind = -1
        max_dist = -1
        value = -1
        for i in range(1, len(x)):
            v = x[i]
            dist = abs(x_p[v] - n_p[v])
            if dist > max_dist:
                ind = i
                max_dist = dist
                value = v
        return [ind, value]
```
### Initial solution
In this case, we create the initial solution using a greedy strategy.
```
def getInitialSolutionTS(distance_matrix, total_cities):
    # greedy nearest-neighbor tour, starting from city 0
    Solution = [0]
    remaining_cities = list(range(1, total_cities))
    while len(remaining_cities) != 0:
        from_ = Solution[-1]
        to_ = remaining_cities[0]
        dist = distance_matrix[from_][to_]
        for i in range(1, len(remaining_cities)):
            distance = distance_matrix[from_][remaining_cities[i]]
            if distance < dist:
                to_ = remaining_cities[i]
                dist = distance
        Solution.append(to_)
        ind = remaining_cities.index(to_)
        remaining_cities.pop(ind)
    return Solution

TravellingSalesman = TravellingSalesman_solver(f_salesman, [g_salesman], Tabu_Salesman_list(num_cities // 2))
init_path = np.array(getInitialSolutionTS(dist_matrix,num_cities))
print("Initialize search with this initial point {} \n f(x) = {}".format(init_path, f_salesman(init_path)))
TravellingSalesman.optimize(init_path, iterations, num_cities//2)
print(TravellingSalesman)
args = (init_path, iterations, num_cities//2)
statistics = get_stats(TravellingSalesman, 30, args)
pprint(statistics)
```
<img align="right" src="images/dans-small.png"/>
<img align="right" src="images/tf-small.png"/>
<img align="right" src="images/etcbc.png"/>
# Parallel Passages in the MT
# 0. Introduction
## 0.1 Motivation
We want to make a list of **all** parallel passages in the Masoretic Text (MT) of the Hebrew Bible.
Here is a quote that triggered Dirk to write this notebook:
> Finally, the Old Testament Parallels module in Accordance is a helpful resource that enables the researcher to examine 435 sets of parallel texts, or in some cases very similar wording in different texts, in both the MT and translation, but the large number of sets of texts in this database should not fool one to think it is complete or even nearly complete for all parallel writings in the Hebrew Bible.
Robert Rezetko and Ian Young.
Historical linguistics & Biblical Hebrew. Steps Toward an Integrated Approach.
*Ancient Near East Monographs, Number9*. SBL Press Atlanta. 2014.
[PDF Open access available](https://www.google.nl/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CCgQFjAB&url=http%3A%2F%2Fwww.sbl-site.org%2Fassets%2Fpdfs%2Fpubs%2F9781628370461_OA.pdf&ei=2QSdVf-vAYSGzAPArJeYCg&usg=AFQjCNFA3TymYlsebQ0MwXq2FmJCSHNUtg&sig2=LaXuAC5k3V7fSXC6ZVx05w&bvm=bv.96952980,d.bGQ)
<img align="right" width="50%" src="parallel.png"/>
## 0.3 Open Source
This is an IPython notebook.
It contains a working program to carry out the computations needed to obtain the results reported here.
You can download this notebook and run it on your computer, provided you have
[Text-Fabric](https://github.com/Dans-labs/text-fabric) installed.
It is a pity that we cannot compare our results with the Accordance resource mentioned above,
since that resource has not been published in an accessible manner.
We also do not have the information how this resource has been constructed on the basis of the raw data.
In contrast with that, we present our results in a completely reproducible manner.
This notebook itself can serve as the method of replication,
provided you have obtained the necessary resources.
See [sources](https://github.com/ETCBC/shebanq/wiki/Sources), which are all Open Access.
## 0.4 What are parallel passages?
The notion of *parallel passage* is not a simple, straightforward one.
There are parallels on the basis of lexical content in the passages on the one hand,
but on the other hand there are also correspondences in certain syntactical structures,
or even in similarities in text structure.
In this notebook we do select a straightforward notion of parallel, based on lexical content only.
We investigate two measures of similarity, one that ignores word order completely,
and one that takes word order into account.
Two kinds of short-comings of this approach must be mentioned:
1. We will not find parallels based on non-lexical criteria (unless they are also lexical parallels)
1. We will find too many parallels: certain short sentences ("and he said") or formula-like passages ("and the word of God came to Moses") occur so often that they have a more subtle bearing on whether there is a common text history.
For a more full treatment of parallel passages, see
**Wido Th. van Peursen and Eep Talstra**:
Computer-Assisted Analysis of Parallel Texts in the Bible -
The Case of 2 Kings xviii-xix and its Parallels in Isaiah and Chronicles.
*Vetus Testamentum* 57, pp. 45-72.
2007, Brill, Leiden.
Note that our method fails to identify any parallels with Chronica_II 32.
Van Peursen and Talstra state about this chapter and 2 Kings 18:
> These chapters differ so much, that it is sometimes impossible to establish
which verses should be considered parallel.
In this notebook we produce a set of *cliques*,
a clique being a set of passages that are *quite* similar, based on lexical information.
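As a rough illustration of what producing cliques involves (this is not the exact algorithm used later in the notebook), similar pairs of passages can be treated as edges of a graph, with cliques read off as connected components, e.g. via union-find:

```python
def grow_cliques(pairs):
    """Group chunks into cliques: pairs is an iterable of (chunk_a, chunk_b)
    judged similar; the result is a list of sets of mutually connected chunks."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two components
    cliques = {}
    for x in parent:
        cliques.setdefault(find(x), set()).add(x)
    return list(cliques.values())

cliques = grow_cliques([("A", "B"), ("B", "C"), ("D", "E")])
print(sorted(sorted(c) for c in cliques))  # [['A', 'B', 'C'], ['D', 'E']]
```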
## 0.5 Authors
This notebook is by [Dirk Roorda](mailto:dirk.roorda@dans.knaw.nl), and owes a lot to discussions with [Martijn Naaijer](mailto:m.naaijer@vu.nl).
## 0.6 Status
* **modified: 2017-09-28** Is now part of a pipeline for transferring data from the ETCBC to Text-Fabric.
* **modified: 2016-03-03** Added experiments based on chapter chunks and lower similarities.
165 experiments have been carried out, of which 18 with promising results.
All results can be easily inspected, just by clicking in your browser.
One of the experiments has been chosen as the basis for
[crossref](https://shebanq.ancient-data.org/hebrew/note?version=4b&id=Mnxjcm9zc3JlZg__&tp=txt_tb1&nget=v)
annotations in SHEBANQ.
# 1. Results
Click in a green cell to see interesting results. The numbers in the cell indicate
* the number of passages that have a variant elsewhere
* the number of *cliques* they form (cliques are sets of similar passages)
* the number of passages in the biggest clique
Below the results is an account of the method that we used, followed by the actual code to produce these results.
# Pipeline
See [operation](https://github.com/ETCBC/pipeline/blob/master/README.md#operation)
for how to run this script in the pipeline.
The pipeline comes in action in Section [6a](#6a) below: TF features.
# Caveat
This notebook makes use of a new feature of text-fabric, first present in 2.3.15.
Make sure to upgrade first.
```
sudo -H pip3 install --upgrade text-fabric
```
```
import sys
import os
import re
import collections
import pickle
import math
import difflib
import yaml
from difflib import SequenceMatcher
from IPython.display import HTML
import matplotlib.pyplot as plt
from tf.core.helpers import formatMeta
```
``pip3 install python-Levenshtein``
```
from Levenshtein import ratio
import utils
from tf.fabric import Fabric
get_ipython().run_line_magic("load_ext", "autoreload") # noqa F821
get_ipython().run_line_magic("autoreload", "2") # noqa F821
get_ipython().run_line_magic("matplotlib", "inline") # noqa F821
```
```
if "SCRIPT" not in locals():
    SCRIPT = False
    FORCE = True
    FORCE_MATRIX = False

LANG_FEATURE = "languageISO"
OCC_FEATURE = "g_cons"
LEX_FEATURE = "lex"
TEXT_FEATURE = "g_word_utf8"
TRAILER_FEATURE = "trailer_utf8"
CORE_NAME = "bhsa"
NAME = "parallels"
VERSION = "2021"

def stop(good=False):
    if SCRIPT:
        sys.exit(0 if good else 1)
```
# run this cell after all other cells
if False and not SCRIPT:
    HTML(other_exps)
```
# 2. Experiments
We have conducted 165 experiments, all corresponding to a specific choice of parameters.
Every experiment is an attempt to identify variants and collect them in *cliques*.
The table gives an overview of the experiments conducted.
Every *row* corresponds to a particular way of chunking and a method of measuring the similarity.
There are *columns* for each similarity *threshold* that we have tried.
The idea is that chunks are similar if their similarity is above the threshold.
The outcomes of one experiment have been added to SHEBANQ as the note set
[crossref](https://shebanq.ancient-data.org/hebrew/note?version=4b&id=Mnxjcm9zc3JlZg__&tp=txt_tb1&nget=v).
The experiment chosen for this is currently
* *chunking*: **object verse**
* *similarity method*: **SET**
* *similarity threshold*: **65**
## 2.1 Assessing the outcomes
Not all experiments lead to useful results.
We have indicated the value of a result by a color coding, based on objective characteristics,
such as the number of parallel passages, the number of cliques, the size of the greatest clique, and the way of chunking.
These numbers are shown in the cells.
### 2.1.1 Assessment criteria
If the method is based on *fixed* chunks, we deprecate the method and the results,
because two perfectly similar verses can be missed if a 100-word-wide window that shifts over the text aligns differently with the two verses, which will usually be the case.
Otherwise, we consider the *ll*, the length of the longest clique, and *nc*, the number of cliques.
We set three quality parameters:
* `REC_CLIQUE_RATIO` = 5 : recommended clique ratio
* `DUB_CLIQUE_RATIO` = 15 : dubious clique ratio
* `DEP_CLIQUE_RATIO` = 25 : deprecated clique ratio
where the *clique ratio* is $100 (ll/nc)$,
i.e. the length of the longest clique divided by the number of cliques as percentage.
An experiment is *recommended* if its clique ratio is between the recommended and dubious clique ratios.
It is *dubious* if its clique ratio is between the dubious and deprecated clique ratios.
It is *deprecated* if its clique ratio is above the deprecated clique ratio.
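These criteria can be sketched as a small helper; the constant names mirror the parameters above, while the label for experiments whose ratio falls below the recommended bound is not specified in the text, so we return `'other'` there:

```python
REC_CLIQUE_RATIO = 5   # recommended clique ratio
DUB_CLIQUE_RATIO = 15  # dubious clique ratio
DEP_CLIQUE_RATIO = 25  # deprecated clique ratio

def assess(ll, nc, fixed_chunks=False):
    """Label an experiment from ll (longest clique length) and nc (number of cliques)."""
    if fixed_chunks:
        return "deprecated"  # fixed chunking is deprecated outright
    clique_ratio = 100 * ll / nc
    if clique_ratio >= DEP_CLIQUE_RATIO:
        return "deprecated"
    if clique_ratio >= DUB_CLIQUE_RATIO:
        return "dubious"
    if clique_ratio >= REC_CLIQUE_RATIO:
        return "recommended"
    return "other"

print(assess(10, 100))  # recommended (clique ratio 10)
```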
## 2.2 Inspecting results
If you click on the hyperlink in the cell, you are taken to a page that gives you
all the details of the results:
1. A link to a file with all *cliques* (which are the sets of similar passages)
1. A list of links to chapter-by-chapter diff files (for cliques with just two members), and only for
experiments with outcomes that are labeled as *promising* or *unassessed quality* or *mixed results*.
To get into the variants quickly, inspect the list (2) and click through
to see the actual variant material in chapter context.
Not all variants occur here, so continue with (1) to see the remaining cliques.
Sometimes in (2) a chapter diff file does not indicate clearly the relevant common part of both chapters.
In that case you have to consult the big list (1).
All these results can be downloaded from the
[SHEBANQ github repo](https://github.com/ETCBC/shebanq/tree/master/static/docs/tools/parallel/files)
After downloading the whole directory, open ``experiments.html`` in your browser.
# 3. Method
Here we discuss the method we used to arrive at a list of parallel passages
in the Masoretic Text (MT) of the Hebrew Bible.
## 3.1 Similarity
We have to find passages in the MT that are *similar*.
Therefore we *chunk* the text in some way, and then compute the similarities between pairs of chunks.
There are many ways to define and compute similarity between texts.
Here, we have tried two methods ``SET`` and ``LCS``.
Both methods define similarity as the fraction of common material with respect to the total material.
### 3.1.1 SET
The ``SET`` method reduces textual chunks to *sets* of *lexemes*.
This method abstracts from the order and number of occurrences of words in chunks.
We use as measure for the similarity of chunks $C_1$ and $C_2$ (taken as sets):
$$ s_{\rm set}(C_1, C_2) = {\vert C_1 \cap C_2\vert \over \vert C_1 \cup C_2 \vert} $$
where $\vert X \vert$ is the number of elements in set $X$.
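This is the Jaccard index of the two lexeme sets. A direct transcription, scaled to a percentage to match the thresholds used elsewhere in this notebook:

```python
def sim_set(chunk1, chunk2):
    """SET similarity: |C1 & C2| / |C1 | C2| on sets of lexemes, as a percentage."""
    c1, c2 = set(chunk1), set(chunk2)
    return 100 * len(c1 & c2) / len(c1 | c2)

# word order and repeated occurrences are ignored:
print(sim_set("a b c a".split(), "c b d".split()))  # 50.0
```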
### 3.1.2 LCS
The ``LCS`` method is less reductive: chunks are *strings* of *lexemes*,
so the order and number of occurrences of words is retained.
We use as measure for the similarity of chunks $C_1$ and $C_2$ (taken as strings):
$$ s_{\rm lcs}(C_1, C_2) = {\vert {\rm LCS}(C_1,C_2)\vert \over \vert C_1\vert + \vert C_2 \vert -
\vert {\rm LCS}(C_1,C_2)\vert} $$
where ${\rm LCS}(C_1, C_2)$ is the
[longest common subsequence](https://en.wikipedia.org/wiki/Longest_common_subsequence_problem)
of $C_1$ and $C_2$ and
$\vert X\vert$ is the length of sequence $X$.
It remains to be seen whether we need the extra sophistication of ``LCS``.
The risk is that ``LCS`` could fail to spot related passages when there is a large amount of transposition going on.
The results should have the last word.
We need to compute the LCS efficiently, and for this we used the python ``Levenshtein`` module:
``pip install python-Levenshtein``
whose documentation is
[here](http://www.coli.uni-saarland.de/courses/LT1/2011/slides/Python-Levenshtein.html).
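For reference, the LCS-based measure can also be written out with a small dynamic program; the notebook itself relies on the fast `Levenshtein` implementation, so this pure-Python version is only an illustrative sketch of the formula:

```python
def lcs_len(s1, s2):
    """Length of the longest common subsequence of two sequences, O(len(s1)*len(s2))."""
    prev = [0] * (len(s2) + 1)
    for a in s1:
        cur = [0]
        for j, b in enumerate(s2, 1):
            cur.append(prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def sim_lcs(c1, c2):
    """LCS similarity: |LCS| / (|C1| + |C2| - |LCS|), as a percentage."""
    lcs = lcs_len(c1, c2)
    return 100 * lcs / (len(c1) + len(c2) - lcs)

print(sim_lcs(list("abcd"), list("abed")))  # 60.0
```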
## 3.2 Performance
Similarity computation is the part where the heavy lifting occurs.
It is basically quadratic in the number of chunks, so if you have verses as chunks (~ 23,000),
you need to do ~ 270,000,000 similarity computations, and if you use sentences (~ 64,000),
you need to do ~ 2,000,000,000 ones!
The computation of a single similarity should be *really* fast.
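The pair counts quoted above are just the number of unordered chunk pairs, $n(n-1)/2$; for instance, the 269,410,078 verse comparisons in the table below correspond to exactly 23,213 verses:

```python
def n_pairs(n):
    # number of unordered pairs of chunks = similarity computations needed
    return n * (n - 1) // 2

print(n_pairs(23_000))  # 264488500   (~270,000,000 for ~23,000 verses)
print(n_pairs(64_000))  # 2047968000  (~2,000,000,000 for ~64,000 sentences)
print(n_pairs(23_213))  # 269410078   (the verse count in the table below)
```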
Besides that, we use two ways to economize:
* after having computed a matrix for a specific set of parameter values, we save the matrix to disk;
new runs can load the matrix from disk in a matter of seconds;
* we do not store low similarity values in the matrix, low being < ``MATRIX_THRESHOLD``.
The ``LCS`` method is more complicated.
We have tried the ``ratio`` method from the ``difflib`` package that is present in the standard python distribution.
This is unbearably slow for our purposes.
The ``ratio`` method in the ``Levenshtein`` package is much quicker.
See the table for an indication of the amount of work to create the similarity matrix
and the performance per similarity method.
The *matrix threshold* is the lower bound of similarities that are stored in the matrix.
If a pair of chunks has a lower similarity, no entry will be made in the matrix.
The computing has been done on a Macbook Air (11", mid 2012, 1.7 GHz Intel Core i5, 8GB RAM).
|chunk type |chunk size|similarity method|matrix threshold|# of comparisons|size of matrix (KB)|computing time (min)|
|:----------|---------:|----------------:|---------------:|---------------:|------------------:|-------------------:|
|fixed |100 |LCS |60 | 9,003,646| 7| ? |
|fixed |100 |SET |50 | 9,003,646| 7| ? |
|fixed |50 |LCS |60 | 36,197,286| 37| ? |
|fixed |50 |SET |50 | 36,197,286| 18| ? |
|fixed |20 |LCS |60 | 227,068,705| 2,400| ? |
|fixed |20 |SET |50 | 227,068,705| 113| ? |
|fixed |10 |LCS |60 | 909,020,841| 59,000| ? |
|fixed |10 |SET |50 | 909,020,841| 1,800| ? |
|object |verse |LCS |60 | 269,410,078| 2,300| 31|
|object |verse |SET |50 | 269,410,078| 509| 14|
|object |half_verse|LCS |60 | 1,016,396,241| 40,000| 50|
|object |half_verse|SET |50 | 1,016,396,241| 3,600| 41|
|object |sentence |LCS |60 | 2,055,975,750| 212,000| 68|
|object |sentence |SET |50 | 2,055,975,750| 82,000| 63|
# 4. Workflow
## 4.1 Chunking
There are several ways to chunk the text:
* fixed chunks of approximately ``CHUNK_SIZE`` words
* by object, such as verse, sentence and even chapter
After chunking, we prepare the chunks for similarity measuring.
### 4.1.1 Fixed chunking
Fixed chunking is unnatural, but if the chunk size is small, it can yield fair results.
The results are somewhat difficult to inspect, because they generally do not respect constituent boundaries.
It is to be expected that fixed chunks in variant passages will be mutually *out of phase*,
meaning that the chunks involved in these passages are not aligned with each other.
So they will have a lower similarity than they could have if they were aligned.
This is a source of artificial noise in the outcome and/or missed cases.
If the chunking respects "natural" boundaries in the text, there is far less misalignment.
### 4.1.2 Object chunking
We can also chunk by object, such as verse, half_verse or sentence.
Chunking by *verse* is very much like chunking in fixed chunks of size 20, performance-wise.
Chunking by *half_verse* is comparable to fixed chunks of size 10.
Chunking by *sentence* will generate an enormous number of
false positives, because the text contains very many very short sentences (down to a single word).
Besides that, the performance overhead is huge.
The *half_verses* seem to be a very interesting candidate.
They are smaller than verses, but there are fewer *degenerate cases* than with sentences.
From the table above it can be read that half verses require only half as many similarity computations as sentences.
## 4.2 Preparing
We prepare the chunks for the application of the chosen method of similarity computation (``SET`` or ``LCS``).
In both cases we reduce the text to a sequence of transliterated consonantal *lexemes* without disambiguation.
In fact, we go one step further: we remove the consonants (aleph, waw, yod) that are often silent.
For ``SET``, we represent each chunk as the set of its reduced lexemes.
For ``LCS``, we represent each chunk as the string obtained by joining its reduced lexemes separated by white spaces.
## 4.3 Cliques
After having computed a sufficient part of the similarity matrix, we set a value for ``SIMILARITY_THRESHOLD``.
All pairs of chunks having at least that similarity are deemed *interesting*.
We organize the members of such pairs in *cliques*, groups of chunks of which each member is
similar (*similarity* > ``SIMILARITY_THRESHOLD``) to at least one other member.
We start with no cliques and walk through the pairs whose similarity is above ``SIMILARITY_THRESHOLD``,
and try to put each member into a clique.
If there are no cliques yet, the member becomes a new singleton clique.
Otherwise, we look for cliques that already contain a member similar to the one in question.
If we find several such cliques, we merge them all into one.
If we find none, the member again starts a new singleton clique.
NB: Cliques may *drift*, meaning that they contain members that are completely different from each other.
They are in the same clique, because there is a path of pairwise similar members leading from the one chunk to the other.
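In outline, the clique-growing procedure looks like the sketch below, with the similarity test simplified to shared membership in a pair; the real implementation in section 5.7 works on chunk indexes and consults the similarity matrix. Chaining (1, 2) and (2, 3) puts 1 and 3 in the same clique even if they are not similar to each other, which is exactly the *drift* described above.

```python
def build_cliques(pairs):
    # pairs: iterable of (i, j) chunk pairs whose similarity exceeds
    # SIMILARITY_THRESHOLD. Returns a list of cliques (sets of chunks).
    cliques = []
    for (i, j) in pairs:
        # all cliques this pair connects to
        hits = [c for c in cliques if i in c or j in c]
        if hits:
            # merge the connected cliques and add both members
            merged = set().union(*hits) | {i, j}
            cliques = [c for c in cliques if c not in hits]
            cliques.append(merged)
        else:
            cliques.append({i, j})
    return cliques

print(build_cliques([(1, 2), (2, 3), (7, 8)]))
```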
### 4.3.1 Organizing the cliques
In order to handle cases where there are many corresponding verses in corresponding chapters, we produce
chapter-by-chapter diffs in the following way.
We make a list of all chapters that are involved in cliques.
This yields a list of chapter cliques.
For all *binary* chapter cliques (cliques involving exactly two chapters), we generate a colorful diff rendering (as HTML) of the two complete chapters.
We only do this for *promising* experiments.
### 4.3.2 Evaluating clique sets
Not all clique sets are equally worthwhile.
For example, if we set the ``SIMILARITY_THRESHOLD`` too low, we might get one gigantic clique, especially
in combination with a fine-grained chunking. In other words: we suffer from *clique drifting*.
We detect clique drifting by looking at the size of the largest clique.
If that is large compared to the total number of chunks, we deem the results unsatisfactory.
On the other hand, when the ``SIMILARITY_THRESHOLD`` is too high, you might miss a lot of correspondences,
especially when chunks are large, or when we have fixed-size chunks that are out of phase.
We deem the results of experiments based on a partitioning into fixed length chunks as unsatisfactory, although it
might be interesting to inspect what exactly the damage is.
At the moment, we have not yet analyzed the relative merits of the similarity methods ``SET`` and ``LCS``.
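The drift check can be expressed as a ratio test. The sketch below is hypothetical: it assumes the ``DEP_CLIQUE_RATIO``/``DUB_CLIQUE_RATIO``/``REC_CLIQUE_RATIO`` constants from the configuration (section 5.2) are percentages of the total chunk count against which the largest clique is compared; the labels reuse the ``VALUE_LABELS`` keys.

```python
def assess_drift(cliques, n_chunks, dep=25, dub=15, rec=5):
    # Classify a clique set by the share (in %) of all chunks that
    # sits in its largest clique. Thresholds are assumed percentages.
    largest = max((len(c) for c in cliques), default=0)
    share = 100 * largest / n_chunks if n_chunks else 0
    if share >= dep:
        return "dep"   # messy results: deprecated
    if share >= dub:
        return "dub"   # mixed quality: take care
    if share <= rec:
        return "rec"   # promising results: recommended
    return "nor"       # unassessed quality: inspection needed

print(assess_drift([set(range(30)), {98, 99}], 100))  # → dep
```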
# 5. Implementation
The rest is code. From here we fire up the engines and start computing.
```
PICKLE_PROTOCOL = 3
```
# Setting up the context: source file and target directories
The conversion is executed in an environment of directories, so that sources, temp files and
results are in convenient places and do not have to be shifted around.
In[5]:
```
repoBase = os.path.expanduser("~/github/etcbc")
coreRepo = "{}/{}".format(repoBase, CORE_NAME)
thisRepo = "{}/{}".format(repoBase, NAME)
coreTf = "{}/tf/{}".format(coreRepo, VERSION)
allTemp = "{}/_temp".format(thisRepo)
thisTemp = "{}/_temp/{}".format(thisRepo, VERSION)
thisTempTf = "{}/tf".format(thisTemp)
thisTf = "{}/tf/{}".format(thisRepo, VERSION)
thisNotes = "{}/shebanq/{}".format(thisRepo, VERSION)
```
In[6]:
```
notesFile = "crossrefNotes.csv"
if not os.path.exists(thisNotes):
os.makedirs(thisNotes)
```
# Test
Check whether this conversion is needed in the first place.
Only when run as a script.
In[7]:
```
if SCRIPT:
(good, work) = utils.mustRun(
None, "{}/.tf/{}.tfx".format(thisTf, "crossref"), force=FORCE
)
if not good:
stop(good=False)
if not work:
stop(good=True)
```
## 5.1 Loading the feature data
We load the features we need from the BHSA core database.
In[8]:
```
utils.caption(4, "Load the existing TF dataset")
TF = Fabric(locations=coreTf, modules=[""])
```
In[9]:
```
api = TF.load(
"""
otype
{} {} {}
book chapter verse number
""".format(
LEX_FEATURE,
TEXT_FEATURE,
TRAILER_FEATURE,
)
)
api.makeAvailableIn(globals())
```
## 5.2 Configuration
Here are the parameters on which the results crucially depend.
There are also parameters that control the reporting of the results, such as file locations.
In[10]:
```
# chunking
CHUNK_LABELS = {True: "fixed", False: "object"}
CHUNK_LBS = {True: "F", False: "O"}
CHUNK_SIZES = (100, 50, 20, 10)
CHUNK_OBJECTS = ("chapter", "verse", "half_verse", "sentence")
# preparing
EXCLUDED_CONS = r"[>WJ=/\[]" # weed out weak consonants
EXCLUDED_PAT = re.compile(EXCLUDED_CONS)
# similarity
MATRIX_THRESHOLD = 50
SIM_METHODS = ("SET", "LCS")
SIMILARITIES = (100, 95, 90, 85, 80, 75, 70, 65, 60, 55, 50, 45, 40, 35, 30)
# printing
DEP_CLIQUE_RATIO = 25
DUB_CLIQUE_RATIO = 15
REC_CLIQUE_RATIO = 5
LARGE_CLIQUE_SIZE = 50
CLIQUES_PER_FILE = 50
# assessing results
VALUE_LABELS = dict(
mis="no results available",
rec="promising results: recommended",
dep="messy results: deprecated",
dub="mixed quality: take care",
out="method deprecated",
nor="unassessed quality: inspection needed",
lr="this experiment is the last one run",
)
```
Note that ``TF_TABLE`` and ``LOCAL_BASE_COMP`` are deliberately located in the version-independent part of the temp directory.
The results of expensive calculations are stored there, to be used by all versions.
```
# crossrefs for TF
TF_TABLE = "{}/parallelTable.tsv".format(allTemp)
# crossrefs for SHEBANQ
SHEBANQ_MATRIX = (False, "verse", "SET")
SHEBANQ_SIMILARITY = 65
SHEBANQ_TOOL = "parallel"
CROSSREF_STATUS = "!"
CROSSREF_KEYWORD = "crossref"
# progress indication
VERBOSE = False
MEGA = 1000000
KILO = 1000
SIMILARITY_PROGRESS = 5 * MEGA
CLIQUES_PROGRESS = 1 * KILO
# locations and hyperlinks
LOCAL_BASE_COMP = "{}/calculus".format(allTemp)
LOCAL_BASE_OUTP = "files"
EXPERIMENT_DIR = "experiments"
EXPERIMENT_FILE = "experiments"
EXPERIMENT_PATH = "{}/{}.txt".format(LOCAL_BASE_OUTP, EXPERIMENT_FILE)
EXPERIMENT_HTML = "{}/{}.html".format(LOCAL_BASE_OUTP, EXPERIMENT_FILE)
NOTES_FILE = "crossref"
NOTES_PATH = "{}/{}.csv".format(LOCAL_BASE_OUTP, NOTES_FILE)
STORED_CLIQUE_DIR = "stored/cliques"
STORED_MATRIX_DIR = "stored/matrices"
STORED_CHUNK_DIR = "stored/chunks"
CHAPTER_DIR = "chapters"
CROSSREF_DB_FILE = "crossrefdb.csv"
CROSSREF_DB_PATH = "{}/{}".format(LOCAL_BASE_OUTP, CROSSREF_DB_FILE)
```
## 5.3 Experiment settings
For each experiment we have to adapt the configuration settings to the parameters that define the experiment.
In[11]:
```
def reset_params():
global CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT, CHUNK_LB, CHUNK_DESC
global SIMILARITY_METHOD, SIMILARITY_THRESHOLD, MATRIX_THRESHOLD
global meta
meta = collections.OrderedDict()
# chunking
CHUNK_FIXED = None # kind of chunking: fixed size or by object
CHUNK_SIZE = None # only relevant for CHUNK_FIXED = True
CHUNK_OBJECT = (
None # only relevant for CHUNK_FIXED = False; see CHUNK_OBJECTS in next cell
)
CHUNK_LB = None # computed from CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJ
CHUNK_DESC = None # computed from CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJ
# similarity
MATRIX_THRESHOLD = (
None # minimal similarity used to fill the matrix of similarities
)
SIMILARITY_METHOD = None # see SIM_METHODS in next cell
SIMILARITY_THRESHOLD = (
None # minimal similarity used to put elements together in cliques
)
def set_matrix_threshold(sim_m=None, chunk_o=None):
global MATRIX_THRESHOLD
the_sim_m = SIMILARITY_METHOD if sim_m is None else sim_m
the_chunk_o = CHUNK_OBJECT if chunk_o is None else chunk_o
if the_sim_m == "SET":
if the_chunk_o == "chapter":
MATRIX_THRESHOLD = 30
else:
MATRIX_THRESHOLD = 50
else:
if the_chunk_o == "chapter":
MATRIX_THRESHOLD = 55
else:
MATRIX_THRESHOLD = 60
def do_params_chunk(chunk_f, chunk_i):
global CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT, CHUNK_LB, CHUNK_DESC
do_chunk = False
if (
chunk_f != CHUNK_FIXED
or (chunk_f and chunk_i != CHUNK_SIZE)
or (not chunk_f and chunk_i != CHUNK_OBJECT)
):
do_chunk = True
CHUNK_FIXED = chunk_f
if chunk_f:
CHUNK_SIZE = chunk_i
else:
CHUNK_OBJECT = chunk_i
CHUNK_LB = CHUNK_LBS[CHUNK_FIXED]
CHUNK_DESC = CHUNK_SIZE if CHUNK_FIXED else CHUNK_OBJECT
for p in (
"{}/{}".format(LOCAL_BASE_OUTP, EXPERIMENT_DIR),
"{}/{}".format(LOCAL_BASE_COMP, STORED_CHUNK_DIR),
):
if not os.path.exists(p):
os.makedirs(p)
return do_chunk
def do_params(chunk_f, chunk_i, sim_m, sim_thr):
global CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT, CHUNK_LB, CHUNK_DESC
global SIMILARITY_METHOD, SIMILARITY_THRESHOLD, MATRIX_THRESHOLD
global meta
do_chunk = False
do_prep = False
do_sim = False
do_clique = False
meta = collections.OrderedDict()
if (
chunk_f != CHUNK_FIXED
or (chunk_f and chunk_i != CHUNK_SIZE)
or (not chunk_f and chunk_i != CHUNK_OBJECT)
):
do_chunk = True
do_prep = True
do_sim = True
do_clique = True
CHUNK_FIXED = chunk_f
if chunk_f:
CHUNK_SIZE = chunk_i
else:
CHUNK_OBJECT = chunk_i
if sim_m != SIMILARITY_METHOD:
do_prep = True
do_sim = True
do_clique = True
SIMILARITY_METHOD = sim_m
if sim_thr != SIMILARITY_THRESHOLD:
do_clique = True
SIMILARITY_THRESHOLD = sim_thr
set_matrix_threshold()
if SIMILARITY_THRESHOLD < MATRIX_THRESHOLD:
return (False, False, False, False, True)
CHUNK_LB = CHUNK_LBS[CHUNK_FIXED]
CHUNK_DESC = CHUNK_SIZE if CHUNK_FIXED else CHUNK_OBJECT
meta["CHUNK TYPE"] = (
"FIXED {}".format(CHUNK_SIZE)
if CHUNK_FIXED
else "OBJECT {}".format(CHUNK_OBJECT)
)
meta["MATRIX THRESHOLD"] = MATRIX_THRESHOLD
meta["SIMILARITY METHOD"] = SIMILARITY_METHOD
meta["SIMILARITY THRESHOLD"] = SIMILARITY_THRESHOLD
for p in (
"{}/{}".format(LOCAL_BASE_OUTP, EXPERIMENT_DIR),
"{}/{}".format(LOCAL_BASE_OUTP, CHAPTER_DIR),
"{}/{}".format(LOCAL_BASE_COMP, STORED_CLIQUE_DIR),
"{}/{}".format(LOCAL_BASE_COMP, STORED_MATRIX_DIR),
"{}/{}".format(LOCAL_BASE_COMP, STORED_CHUNK_DIR),
):
if not os.path.exists(p):
os.makedirs(p)
return (do_chunk, do_prep, do_sim, do_clique, False)
reset_params()
```
## 5.4 Chunking
We divide the text into chunks to be compared. The result is ``chunks``,
which is a list of lists.
Every chunk is a list of word nodes.
In[12]:
```
def chunking(do_chunk):
global chunks, book_rank
if not do_chunk:
TF.info(
"CHUNKING ({} {}): already chunked into {} chunks".format(
CHUNK_LB, CHUNK_DESC, len(chunks)
)
)
meta["# CHUNKS"] = len(chunks)
return
chunk_path = "{}/{}/chunk_{}_{}".format(
LOCAL_BASE_COMP,
STORED_CHUNK_DIR,
CHUNK_LB,
CHUNK_DESC,
)
if os.path.exists(chunk_path):
with open(chunk_path, "rb") as f:
chunks = pickle.load(f)
TF.info(
"CHUNKING ({} {}): Loaded: {:>5} chunks".format(
CHUNK_LB,
CHUNK_DESC,
len(chunks),
)
)
else:
TF.info("CHUNKING ({} {})".format(CHUNK_LB, CHUNK_DESC))
chunks = []
book_rank = {}
for b in F.otype.s("book"):
book_name = F.book.v(b)
book_rank[book_name] = b
words = L.d(b, otype="word")
nwords = len(words)
if CHUNK_FIXED:
nchunks = nwords // CHUNK_SIZE
if nchunks == 0:
nchunks = 1
common_incr = nwords
special_incr = 0
else:
rem = nwords % CHUNK_SIZE
common_incr = rem // nchunks
special_incr = rem % nchunks
word_in_chunk = -1
cur_chunk = -1
these_chunks = []
for w in words:
word_in_chunk += 1
if word_in_chunk == 0 or (
word_in_chunk
>= CHUNK_SIZE
+ common_incr
+ (1 if cur_chunk < special_incr else 0)
):
word_in_chunk = 0
these_chunks.append([])
cur_chunk += 1
these_chunks[-1].append(w)
else:
these_chunks = [
L.d(c, otype="word") for c in L.d(b, otype=CHUNK_OBJECT)
]
chunks.extend(these_chunks)
chunkvolume = sum(len(c) for c in these_chunks)
if VERBOSE:
TF.info(
"CHUNKING ({} {}): {:<20s} {:>5} words; {:>5} chunks; sizes {:>5} to {:>5}; {:>5}".format(
CHUNK_LB,
CHUNK_DESC,
book_name,
nwords,
len(these_chunks),
min(len(c) for c in these_chunks),
max(len(c) for c in these_chunks),
"OK" if chunkvolume == nwords else "ERROR",
)
)
with open(chunk_path, "wb") as f:
pickle.dump(chunks, f, protocol=PICKLE_PROTOCOL)
TF.info("CHUNKING ({} {}): Made {} chunks".format(CHUNK_LB, CHUNK_DESC, len(chunks)))
meta["# CHUNKS"] = len(chunks)
```
## 5.5 Preparing
In order to compute similarities between chunks, we have to compile each chunk into the information that really matters for the comparison. This is dependent on the chosen method of similarity computing.
### 5.5.1 Preparing for SET comparison
We reduce words to their lexemes (dictionary entries) and from these we also remove the aleph, waw, and yod.
The lexeme feature also contains characters (`/ [ =`) that disambiguate homonyms; we remove these as well.
If we end up with an empty string, we skip it.
Eventually, we take the set of these reduced word lexemes, so that we effectively ignore order and multiplicity of words. In other words: the resulting similarity will be based on lexeme content.
### 5.5.2 Preparing for LCS comparison
Again, we reduce words to their lexemes as for the SET preparation, and we do the same weeding of consonants and empty strings. But then we concatenate everything, separated by a space. So we preserve order and multiplicity.
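A minimal sketch of both preparations (the pattern matches ``EXCLUDED_CONS`` from the configuration; the sample lexemes are made up, and the real code additionally maps ayin ``<`` to ``O`` before stripping):

```python
import re

EXCLUDED_PAT = re.compile(r"[>WJ=/\[]")  # aleph, waw, yod + homonym marks

def reduce_lexemes(lexemes):
    # Strip excluded characters and drop lexemes that become empty.
    return [w for w in (EXCLUDED_PAT.sub("", lx) for lx in lexemes) if w]

lexs = ["BR>[", "CMJM/", "W"]
print(frozenset(reduce_lexemes(lexs)))   # SET: unordered, no multiplicity
print(" ".join(reduce_lexemes(lexs)))    # LCS: order preserved
```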
In[13]:
```
def preparing(do_prepare):
global chunk_data
if not do_prepare:
TF.info(
"PREPARING ({} {} {}): Already prepared".format(
CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD
)
)
return
TF.info("PREPARING ({} {} {})".format(CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD))
chunk_data = []
if SIMILARITY_METHOD == "SET":
for c in chunks:
words = (
EXCLUDED_PAT.sub("", Fs(LEX_FEATURE).v(w).replace("<", "O")) for w in c
)
clean_words = (w for w in words if w != "")
this_data = frozenset(clean_words)
chunk_data.append(this_data)
else:
for c in chunks:
words = (
EXCLUDED_PAT.sub("", Fs(LEX_FEATURE).v(w).replace("<", "O")) for w in c
)
clean_words = (w for w in words if w != "")
this_data = " ".join(clean_words)
chunk_data.append(this_data)
TF.info(
"PREPARING ({} {} {}): Done {} chunks.".format(
CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD, len(chunk_data)
)
)
```
## 5.6 Similarity computation
Here we implement our two ways of similarity computation.
Both need a massive amount of work, especially for experiments with many small chunks.
The similarities are stored in a ``matrix``, a data structure that stores a similarity number for each pair of chunk indexes.
Most pairs of chunks will be dissimilar. In order to save space, we do not store similarities below a certain threshold.
We store matrices for re-use.
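Because only pairs above ``MATRIX_THRESHOLD`` are stored, the matrix is simply a dict keyed by ordered index pairs. A lookup helper along the lines of what the cliqueing code later does inline:

```python
def sim_lookup(chunk_dist, i, j):
    # The matrix stores each unordered pair once, keyed (i, j) with i < j.
    # Pairs below MATRIX_THRESHOLD were never stored and count as 0.
    if i < j:
        return chunk_dist.get((i, j), 0)
    if j < i:
        return chunk_dist.get((j, i), 0)
    return 0

m = {(2, 5): 80.0}
print(sim_lookup(m, 5, 2), sim_lookup(m, 1, 3))  # → 80.0 0
```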
### 5.6.1 SET similarity
The core is an operation on the sets, associated with the chunks by the prepare step. We take the cardinality of the intersection divided by the cardinality of the union.
Intuitively, we compute the proportion of what two chunks have in common against their total material.
In case the union is empty (both chunks have yielded an empty set), we deem the chunks not to be interesting as a parallel pair, and we set the similarity to 0.
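The computation itself is the Jaccard index scaled to a percentage; this mirrors the inner loop of the ``SET`` branch in the code below.

```python
def set_similarity(a, b):
    # Cardinality of the intersection over cardinality of the union, in %.
    u = len(a | b)
    return 100 * len(a & b) / u if u != 0 else 0

print(set_similarity(frozenset("abcd"), frozenset("abce")))  # → 60.0
```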
### 5.6.2 LCS similarity
The core is the method `ratio()`, taken from the Levenshtein module.
Remember that the preparation step yielded a space separated string of lexemes, and these strings are compared on the basis of edit distance.
In[14]:
```
def similarity_post():
nequals = len({x for x in chunk_dist if chunk_dist[x] >= 100})
cmin = min(chunk_dist.values()) if len(chunk_dist) else "!empty set!"
cmax = max(chunk_dist.values()) if len(chunk_dist) else "!empty set!"
meta["LOWEST AVAILABLE SIMILARITY"] = cmin
meta["HIGHEST AVAILABLE SIMILARITY"] = cmax
meta["# EQUAL COMPARISONS"] = nequals
TF.info(
"SIMILARITY ({} {} {} M>{}): similarities between {} and {}. {} are 100%".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
cmin,
cmax,
nequals,
)
)
def similarity(do_sim):
global chunk_dist
total_chunks = len(chunks)
total_distances = total_chunks * (total_chunks - 1) // 2
meta["# SIMILARITY COMPARISONS"] = total_distances
SIMILARITY_PROGRESS = total_distances // 100
if SIMILARITY_PROGRESS >= MEGA:
sim_unit = MEGA
sim_lb = "M"
else:
sim_unit = KILO
sim_lb = "K"
if not do_sim:
TF.info(
"SIMILARITY ({} {} {} M>{}): Using {:>5} {} ({}) comparisons with {} entries in matrix".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
total_distances // sim_unit,
sim_lb,
total_distances,
len(chunk_dist),
)
)
meta["# STORED SIMILARITIES"] = len(chunk_dist)
similarity_post()
return
matrix_path = "{}/{}/matrix_{}_{}_{}_{}".format(
LOCAL_BASE_COMP,
STORED_MATRIX_DIR,
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
)
if os.path.exists(matrix_path):
with open(matrix_path, "rb") as f:
chunk_dist = pickle.load(f)
TF.info(
"SIMILARITY ({} {} {} M>{}): Loaded: {:>5} {} ({}) comparisons with {} entries in matrix".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
total_distances // sim_unit,
sim_lb,
total_distances,
len(chunk_dist),
)
)
meta["# STORED SIMILARITIES"] = len(chunk_dist)
similarity_post()
return
TF.info(
"SIMILARITY ({} {} {} M>{}): Computing {:>5} {} ({}) comparisons and saving entries in matrix".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
total_distances // sim_unit,
sim_lb,
total_distances,
)
)
chunk_dist = {}
wc = 0
wt = 0
if SIMILARITY_METHOD == "SET":
# method SET: all chunks have been reduced to sets, ratio between lengths of intersection and union
for i in range(total_chunks):
c_i = chunk_data[i]
for j in range(i + 1, total_chunks):
c_j = chunk_data[j]
u = len(c_i | c_j)
# HERE COMES THE SIMILARITY COMPUTATION
d = 100 * len(c_i & c_j) / u if u != 0 else 0
# HERE WE STORE THE OUTCOME
if d >= MATRIX_THRESHOLD:
chunk_dist[(i, j)] = d
wc += 1
wt += 1
if wc == SIMILARITY_PROGRESS:
wc = 0
TF.info(
"SIMILARITY ({} {} {} M>{}): Computed {:>5} {} comparisons and saved {} entries in matrix".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
wt // sim_unit,
sim_lb,
len(chunk_dist),
)
)
elif SIMILARITY_METHOD == "LCS":
# method LCS: chunks are sequence aligned, ratio between length of all common parts and total length
for i in range(total_chunks):
c_i = chunk_data[i]
for j in range(i + 1, total_chunks):
c_j = chunk_data[j]
# HERE COMES THE SIMILARITY COMPUTATION
d = 100 * ratio(c_i, c_j)
# HERE WE STORE THE OUTCOME
if d >= MATRIX_THRESHOLD:
chunk_dist[(i, j)] = d
wc += 1
wt += 1
if wc == SIMILARITY_PROGRESS:
wc = 0
TF.info(
"SIMILARITY ({} {} {} M>{}): Computed {:>5} {} comparisons and saved {} entries in matrix".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
wt // sim_unit,
sim_lb,
len(chunk_dist),
)
)
with open(matrix_path, "wb") as f:
pickle.dump(chunk_dist, f, protocol=PICKLE_PROTOCOL)
TF.info(
"SIMILARITY ({} {} {} M>{}): Computed {:>5} {} ({}) comparisons and saved {} entries in matrix".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
wt // sim_unit,
sim_lb,
wt,
len(chunk_dist),
)
)
meta["# STORED SIMILARITIES"] = len(chunk_dist)
similarity_post()
```
## 5.7 Cliques
Based on the value for the ``SIMILARITY_THRESHOLD`` we use the similarity matrix to pick the *interesting*
similar pairs out of it. From these pairs we lump together our cliques.
Our list of experiments will select various values for ``SIMILARITY_THRESHOLD``, which will result
in various types of clique behavior.
We store computed cliques for re-use.
### 5.7.1 Selecting passages
We take all pairs from the similarity matrix which are above the threshold, and add both members to a list of passages.
### 5.7.2 Growing cliques
We inspect all passages in our set, and try to add them to the cliques we are growing.
We start with an empty set of cliques.
Each passage is added to a clique with which it has *enough familiarity*; otherwise it starts a new clique.
*Enough familiarity* means: the passage is similar to at least one member of the clique, with a similarity of at least ``SIMILARITY_THRESHOLD``.
It is possible that a passage is thus added to more than one clique. In that case, those cliques are merged.
This may lead to growing very large cliques if ``SIMILARITY_THRESHOLD`` is too low.
In[15]:
```
def key_chunk(i):
c = chunks[i]
w = c[0]
return (
-len(c),
L.u(w, otype="book")[0],
L.u(w, otype="chapter")[0],
L.u(w, otype="verse")[0],
)
def meta_clique_pre():
global similars, passages
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): inspecting the similarity matrix".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
)
similars = {x for x in chunk_dist if chunk_dist[x] >= SIMILARITY_THRESHOLD}
passage_set = set()
for (i, j) in similars:
passage_set.add(i)
passage_set.add(j)
passages = sorted(passage_set, key=key_chunk)
meta["# SIMILAR COMPARISONS"] = len(similars)
meta["# SIMILAR PASSAGES"] = len(passages)
def meta_clique_pre2():
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): {} relevant similarities between {} passages".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(similars),
len(passages),
)
)
def meta_clique_post():
global l_c_l
meta["# CLIQUES"] = len(cliques)
scliques = collections.Counter()
for c in cliques:
scliques[len(c)] += 1
l_c_l = max(scliques.keys()) if len(scliques) > 0 else 0
totmn = 0
totcn = 0
for (ln, n) in sorted(scliques.items(), key=lambda x: x[0]):
totmn += ln * n
totcn += n
if VERBOSE:
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): {:>4} cliques of length {:>4}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
n,
ln,
)
)
meta["# CLIQUES of LENGTH {:>4}".format(ln)] = n
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): {} members in {} cliques".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
totmn,
totcn,
)
)
def cliqueing(do_clique):
global cliques
if not do_clique:
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): Already loaded {} cliques out of {} candidates from {} comparisons".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(cliques),
len(passages),
len(similars),
)
)
meta_clique_pre2()
meta_clique_post()
return
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): fetching similars and chunk candidates".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
)
meta_clique_pre()
meta_clique_pre2()
clique_path = "{}/{}/clique_{}_{}_{}_{}_{}".format(
LOCAL_BASE_COMP,
STORED_CLIQUE_DIR,
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
if os.path.exists(clique_path):
with open(clique_path, "rb") as f:
cliques = pickle.load(f)
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): Loaded: {:>5} cliques out of {:>6} chunks from {} comparisons".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(cliques),
len(passages),
len(similars),
)
)
meta_clique_post()
return
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): Composing cliques out of {:>6} chunks from {} comparisons".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(passages),
len(similars),
)
)
cliques_unsorted = []
np = 0
npc = 0
for i in passages:
added = None
removable = set()
for (k, c) in enumerate(cliques_unsorted):
origc = tuple(c)
for j in origc:
d = (
chunk_dist.get((i, j), 0)
if i < j
else chunk_dist.get((j, i), 0)
if j < i
else 0
)
if d >= SIMILARITY_THRESHOLD:
if (
added is None
): # the passage has not been added to any clique yet
c.add(i)
added = k # remember that we added the passage to this clique
                    else:  # the passage has already been added to another clique:
# we merge this clique with that one
cliques_unsorted[added] |= c
removable.add(
k
                        )  # we remember that we have merged this clique into another one,
                        # so we can throw away this clique later
break
if added is None:
cliques_unsorted.append({i})
else:
if len(removable):
cliques_unsorted = [
c for (k, c) in enumerate(cliques_unsorted) if k not in removable
]
np += 1
npc += 1
if npc == CLIQUES_PROGRESS:
npc = 0
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): Composed {:>5} cliques out of {:>6} chunks".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(cliques_unsorted),
np,
)
)
cliques = sorted([tuple(sorted(c, key=key_chunk)) for c in cliques_unsorted])
with open(clique_path, "wb") as f:
pickle.dump(cliques, f, protocol=PICKLE_PROTOCOL)
meta_clique_post()
TF.info(
"CLIQUES ({} {} {} M>{} S>{}): Composed and saved {:>5} cliques out of {:>6} chunks from {} comparisons".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(cliques),
len(passages),
len(similars),
)
)
```
## 5.8 Output
We deliver the output of our experiments in various ways, all in HTML.
We generate chapter based diff outputs with color-highlighted differences between the chapters for every pair of chapters that merit it.
For every (*good*) experiment, we produce a big list of its cliques, and for
every such clique, we produce a diff-view of its members.
Big cliques will be split into several files.
Clique listings will also contain metadata: the value of the experiment parameters.
### 5.8.1 Format definitions
Here are the definitions for formatting the (HTML) output.
In[16]:
```
# clique lists
css = """
td.vl {
font-family: Verdana, Arial, sans-serif;
font-size: small;
text-align: right;
color: #aaaaaa;
width: 10%;
direction: ltr;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
td.ht {
font-family: Ezra SIL, SBL Hebrew, Verdana, sans-serif;
font-size: x-large;
line-height: 1.7;
text-align: right;
direction: rtl;
}
table.ht {
width: 100%;
direction: rtl;
border-collapse: collapse;
}
td.ht {
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
tr.ht.tb {
border-top: 2px solid #aaaaaa;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
tr.ht.bb {
border-bottom: 2px solid #aaaaaa;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
span.m {
background-color: #aaaaff;
}
span.f {
background-color: #ffaaaa;
}
span.x {
background-color: #ffffaa;
color: #bb0000;
}
span.delete {
background-color: #ffaaaa;
}
span.insert {
background-color: #aaffaa;
}
span.replace {
background-color: #ffff00;
}
"""
# chapter diffs
diffhead = """
<head>
<meta http-equiv="Content-Type"
content="text/html; charset=UTF-8" />
<title></title>
<style type="text/css">
table.diff {
font-family: Ezra SIL, SBL Hebrew, Verdana, sans-serif;
font-size: x-large;
text-align: right;
}
.diff_header {background-color:#e0e0e0}
td.diff_header {text-align:right}
.diff_next {background-color:#c0c0c0}
.diff_add {background-color:#aaffaa}
.diff_chg {background-color:#ffff77}
.diff_sub {background-color:#ffaaaa}
</style>
</head>
"""
# table of experiments
ecss = """
<style type="text/css">
.mis {background-color: #cccccc;}
.rec {background-color: #aaffaa;}
.dep {background-color: #ffaaaa;}
.dub {background-color: #ffddaa;}
.out {background-color: #ffddff;}
.nor {background-color: #fcfcff;}
.ps {font-weight: normal;}
.mx {font-style: italic;}
.cl {font-weight: bold;}
.lr {font-weight: bold; background-color: #ffffaa;}
p,td {font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: small;}
td {border: 1pt solid #000000; padding: 4pt;}
table {border: 1pt solid #000000; border-collapse: collapse;}
</style>
"""
legend = """
<table>
<tr><td class="mis">{mis}</td></tr>
<tr><td class="rec">{rec}</td></tr>
<tr><td class="dep">{dep}</td></tr>
<tr><td class="dub">{dub}</td></tr>
<tr><td class="out">{out}</td></tr>
<tr><td class="nor">{nor}</td></tr>
</table>
""".format(
**VALUE_LABELS
)
```
### 5.8.2 Formatting clique lists
In[17]:
```
def xterse_chunk(i):
chunk = chunks[i]
fword = chunk[0]
book = L.u(fword, otype="book")[0]
chapter = L.u(fword, otype="chapter")[0]
return (book, chapter)
def xterse_clique(ii):
return tuple(sorted({xterse_chunk(i) for i in ii}))
def terse_chunk(i):
chunk = chunks[i]
fword = chunk[0]
book = L.u(fword, otype="book")[0]
chapter = L.u(fword, otype="chapter")[0]
verse = L.u(fword, otype="verse")[0]
return (book, chapter, verse)
def terse_clique(ii):
return tuple(sorted({terse_chunk(i) for i in ii}))
def verse_chunk(i):
(bk, ch, vs) = i
book = F.book.v(bk)
chapter = F.chapter.v(ch)
verse = F.verse.v(vs)
text = "".join(
"{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
for w in L.d(vs, otype="word")
)
verse_label = '<td class="vl">{} {}:{}</td>'.format(book, chapter, verse)
htext = '{}<td class="ht">{}</td>'.format(verse_label, text)
return '<tr class="ht">{}</tr>'.format(htext)
def verse_clique(ii):
return '<table class="ht">{}</table>\n'.format(
"".join(verse_chunk(i) for i in sorted(ii))
)
def condense(vlabels):
cnd = ""
(cur_b, cur_c) = (None, None)
for (b, c, v) in vlabels:
c = str(c)
v = str(v)
sep = (
""
if cur_b is None
else ". "
if cur_b != b
else "; "
if cur_c != c
else ", "
)
show_b = b + " " if cur_b != b else ""
show_c = c + ":" if cur_b != b or cur_c != c else ""
(cur_b, cur_c) = (b, c)
cnd += "{}{}{}{}".format(sep, show_b, show_c, v)
return cnd
def print_diff(a, b):
arep = ""
brep = ""
for (lb, ai, aj, bi, bj) in SequenceMatcher(
isjunk=None, a=a, b=b, autojunk=False
).get_opcodes():
if lb == "equal":
arep += a[ai:aj]
brep += b[bi:bj]
elif lb == "delete":
arep += '<span class="{}">{}</span>'.format(lb, a[ai:aj])
elif lb == "insert":
brep += '<span class="{}">{}</span>'.format(lb, b[bi:bj])
else:
arep += '<span class="{}">{}</span>'.format(lb, a[ai:aj])
brep += '<span class="{}">{}</span>'.format(lb, b[bi:bj])
return (arep, brep)
def print_chunk_fine(prev, text, verse_labels, prevlabels):
if prev is None:
return """
<tr class="ht tb bb"><td class="vl">{}</td><td class="ht">{}</td></tr>
""".format(
condense(verse_labels),
text,
)
else:
(prevline, textline) = print_diff(prev, text)
return """
<tr class="ht tb"><td class="vl">{}</td><td class="ht">{}</td></tr>
<tr class="ht bb"><td class="vl">{}</td><td class="ht">{}</td></tr>
""".format(
condense(prevlabels) if prevlabels is not None else "previous",
prevline,
condense(verse_labels),
textline,
)
def print_chunk_coarse(text, verse_labels):
return """
<tr class="ht tb bb"><td class="vl">{}</td><td class="ht">{}</td></tr>
""".format(
condense(verse_labels),
text,
)
def print_clique(ii, ncliques):
return (
print_clique_fine(ii)
if len(ii) < ncliques * DEP_CLIQUE_RATIO / 100
else print_clique_coarse(ii)
)
def print_clique_fine(ii):
condensed = collections.OrderedDict()
for i in sorted(ii, key=lambda c: (-len(chunks[c]), c)):
chunk = chunks[i]
fword = chunk[0]
book = F.book.v(L.u(fword, otype="book")[0])
chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
verse = F.verse.v(L.u(fword, otype="verse")[0])
text = "".join(
"{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
for w in chunk
)
condensed.setdefault(text, []).append((book, chapter, verse))
result = []
nv = len(condensed.items())
prev = None
for (text, verse_labels) in condensed.items():
if prev is None:
if nv == 1:
result.append(print_chunk_fine(None, text, verse_labels, None))
else:
prev = text
prevlabels = verse_labels
continue
else:
result.append(print_chunk_fine(prev, text, verse_labels, prevlabels))
prev = text
prevlabels = None
return '<table class="ht">{}</table>\n'.format("".join(result))
def print_clique_coarse(ii):
condensed = collections.OrderedDict()
for i in sorted(ii, key=lambda c: (-len(chunks[c]), c))[0:LARGE_CLIQUE_SIZE]:
chunk = chunks[i]
fword = chunk[0]
book = F.book.v(L.u(fword, otype="book")[0])
chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
verse = F.verse.v(L.u(fword, otype="verse")[0])
text = "".join(
"{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
for w in chunk
)
condensed.setdefault(text, []).append((book, chapter, verse))
result = []
for (text, verse_labels) in condensed.items():
result.append(print_chunk_coarse(text, verse_labels))
if len(ii) > LARGE_CLIQUE_SIZE:
result.append(
print_chunk_coarse("+ {} ...".format(len(ii) - LARGE_CLIQUE_SIZE), [])
)
return '<table class="ht">{}</table>\n'.format("".join(result))
def index_clique(bnm, n, ii, ncliques):
return (
index_clique_fine(bnm, n, ii)
if len(ii) < ncliques * DEP_CLIQUE_RATIO / 100
else index_clique_coarse(bnm, n, ii)
)
def index_clique_fine(bnm, n, ii):
verse_labels = []
for i in sorted(ii, key=lambda c: (-len(chunks[c]), c)):
chunk = chunks[i]
fword = chunk[0]
book = F.book.v(L.u(fword, otype="book")[0])
chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
verse = F.verse.v(L.u(fword, otype="verse")[0])
verse_labels.append((book, chapter, verse))
reffl = "{}_{}".format(bnm, n // CLIQUES_PER_FILE)
return '<p><b>{}</b> <a href="{}.html#c_{}">{}</a></p>'.format(
n,
reffl,
n,
condense(verse_labels),
)
def index_clique_coarse(bnm, n, ii):
verse_labels = []
for i in sorted(ii, key=lambda c: (-len(chunks[c]), c))[0:LARGE_CLIQUE_SIZE]:
chunk = chunks[i]
fword = chunk[0]
book = F.book.v(L.u(fword, otype="book")[0])
chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
verse = F.verse.v(L.u(fword, otype="verse")[0])
verse_labels.append((book, chapter, verse))
reffl = "{}_{}".format(bnm, n // CLIQUES_PER_FILE)
extra = (
"+ {} ...".format(len(ii) - LARGE_CLIQUE_SIZE)
if len(ii) > LARGE_CLIQUE_SIZE
else ""
)
return '<p><b>{}</b> <a href="{}.html#c_{}">{}{}</a></p>'.format(
n,
reffl,
n,
condense(verse_labels),
extra,
)
def lines_chapter(c):
lines = []
for v in L.d(c, otype="verse"):
vl = F.verse.v(v)
text = "".join(
"{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
for w in L.d(v, otype="word")
)
lines.append("{} {}".format(vl, text.replace("\n", " ")))
return lines
def compare_chapters(c1, c2, lb1, lb2):
dh = difflib.HtmlDiff(wrapcolumn=80)
table_html = dh.make_table(
lines_chapter(c1),
lines_chapter(c2),
fromdesc=lb1,
todesc=lb2,
context=False,
numlines=5,
)
htext = """<html>{}<body>{}</body></html>""".format(diffhead, table_html)
return htext
```
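As a quick illustration of the `condense()` helper defined above, here is a self-contained copy applied to a hand-made list of verse labels (the labels are invented for the example):

```python
def condense(vlabels):
    # compress (book, chapter, verse) labels into a compact reference string
    cnd = ""
    (cur_b, cur_c) = (None, None)
    for (b, c, v) in vlabels:
        c = str(c)
        v = str(v)
        sep = (
            "" if cur_b is None
            else ". " if cur_b != b
            else "; " if cur_c != c
            else ", "
        )
        show_b = b + " " if cur_b != b else ""
        show_c = c + ":" if cur_b != b or cur_c != c else ""
        (cur_b, cur_c) = (b, c)
        cnd += "{}{}{}{}".format(sep, show_b, show_c, v)
    return cnd

print(condense([("Genesis", 1, 1), ("Genesis", 1, 2), ("Genesis", 2, 3), ("Exodus", 4, 5)]))
# Genesis 1:1, 2; 2:3. Exodus 4:5
```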
### 5.8.3 Compiling the table of experiments
Here we generate the table of experiments, complete with the coloring according to their assessments.
In[18]:
```
# generate the table of experiments
def gen_html(standalone=False):
global other_exps
TF.info(
"EXPERIMENT: Generating html report{}".format(
"(standalone)" if standalone else ""
)
)
stats = collections.Counter()
pre = (
"""
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
{}
</head>
<body>
""".format(
ecss
)
if standalone
else ""
)
post = (
"""
</body></html>
"""
if standalone
else ""
)
experiments = """
{}
{}
<table>
<tr><th>chunk type</th><th>chunk size</th><th>similarity method</th>{}</tr>
""".format(
pre, legend, "".join("<th>{}</th>".format(sim_thr) for sim_thr in SIMILARITIES)
)
for chunk_f in (True, False):
if chunk_f:
chunk_items = CHUNK_SIZES
else:
chunk_items = CHUNK_OBJECTS
chunk_lb = CHUNK_LBS[chunk_f]
for chunk_i in chunk_items:
for sim_m in SIM_METHODS:
set_matrix_threshold(sim_m=sim_m, chunk_o=chunk_i)
these_outputs = outputs.get(MATRIX_THRESHOLD, {})
experiments += "<tr><td>{}</td><td>{}</td><td>{}</td>".format(
CHUNK_LABELS[chunk_f],
chunk_i,
sim_m,
)
for sim_thr in SIMILARITIES:
okey = (chunk_lb, chunk_i, sim_m, sim_thr)
values = these_outputs.get(okey)
if values is None:
result = '<td class="mis"> </td>'
stats["mis"] += 1
else:
(npassages, ncliques, longest_clique_len) = values
cls = assess_exp(
chunk_f, npassages, ncliques, longest_clique_len
)
stats[cls] += 1
(lr_el, lr_lb) = ("", "")
if (
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
SIMILARITY_THRESHOLD,
) == (
chunk_lb,
chunk_i,
sim_m,
sim_thr,
):
lr_el = '<span class="lr">*</span>'
lr_lb = VALUE_LABELS["lr"]
result = """
<td class="{}" title="{}">{}
<span class="ps">{}</span><br/>
<a target="_blank" href="{}{}/{}_{}_{}_M{}_S{}.html"><span class="cl">{}</span></a><br/>
<span class="mx">{}</span>
</td>""".format(
cls,
lr_lb,
lr_el,
npassages,
"" if standalone else LOCAL_BASE_OUTP + "/",
EXPERIMENT_DIR,
chunk_lb,
chunk_i,
sim_m,
MATRIX_THRESHOLD,
sim_thr,
ncliques,
longest_clique_len,
)
experiments += result
experiments += "</tr>\n"
experiments += "</table>\n{}".format(post)
if standalone:
with open(EXPERIMENT_HTML, "w") as f:
f.write(experiments)
else:
other_exps = experiments
for stat in sorted(stats):
TF.info("EXPERIMENT: {:>3} {}".format(stats[stat], VALUE_LABELS[stat]))
TF.info("EXPERIMENT: Generated html report")
```
### 5.8.4 High level formatting functions
Here everything concerning output is brought together.
In[19]:
```
def assess_exp(cf, np, nc, ll):
return (
"out"
if cf
else "rec"
if ll > nc * REC_CLIQUE_RATIO / 100 and ll <= nc * DUB_CLIQUE_RATIO / 100
else "dep"
if ll > nc * DEP_CLIQUE_RATIO / 100
else "dub"
if ll > nc * DUB_CLIQUE_RATIO / 100
else "nor"
)
def printing():
global outputs, bin_cliques, base_name
TF.info(
"PRINT ({} {} {} M>{} S>{}): sorting out cliques".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
)
xt_cliques = {
xterse_clique(c) for c in cliques
} # chapter cliques as tuples of (b, ch) tuples
bin_cliques = {
c for c in xt_cliques if len(c) == 2
} # chapter cliques with exactly two chapters
# all chapters that occur in binary chapter cliques
meta["# BINARY CHAPTER DIFFS"] = len(bin_cliques)
# We generate one kind of info for binary chapter cliques (the majority of cases).
# The remaining cases are verse cliques that do not occur in such chapters, e.g. because they
# have member chunks in the same chapter, or in multiple (more than two) chapters.
ncliques = len(cliques)
chapters_ok = assess_exp(CHUNK_FIXED, len(passages), ncliques, l_c_l) in {
"rec",
"nor",
"dub",
}
cdoing = "involving" if chapters_ok else "skipping"
TF.info(
"PRINT ({} {} {} M>{} S>{}): formatting {} cliques {} {} binary chapter diffs".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
ncliques,
cdoing,
len(bin_cliques),
)
)
meta_html = "\n".join("{:<40} : {:>10}".format(k, str(meta[k])) for k in meta)
base_name = "{}_{}_{}_M{}_S{}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
param_spec = """
<table>
<tr><th>chunking method</th><td>{}</td></tr>
<tr><th>chunking description</th><td>{}</td></tr>
<tr><th>similarity method</th><td>{}</td></tr>
<tr><th>similarity threshold</th><td>{}</td></tr>
</table>
""".format(
CHUNK_LABELS[CHUNK_FIXED],
CHUNK_DESC,
SIMILARITY_METHOD,
SIMILARITY_THRESHOLD,
)
param_lab = "chunk-{}-{}-sim-{}-m{}-s{}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
index_name = base_name
all_name = "{}_{}".format("all", base_name)
cliques_name = "{}_{}".format("clique", base_name)
clique_links = []
clique_links.append(
("{}/{}.html".format(base_name, all_name), "Big list of all cliques")
)
nexist = 0
nnew = 0
if chapters_ok:
chapter_diffs = []
TF.info(
"PRINT ({} {} {} M>{} S>{}): Chapter diffs needed: {}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(bin_cliques),
)
)
bcc_text = "<p>These results look good, so a binary chapter comparison has been generated</p>"
for cl in sorted(bin_cliques):
lb1 = "{} {}".format(F.book.v(cl[0][0]), F.chapter.v(cl[0][1]))
lb2 = "{} {}".format(F.book.v(cl[1][0]), F.chapter.v(cl[1][1]))
hfilename = "{}_vs_{}.html".format(lb1, lb2).replace(" ", "_")
hfilepath = "{}/{}/{}".format(LOCAL_BASE_OUTP, CHAPTER_DIR, hfilename)
chapter_diffs.append(
(
lb1,
cl[0][1],
lb2,
cl[1][1],
"{}/{}/{}/{}".format(
SHEBANQ_TOOL,
LOCAL_BASE_OUTP,
CHAPTER_DIR,
hfilename,
),
)
)
if not os.path.exists(hfilepath):
htext = compare_chapters(cl[0][1], cl[1][1], lb1, lb2)
with open(hfilepath, "w") as f:
f.write(htext)
if VERBOSE:
TF.info(
"PRINT ({} {} {} M>{} S>{}): written {}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
hfilename,
)
)
nnew += 1
else:
nexist += 1
clique_links.append(
(
"../{}/{}".format(CHAPTER_DIR, hfilename),
"{} versus {}".format(lb1, lb2),
)
)
TF.info(
"PRINT ({} {} {} M>{} S>{}): Chapter diffs: {} newly created and {} already existing".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
nnew,
nexist,
)
)
else:
bcc_text = "<p>These results look dubious at best, so no binary chapter comparison has been generated</p>"
allgeni_html = (
index_clique(cliques_name, i, c, ncliques) for (i, c) in enumerate(cliques)
)
allgen_htmls = []
allgen_html = ""
for (i, c) in enumerate(cliques):
if i % CLIQUES_PER_FILE == 0:
if i > 0:
allgen_htmls.append(allgen_html)
allgen_html = ""
allgen_html += '<h3><a name="c_{}">Clique {}</a></h3>\n{}'.format(
i, i, print_clique(c, ncliques)
)
allgen_htmls.append(allgen_html)
index_html_tpl = """
{}
<h1>Binary chapter comparisons</h1>
{}
{}
"""
content_file_tpl = """<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>{}</title>
<style type="text/css">
{}
</style>
</head>
<body>
<h1>{}</h1>
{}
<p><a href="#meta">more parameters and stats</a></p>
{}
<h1><a name="meta">Parameters and stats</a></h1>
<pre>{}</pre>
</body>
</html>"""
a_tpl_file = '<p><a target="_blank" href="{}">{}</a></p>'
index_html_file = index_html_tpl.format(
a_tpl_file.format(*clique_links[0]),
bcc_text,
"\n".join(a_tpl_file.format(*c) for c in clique_links[1:]),
)
listing_html = "{}\n".format(
"\n".join(allgeni_html),
)
for (subdir, fname, content_html, tit) in (
(None, index_name, index_html_file, "Index " + param_lab),
(base_name, all_name, listing_html, "Listing " + param_lab),
(base_name, cliques_name, allgen_htmls, "Cliques " + param_lab),
):
subdir = "" if subdir is None else (subdir + "/")
subdirabs = "{}/{}/{}".format(LOCAL_BASE_OUTP, EXPERIMENT_DIR, subdir)
if not os.path.exists(subdirabs):
os.makedirs(subdirabs)
if type(content_html) is list:
for (i, c_h) in enumerate(content_html):
fn = "{}_{}".format(fname, i)
t = "{}_{}".format(tit, i)
with open(
"{}/{}/{}{}.html".format(
LOCAL_BASE_OUTP, EXPERIMENT_DIR, subdir, fn
),
"w",
) as f:
f.write(
content_file_tpl.format(t, css, t, param_spec, c_h, meta_html)
)
else:
with open(
"{}/{}/{}{}.html".format(
LOCAL_BASE_OUTP, EXPERIMENT_DIR, subdir, fname
),
"w",
) as f:
f.write(
content_file_tpl.format(
tit, css, tit, param_spec, content_html, meta_html
)
)
destination = outputs.setdefault(MATRIX_THRESHOLD, {})
destination[(CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD, SIMILARITY_THRESHOLD)] = (
len(passages),
len(cliques),
l_c_l,
)
TF.info(
"PRINT ({} {} {} M>{} S>{}): formatted {} cliques ({} files) {} {} binary chapter diffs".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(cliques),
len(allgen_htmls),
cdoing,
len(bin_cliques),
)
)
```
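The nested conditional in `assess_exp` is easier to verify in isolation. Here is a standalone copy with assumed ratio settings (the real values of `REC_CLIQUE_RATIO`, `DUB_CLIQUE_RATIO`, and `DEP_CLIQUE_RATIO` are defined elsewhere in the notebook):

```python
# assumed values for the example; the notebook sets its own elsewhere
REC_CLIQUE_RATIO = 5
DUB_CLIQUE_RATIO = 10
DEP_CLIQUE_RATIO = 20

def assess_exp(cf, np, nc, ll):
    # cf: fixed chunking?  np: #passages  nc: #cliques  ll: longest clique length
    return (
        "out" if cf
        else "rec" if ll > nc * REC_CLIQUE_RATIO / 100 and ll <= nc * DUB_CLIQUE_RATIO / 100
        else "dep" if ll > nc * DEP_CLIQUE_RATIO / 100
        else "dub" if ll > nc * DUB_CLIQUE_RATIO / 100
        else "nor"
    )

assert assess_exp(True, 0, 100, 0) == "out"    # fixed chunks are always "out"
assert assess_exp(False, 0, 100, 7) == "rec"   # between 5% and 10%: recommended
assert assess_exp(False, 0, 100, 15) == "dub"  # between 10% and 20%: dubious
assert assess_exp(False, 0, 100, 25) == "dep"  # above 20%: deprecated
assert assess_exp(False, 0, 100, 3) == "nor"   # at or below 5%: normal
```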
## 5.9 Running experiments
Here we define the workflows for doing a single experiment, and then for doing all experiments.
In[20]:
```
outputs = {}
def writeoutputs():
global outputs
with open(EXPERIMENT_PATH, "wb") as f:
pickle.dump(outputs, f, protocol=PICKLE_PROTOCOL)
def readoutputs():
global outputs
if not os.path.exists(EXPERIMENT_PATH):
outputs = {}
else:
with open(EXPERIMENT_PATH, "rb") as f:
outputs = pickle.load(f)
def do_experiment(chunk_f, chunk_i, sim_m, sim_thr, do_index):
if do_index:
readoutputs()
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
chunking(do_chunk)
preparing(do_prep)
similarity(do_sim)
cliqueing(do_clique)
printing()
if do_index:
writeoutputs()
gen_html()
def do_only_chunk(chunk_f, chunk_i):
do_chunk = do_params_chunk(chunk_f, chunk_i)
chunking(do_chunk)
def reset_experiments():
global outputs
readoutputs()
outputs = {}
reset_params()
writeoutputs()
gen_html()
def do_all_experiments(no_fixed=False, only_object=None):
global outputs
reset_experiments()
for chunk_f in (False,) if no_fixed else (True, False):
if chunk_f:
chunk_items = CHUNK_SIZES
else:
chunk_items = CHUNK_OBJECTS if only_object is None else (only_object,)
for chunk_i in chunk_items:
for sim_m in SIM_METHODS:
for sim_thr in SIMILARITIES:
do_experiment(chunk_f, chunk_i, sim_m, sim_thr, False)
writeoutputs()
gen_html()
gen_html(standalone=True)
def do_all_chunks(no_fixed=False, only_object=None):
global outputs
reset_experiments()
for chunk_f in (False,) if no_fixed else (True, False):
if chunk_f:
chunk_items = CHUNK_SIZES
else:
chunk_items = CHUNK_OBJECTS if only_object is None else (only_object,)
for chunk_i in chunk_items:
do_only_chunk(chunk_f, chunk_i)
def show_all_experiments():
readoutputs()
gen_html()
gen_html(standalone=True)
```
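The `writeoutputs()`/`readoutputs()` pair above is a plain pickle round-trip of the `outputs` dictionary; a minimal self-contained sketch (path and keys invented for the example):

```python
import os
import pickle
import tempfile

# keys mirror (chunk label, chunk item, similarity method, similarity threshold)
outputs = {50: {("L", "verse", "SET", 75): (100, 10, 3)}}

path = os.path.join(tempfile.mkdtemp(), "experiments.pkl")
with open(path, "wb") as f:
    pickle.dump(outputs, f)    # as in writeoutputs()
with open(path, "rb") as f:
    restored = pickle.load(f)  # as in readoutputs()
assert restored == outputs
```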
# 6a
# TF features
Based on selected similarity matrices, we produce edge features between verses, containing weighted links to parallel verses.
The features to deliver are called `crossrefSET`, `crossrefLCS`, and `crossref`.
These are edge features; all are symmetric, and hence redundant: for every node, the *from* and *to* edges are identical.
The `SET` variant is based on set similarity, the `LCS` one on longest-common-subsequence similarity.
The `crossref` feature takes the union of both methods, with the average confidence.
The weight is the similarity, as an integer percentage, as it comes from the similarity matrix.
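Since the edges are symmetric, every similarity is stored in both directions. A minimal sketch of the data shape (node numbers invented):

```python
crossref = {}

def add_link(v1, v2, sim):
    # store the weighted edge in both directions, as the notebook does
    crossref.setdefault(v1, {})[v2] = sim
    crossref.setdefault(v2, {})[v1] = sim

add_link(1414190, 1414200, 82)
assert crossref[1414190][1414200] == 82
assert crossref[1414200][1414190] == 82  # the mirror edge is identical
```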
## Discussion
We only produce the results of the similarity computation (the matrix); we do not do the cliqueing.
There are many ways to make cliques, and that can easily be done by users of the data, once the
matrix results are in place.
We also do not produce pretty outputs, chapter diffs and other goodies.
Just the raw similarity data.
The matrix computation is expensive.
We use fixed settings:
* verse chunks
* `SET` method / `LCS` method,
* matrix threshold 50 / 60
* similarity threshold 75
That is, we compute a matrix that contains all pairs with similarity above 50 or 60
depending on whether we do the `SET` method or the `LCS` method.
From that matrix, we only use the similarities above 75.
This gives us room to play without recomputing the matrix.
We do not want to redo this computation if it can be avoided.
Verse similarity is not very sensitive to changes in the encoding.
It is very likely that similar verses in one version of the data agree with similar
verses in all other versions.
However, the node numbers of verses may change from version to version, so that part
must be done again for each version.
This is how we proceed:
* the matrix computation gives us triples (v1, v2, d), where v1 and v2 are verse nodes and d is their similarity
* we store the result of the matrix computation in a tab-separated file with the following fields:
* method, v1, v2, d, v1Ref, v2Ref, where v1Ref and v2Ref are verse references,
each containing exactly 3 fields: book, chapter, verse
* NB: the similarity table has only one entry for each pair of similar verses per method.
If (v1, v2) is in the table, (v2, v1) is not in the table, per method.
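A hedged sketch of what one line of this table looks like and how it splits back into its fields (all values invented for the example):

```python
line = "SET\t1414190\t1414300\t82\tGenesis\t10\t2\t1_Chronicles\t1\t5\n"

(method, v1, v2, sim,
 book1, chapter1, verse1,
 book2, chapter2, verse2) = line.rstrip("\n").split("\t")

assert method == "SET"
assert int(sim) == 82
assert (book1, int(chapter1), int(verse1)) == ("Genesis", 10, 2)
assert (book2, int(chapter2), int(verse2)) == ("1_Chronicles", 1, 5)
```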
When we run this notebook for the pipeline, we check for the presence of this file.
If it is present, we use the verse references in it to compute the verse nodes that are valid for the
version we are going to produce.
That gives us all the data we need, so we can skip the matrix computation.
If the file is not present, we have to compute the matrix.
There will be a parameter, called FORCE_MATRIX, which can enforce a re-computation of the matrix.
We need some utility functions geared to TF feature production.
The `get_verse()` function is simpler, and we do not have to run full experiments.
In[21]:
```
def writeSimTable(similars):
with open(TF_TABLE, "w") as h:
for entry in similars:
h.write("{}\n".format("\t".join(str(x) for x in entry)))
def readSimTable():
similars = []
stats = set()
with open(TF_TABLE) as h:
for line in h:
(
method,
v1,
v2,
sim,
book1,
chapter1,
verse1,
book2,
chapter2,
verse2,
) = line.rstrip("\n").split("\t")
verseNode1 = T.nodeFromSection((book1, int(chapter1), int(verse1)))
verseNode2 = T.nodeFromSection((book2, int(chapter2), int(verse2)))
if verseNode1 != int(v1):
stats.add(verseNode1)
if verseNode2 != int(v2):
stats.add(verseNode2)
similars.append(
(
method,
verseNode1,
verseNode2,
int(sim),
book1,
int(chapter1),
int(verse1),
book2,
int(chapter2),
int(verse2),
)
)
nStats = len(stats)
if nStats:
utils.caption(
0,
"\t\tINFO: {} verse nodes have been changed between versions".format(
nStats
),
)
utils.caption(0, "\t\tINFO: We will save and use the recomputed ones")
writeSimTable(similars)
else:
utils.caption(
0, "\t\tINFO: All verse nodes are the same as in the previous version"
)
return similars
def makeSimTable():
similars = []
for (method, similarityCutoff) in (
("SET", 75),
("LCS", 75),
):
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
False, "verse", method, similarityCutoff
)
chunking(do_chunk)
preparing(do_prep)
similarity(do_sim or FORCE_MATRIX)
theseSimilars = []
for ((chunk1, chunk2), sim) in sorted(
(x, d) for (x, d) in chunk_dist.items() if d >= similarityCutoff
):
verseNode1 = L.u(chunks[chunk1][0], otype="verse")[0]
verseNode2 = L.u(chunks[chunk2][0], otype="verse")[0]
simInt = int(round(sim))
heading1 = T.sectionFromNode(verseNode1)
heading2 = T.sectionFromNode(verseNode2)
theseSimilars.append(
(method, verseNode1, verseNode2, simInt, *heading1, *heading2)
)
utils.caption(
0,
"\tMethod {}: found {} similar pairs of verses".format(
method, len(theseSimilars)
),
)
similars.extend(theseSimilars)
writeSimTable(similars)
return similars
```
In[22]:
```
utils.caption(4, "CROSSREFS: Fetching crossrefs")
xTable = os.path.exists(TF_TABLE)
if FORCE_MATRIX:
utils.caption(
0,
"\t{} requested of {}".format(
"Recomputing" if xTable else "computing",
TF_TABLE,
),
)
else:
if xTable:
utils.caption(0, "\tReading existing {}".format(TF_TABLE))
else:
utils.caption(0, "\tComputing missing {}".format(TF_TABLE))
if FORCE_MATRIX or not xTable:
similars = makeSimTable()
else:
similars = readSimTable()
```
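The decision above boils down to a small truth table; a standalone sketch:

```python
def choose_action(force_matrix, table_exists):
    # recompute when forced, or when the table is missing; otherwise read it
    return "compute" if force_matrix or not table_exists else "read"

assert choose_action(True, True) == "compute"    # forced recomputation
assert choose_action(False, False) == "compute"  # missing table
assert choose_action(False, True) == "read"      # reuse the existing table
```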
In[23]:
```
if not SCRIPT:
print("\n".join(sorted(repr(sim) for sim in similars if sim[0] == "LCS")[0:10]))
print("\n".join(sorted(repr(sim) for sim in similars if sim[0] == "SET")[0:10]))
crossrefData = {}
otherMethod = dict(LCS="SET", SET="LCS")
for (method, v1, v2, sim, *x) in similars:
crossrefData.setdefault(method, {}).setdefault(v1, {})[v2] = sim
crossrefData.setdefault(method, {}).setdefault(v2, {})[v1] = sim
omethod = otherMethod[method]
otherSim = crossrefData.get(omethod, {}).get(v1, {}).get(v2, None)
thisSim = sim if otherSim is None else int(round((otherSim + sim) / 2))
crossrefData.setdefault("", {}).setdefault(v1, {})[v2] = thisSim
crossrefData.setdefault("", {}).setdefault(v2, {})[v1] = thisSim
```
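The confidence of the combined `crossref` feature is the rounded average of the two methods when both are present, else the single available value; a minimal sketch of that step:

```python
def combined_sim(sim, other_sim):
    # average the two methods' similarities; fall back to the single value
    return sim if other_sim is None else int(round((other_sim + sim) / 2))

assert combined_sim(75, 80) == 78  # round(77.5) rounds to the even 78
assert combined_sim(70, None) == 70
```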
# Generating parallels module for Text-Fabric
We generate the feature `crossref`.
It is an edge feature between verse nodes, with the similarity as weight.
```
utils.caption(4, "Writing TF parallel features")
newFeatureStr = "crossref crossrefSET crossrefLCS"
newFeatures = newFeatureStr.strip().split()
genericMetaPath = f"{thisRepo}/yaml/generic.yaml"
parallelsMetaPath = f"{thisRepo}/yaml/parallels.yaml"
with open(genericMetaPath) as fh:
genericMeta = yaml.load(fh, Loader=yaml.FullLoader)
genericMeta["version"] = VERSION
with open(parallelsMetaPath) as fh:
parallelsMeta = formatMeta(yaml.load(fh, Loader=yaml.FullLoader))
metaData = {"": genericMeta, **parallelsMeta}
nodeFeatures = dict()
edgeFeatures = dict()
for method in [""] + list(otherMethod):
edgeFeatures["crossref{}".format(method)] = crossrefData[method]
for newFeature in newFeatures:
metaData[newFeature]["valueType"] = "int"
metaData[newFeature]["edgeValues"] = True
TF = Fabric(locations=thisTempTf, silent=True)
TF.save(nodeFeatures=nodeFeatures, edgeFeatures=edgeFeatures, metaData=metaData)
```
# Generating simple crossref notes for SHEBANQ
We base them on the average of both methods and supply the confidence.
In[33]:
```
MAX_REFS = 10
def condenseX(vlabels):
cnd = []
(cur_b, cur_c) = (None, None)
for (b, c, v, d) in vlabels:
sep = (
""
if cur_b is None
else ". "
if cur_b != b
else "; "
if cur_c != c
else ", "
)
show_b = b + " " if cur_b != b else ""
show_c = str(c) + ":" if cur_b != b or cur_c != c else ""
(cur_b, cur_c) = (b, c)
cnd.append("{}[{}{}{}{}]".format(sep, show_b, show_c, v, d))
return cnd
crossrefBase = crossrefData[""]
refsGrouped = []
nCrossrefs = 0
for (x, refs) in crossrefBase.items():
vys = sorted(refs.keys())
nCrossrefs += len(vys)
currefs = []
for vy in vys:
nr = len(currefs)
if nr == MAX_REFS:
refsGrouped.append((x, tuple(currefs)))
currefs = []
currefs.append(vy)
if len(currefs):
refsGrouped.append((x, tuple(currefs)))
refsCompiled = []
for (x, vys) in refsGrouped:
vysd = [
(*T.sectionFromNode(vy, lang="la"), " ~{}%".format(crossrefBase[x][vy]))
for vy in vys
]
vysl = condenseX(vysd)
these_refs = []
for (i, vy) in enumerate(vysd):
link_text = vysl[i]
link_target = "{} {}:{}".format(vy[0], vy[1], vy[2])
these_refs.append("{}({})".format(link_text, link_target))
refsCompiled.append((x, " ".join(these_refs)))
utils.caption(
0,
"Compiled {} cross references into {} notes".format(nCrossrefs, len(refsCompiled)),
)
```
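The grouping loop above batches a verse's cross references into notes of at most `MAX_REFS` targets; an equivalent standalone sketch:

```python
MAX_REFS = 10

def group_refs(refs, size=MAX_REFS):
    # split a sorted list of targets into batches of at most `size`
    return [tuple(refs[i:i + size]) for i in range(0, len(refs), size)]

groups = group_refs(list(range(25)))
assert [len(g) for g in groups] == [10, 10, 5]
assert groups[2] == (20, 21, 22, 23, 24)
```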
In[34]:
```
sfields = """
version
book
chapter
verse
clause_atom
is_shared
is_published
status
keywords
ntext
""".strip().split()
sfields_fmt = ("{}\t" * (len(sfields) - 1)) + "{}\n"
ofs = open("{}/{}".format(thisNotes, notesFile), "w")
ofs.write("{}\n".format("\t".join(sfields)))
for (v, refs) in refsCompiled:
firstWord = L.d(v, otype="word")[0]
ca = F.number.v(L.u(firstWord, otype="clause_atom")[0])
(bk, ch, vs) = T.sectionFromNode(v, lang="la")
ofs.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
refs,
)
)
utils.caption(0, "Generated {} notes".format(len(refsCompiled)))
ofs.close()
```
# Diffs
Check differences with previous versions.
In[35]:
```
utils.checkDiffs(thisTempTf, thisTf, only=set(newFeatures))
```
# Deliver
Copy the new TF feature from the temporary location where it has been created to its final destination.
In[36]:
```
utils.deliverDataset(thisTempTf, thisTf)
```
# Compile TF
In[38]:
```
utils.caption(4, "Load and compile the new TF features")
TF = Fabric(locations=[coreTf, thisTf], modules=[""])
api = TF.load(newFeatureStr)
api.makeAvailableIn(globals())
```
# Examples
We list all the crossrefs that the verses of Genesis 10 are involved in.
In[39]:
```
utils.caption(4, "Test: crossrefs of Genesis 10")
chapter = ("Genesis", 10)
chapterNode = T.nodeFromSection(chapter)
startVerses = {}
for method in ["", "SET", "LCS"]:
utils.caption(0, "\tMethod {}".format(method))
for verseNode in L.d(chapterNode, otype="verse"):
crossrefs = Es("crossref{}".format(method)).f(verseNode)
if crossrefs:
startVerses[T.sectionFromNode(verseNode)] = crossrefs
utils.caption(0, "\t\t{} start verses".format(len(startVerses)))
for (start, crossrefs) in sorted(startVerses.items()):
utils.caption(0, "\t\t{} {}:{}".format(*start), continuation=True)
for (target, confidence) in crossrefs:
utils.caption(
0,
"\t\t{:>20} {:<20} confidende {:>3}%".format(
"-" * 10 + ">",
"{} {}:{}".format(*T.sectionFromNode(target)),
confidence,
),
)
```
In[29]:
```
if SCRIPT:
stop(good=True)
```
# 6b. SHEBANQ annotations
The code below generates extensive crossref notes for `4b`, including clique overviews and chapter diffs.
But since the pipeline of October 2017 we generate much simpler notes; that code is above.
We retain this code here in case we want to expand the crossref functionality again in the future.
Based on selected similarity matrices, we produce a SHEBANQ note set of cross references for similar passages.
In[30]:
```
def get_verse(i, ca=False):
return get_verse_w(chunks[i][0], ca=ca)
def get_verse_o(o, ca=False):
return get_verse_w(L.d(o, otype="word")[0], ca=ca)
def get_verse_w(w, ca=False):
book = F.book.v(L.u(w, otype="book")[0])
chapter = F.chapter.v(L.u(w, otype="chapter")[0])
verse = F.verse.v(L.u(w, otype="verse")[0])
if ca:
ca = F.number.v(L.u(w, otype="clause_atom")[0])
return (book, chapter, verse, ca) if ca else (book, chapter, verse)
def key_verse(x):
return (book_rank[x[0]], int(x[1]), int(x[2]))
MAX_REFS = 10
def condensex(vlabels):
cnd = []
(cur_b, cur_c) = (None, None)
for (b, c, v, d) in vlabels:
sep = (
""
if cur_b is None
else ". "
if cur_b != b
else "; "
if cur_c != c
else ", "
)
show_b = b + " " if cur_b != b else ""
show_c = c + ":" if cur_b != b or cur_c != c else ""
(cur_b, cur_c) = (b, c)
cnd.append("{}{}{}{}{}".format(sep, show_b, show_c, v, d))
return cnd
dfields = """
book1
chapter1
verse1
book2
chapter2
verse2
similarity
""".strip().split()
dfields_fmt = ("{}\t" * (len(dfields) - 1)) + "{}\n"
def get_crossrefs():
global crossrefs
TF.info("CROSSREFS: Fetching crossrefs")
crossrefs_proto = {}
crossrefs = {}
(chunk_f, chunk_i, sim_m) = SHEBANQ_MATRIX
sim_thr = SHEBANQ_SIMILARITY
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
TF.info(
"CROSSREFS ({} {} {} S>{})".format(CHUNK_LBS[chunk_f], chunk_i, sim_m, sim_thr)
)
crossrefs_proto = {x for x in chunk_dist.items() if x[1] >= sim_thr}
TF.info(
"CROSSREFS ({} {} {} S>{}): found {} pairs".format(
CHUNK_LBS[chunk_f],
chunk_i,
sim_m,
sim_thr,
len(crossrefs_proto),
)
)
f = open(CROSSREF_DB_PATH, "w")
f.write("{}\n".format("\t".join(dfields)))
for ((x, y), d) in crossrefs_proto:
vx = get_verse(x)
vy = get_verse(y)
rd = int(round(d))
crossrefs.setdefault(x, {})[vy] = rd
crossrefs.setdefault(y, {})[vx] = rd
f.write(dfields_fmt.format(*(vx + vy + (rd,))))
total = sum(len(x) for x in crossrefs.values())
f.close()
TF.info(
"CROSSREFS: Found {} crossreferences and wrote {} pairs".format(
total, len(crossrefs_proto)
)
)
def get_specific_crossrefs(chunk_f, chunk_i, sim_m, sim_thr, write_to):
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
chunking(do_chunk)
preparing(do_prep)
similarity(do_sim)
TF.info("CROSSREFS: Fetching crossrefs")
crossrefs_proto = {}
crossrefs = {}
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
TF.info(
"CROSSREFS ({} {} {} S>{})".format(CHUNK_LBS[chunk_f], chunk_i, sim_m, sim_thr)
)
crossrefs_proto = {x for x in chunk_dist.items() if x[1] >= sim_thr}
TF.info(
"CROSSREFS ({} {} {} S>{}): found {} pairs".format(
CHUNK_LBS[chunk_f],
chunk_i,
sim_m,
sim_thr,
len(crossrefs_proto),
)
)
f = open("files/{}".format(write_to), "w")
f.write("{}\n".format("\t".join(dfields)))
for ((x, y), d) in crossrefs_proto:
vx = get_verse(x)
vy = get_verse(y)
rd = int(round(d))
crossrefs.setdefault(x, {})[vy] = rd
crossrefs.setdefault(y, {})[vx] = rd
f.write(dfields_fmt.format(*(vx + vy + (rd,))))
total = sum(len(x) for x in crossrefs.values())
f.close()
TF.info(
"CROSSREFS: Found {} crossreferences and wrote {} pairs".format(
total, len(crossrefs_proto)
)
)
def compile_refs():
global refs_compiled
refs_grouped = []
for x in sorted(crossrefs):
refs = crossrefs[x]
vys = sorted(refs.keys(), key=key_verse)
currefs = []
for vy in vys:
nr = len(currefs)
if nr == MAX_REFS:
refs_grouped.append((x, tuple(currefs)))
currefs = []
currefs.append(vy)
if len(currefs):
refs_grouped.append((x, tuple(currefs)))
refs_compiled = []
for (x, vys) in refs_grouped:
vysd = [(vy[0], vy[1], vy[2], " ~{}%".format(crossrefs[x][vy])) for vy in vys]
vysl = condensex(vysd)
these_refs = []
for (i, vy) in enumerate(vysd):
link_text = vysl[i]
link_target = "{} {}:{}".format(vy[0], vy[1], vy[2])
these_refs.append("[{}]({})".format(link_text, link_target))
refs_compiled.append((x, " ".join(these_refs)))
TF.info(
"CROSSREFS: Compiled cross references into {} notes".format(len(refs_compiled))
)
def get_chapter_diffs():
global chapter_diffs
chapter_diffs = []
for cl in sorted(bin_cliques):
lb1 = "{} {}".format(F.book.v(cl[0][0]), F.chapter.v(cl[0][1]))
lb2 = "{} {}".format(F.book.v(cl[1][0]), F.chapter.v(cl[1][1]))
hfilename = "{}_vs_{}.html".format(lb1, lb2).replace(" ", "_")
chapter_diffs.append(
(
lb1,
cl[0][1],
lb2,
cl[1][1],
"{}/{}/{}/{}".format(
SHEBANQ_TOOL,
LOCAL_BASE_OUTP,
CHAPTER_DIR,
hfilename,
),
)
)
TF.info("CROSSREFS: Added {} chapter diffs".format(2 * len(chapter_diffs)))
def get_clique_refs():
global clique_refs
clique_refs = []
for (i, c) in enumerate(cliques):
for j in c:
seq = i // CLIQUES_PER_FILE
clique_refs.append(
(
j,
i,
"{}/{}/{}/{}/clique_{}_{}.html#c_{}".format(
SHEBANQ_TOOL,
LOCAL_BASE_OUTP,
EXPERIMENT_DIR,
base_name,
base_name,
seq,
i,
),
)
)
TF.info("CROSSREFS: Added {} clique references".format(len(clique_refs)))
sfields = """
version
book
chapter
verse
clause_atom
is_shared
is_published
status
keywords
ntext
""".strip().split()
sfields_fmt = ("{}\t" * (len(sfields) - 1)) + "{}\n"
def generate_notes():
with open(NOTES_PATH, "w") as f:
f.write("{}\n".format("\t".join(sfields)))
x = next(F.otype.s("word"))
(bk, ch, vs, ca) = get_verse(x, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"""The crossref notes are the result of a computation without manual tweaks.
Parameters: chunk by verse, similarity method SET with threshold 65.
[Here](tool=parallel) is an account of the generation method.""".replace(
"\n", " "
),
)
)
for (lb1, ch1, lb2, ch2, fl) in chapter_diffs:
(bk1, ch1, vs1, ca1) = get_verse_o(ch1, ca=True)
(bk2, ch2, vs2, ca2) = get_verse_o(ch2, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk1,
ch1,
vs1,
ca1,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"[chapter diff with {}](tool:{})".format(lb2, fl),
)
)
f.write(
sfields_fmt.format(
VERSION,
bk2,
ch2,
vs2,
ca2,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"[chapter diff with {}](tool:{})".format(lb1, fl),
)
)
for (x, refs) in refs_compiled:
(bk, ch, vs, ca) = get_verse(x, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
refs,
)
)
for (chunk, clique, fl) in clique_refs:
(bk, ch, vs, ca) = get_verse(chunk, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"[all variants (clique {})](tool:{})".format(clique, fl),
)
)
TF.info(
"CROSSREFS: Generated {} notes".format(
1 + len(refs_compiled) + 2 * len(chapter_diffs) + len(clique_refs)
)
)
def crossrefs2shebanq():
expr = SHEBANQ_MATRIX + (SHEBANQ_SIMILARITY,)
do_experiment(*(expr + (True,)))
get_crossrefs()
compile_refs()
get_chapter_diffs()
get_clique_refs()
generate_notes()
```
# 7. Main
In the cell below you can select the experiments you want to carry out.
The previous cells contain just definitions and parameters.
The next cell will do work.
If none of the matrices and cliques have been computed before on the system where this runs, doing all experiments might take multiple hours (4-8).
```
reset_params()

# Choose which of the following experiments to run:
do_experiment(False, 'sentence', 'LCS', 60, False)
do_all_experiments()
do_all_experiments(no_fixed=True, only_object='chapter')
crossrefs2shebanq()
show_all_experiments()
get_specific_crossrefs(False, 'verse', 'LCS', 60, 'crossrefs_lcs_db.txt')
do_all_chunks()
```
```
HTML(ecss)
```
# 8. Overview of the similarities
Here are the plots of two similarity matrices
* with verses as chunks and SET as similarity method
* with verses as chunks and LCS as similarity method
Horizontally you see the degree of similarity from 0 to 100%, vertically the number of pairs that have that (rounded) similarity. This axis is logarithmic.
```
do_experiment(False, "verse", "SET", 60, False)
distances = collections.Counter()
for (x, d) in chunk_dist.items():
distances[int(round(d))] += 1
x = range(MATRIX_THRESHOLD, 101)
fig = plt.figure(figsize=[15, 4])
plt.plot(x, [math.log(max((1, distances[y]))) for y in x], "b-")
plt.axis([MATRIX_THRESHOLD, 101, 0, 15])
plt.xlabel("similarity as %")
plt.ylabel("log # similarities")
plt.xticks(x, x, rotation="vertical")
plt.margins(0.2)
plt.subplots_adjust(bottom=0.15)
plt.title("distances")
```
```
do_experiment(False, "verse", "LCS", 60, False)
distances = collections.Counter()
for (x, d) in chunk_dist.items():
distances[int(round(d))] += 1
x = range(MATRIX_THRESHOLD, 101)
fig = plt.figure(figsize=[15, 4])
plt.plot(x, [math.log(max((1, distances[y]))) for y in x], "b-")
plt.axis([MATRIX_THRESHOLD, 101, 0, 15])
plt.xlabel("similarity as %")
plt.ylabel("log # similarities")
plt.xticks(x, x, rotation="vertical")
plt.margins(0.2)
plt.subplots_adjust(bottom=0.15)
plt.title("distances")
```
# Vinicius Augusto de Souza - RA: 1997530
-------------------------------------------------------------------------------------------------------------------------------
```
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.metrics import classification_report, confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sn
import os
epochs = 100 # number of times to run the algorithm; one epoch is one full pass over the training set
batch = 16 # number of samples loaded per step
# Load the ResNet50V2 model with the weights learned on ImageNet, without the dense top layer (include_top=False)
base_model = tf.keras.applications.ResNet50V2(weights='imagenet', include_top=False)
# The rest of the model and its layers are discussed below
# x receives the output of ResNet50V2
x=base_model.output
# New configuration for the model
# Append a GlobalAveragePooling2D layer after x and assign this node back to x (so x is the top again)
x=tf.keras.layers.GlobalAveragePooling2D()(x)
# Append a dense layer with 128 neurons and ReLU activation; assign this node back to x
x=tf.keras.layers.Dense(128,activation='relu')(x)
# Append a dense layer with 64 neurons and ReLU activation; assign this node back to x
x=tf.keras.layers.Dense(64,activation='relu')(x)
# Append a dense layer with 32 neurons and ReLU activation; assign this node back to x
x=tf.keras.layers.Dense(32,activation='relu')(x)
# Append a Dropout layer after x; here 20% of the neurons are dropped during training
x=tf.keras.layers.Dropout(0.2)(x)
# Append a dense layer with 7 neurons (seven classes) and softmax activation (probability distribution); assign this node to preds
preds=tf.keras.layers.Dense(7,activation='softmax')(x)
# Define the final model
model=tf.keras.models.Model(inputs=base_model.input,outputs=preds)
# Show the final model and its structure
model.summary()
# Freeze the layers already trained on ImageNet; we only want to retrain the dense layers
for l in model.layers:
if l.name.split('_')[0] != 'dense':
l.trainable=False
else:
l.trainable=True
# Generator that will collect all training images, preprocessing them with the ResNet50V2 method
train_data_gen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=tf.keras.applications.resnet_v2.preprocess_input)
# Generator that will collect all test images, preprocessing them with the ResNet50V2 method
test_data_gen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=tf.keras.applications.resnet_v2.preprocess_input)
# Stop training when a monitored metric has stopped improving.
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=10)
# LOADING OUR OWN DATASET FOR USE
# Define the training image generator
train_generator = train_data_gen.flow_from_directory('data_tt_g/train',
target_size=(128, 128), # image size for the generator
batch_size=batch,
class_mode='categorical',
shuffle=True)
# Define the test image generator
test_generator = test_data_gen.flow_from_directory('data_tt_g/test',
target_size=(128, 128), # image size for the generator
batch_size=batch,
class_mode='categorical',
shuffle=False) # keep a fixed order so predictions align with test_generator.classes below
lr = tf.keras.optimizers.Adam(learning_rate=0.0001) # set the optimiser learning rate
model.compile(optimizer=lr, loss='categorical_crossentropy', metrics=['accuracy'])
# Define the step counts
step_size_train = train_generator.n//train_generator.batch_size
step_size_test = test_generator.n//test_generator.batch_size
# Train and validate the model
history = model.fit_generator(generator=train_generator,
steps_per_epoch=step_size_train,
epochs=epochs,
validation_data=test_generator,
validation_steps=step_size_test,
callbacks=[callback])
# Evaluate the model
loss_train, train_acc = model.evaluate_generator(train_generator, steps=step_size_train)
loss_test, test_acc = model.evaluate_generator(test_generator, steps=step_size_test)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# Present the results in plots
plt.title('Loss')
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
# Create plots to visualise the results
print()
print()
plt.title('Accuracy')
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='test')
plt.legend()
plt.show()
print('Creating classifications..')
labels = os.listdir('Database')
print('Labels', labels)
# Build structures for the evaluation metrics; this step takes a bit longer
Y_pred = model.predict_generator(test_generator)
print('Preds Created')
y_pred = np.argmax(Y_pred, axis=1)
print('Preds 1D created')
classification = classification_report(test_generator.classes, y_pred, target_names=labels)
print('----------------CLASSIFICATION--------------')
print(classification)
matrix = confusion_matrix(test_generator.classes, y_pred)
df_cm = pd.DataFrame(matrix, index = [i for i in range(7)],
columns = [i for i in range(7)])
plt.figure(figsize = (10,7))
sn.heatmap(df_cm, annot=True, linewidths=2.5)
```
```
# import libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sb
from matplotlib import rcParams
%matplotlib inline
rcParams['figure.figsize'] = 5, 4
sb.set_style('whitegrid')
# take a look at the first 10 rows of the data file
data = pd.read_csv("~/Downloads/creditcard.csv")
data.head(10)
# get insight into what's inside the data file
data.info()
data['Hour'] = data['Time'].apply(lambda x: np.round(float(x)/3600))
data.pivot_table(values='Amount',index='Hour',columns='Class',aggfunc='count')
plt.figure(figsize=(12,2*2))
Fraud = data[data["Class"]==1]
Not_Fraud= data[data["Class"]==0]
color = ['salmon']
plt.subplot(121)
Fraud.Amount.plot.hist(title="Fraud Transactions", color=color)
plt.xlabel('Amount'); plt.ylabel('Number of Frauds');
plt.subplot(122)
Not_Fraud.Amount.plot.hist(title="Not_Fraud Transactions")
plt.xlabel('Amount'); plt.ylabel('Number of Non_Fraud Transactions');
TotalFraud=data['Class'].value_counts()[1]
TotalNonFraud=data['Class'].value_counts()[0]
Num= TotalFraud/(TotalFraud+TotalNonFraud)
Percentage=Num*100
print(Percentage)
import matplotlib.gridspec as gridspec
from scipy import stats
plt.figure(figsize=(12,28*5))
Tabfeatures = data.iloc[:,1:29].columns
f= gridspec.GridSpec(28, 1)
for i, cn in enumerate(data[Tabfeatures]):
ax = plt.subplot(f[i])
sb.distplot(data[cn][data.Class == 1], kde=False, fit=stats.norm)
sb.distplot(data[cn][data.Class == 0], kde=False, fit=stats.norm)
ax.set_xlabel('')
ax.set_title('feature hist: ' + str(cn))
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
features = pd.concat([data.loc[:,'V11':'Amount'],data.loc[:,'Hour']],axis=1)
target = data['Class']
X_train, X_test, y_train, y_test = train_test_split(features,target, stratify=target,test_size = 0.3, random_state = 0)
# randomizing the data (shuffle features and labels together so rows stay aligned)
X_train, y_train = shuffle(X_train, y_train, random_state=0)
X_test, y_test = shuffle(X_test, y_test, random_state=0)
print("# of train dataset in class 1 and 0 : ", len(X_train))
print("# of test dataset in class 1 and 0: ", len(X_test))
print("Total # of transactionsin class 1 and 0: ", len(X_train)+len(X_test))
print('-------------------------------------------------------')
print('y_train values')
print(y_train.value_counts())
print('')
print('y_test values')
print(y_test.value_counts())
from sklearn.linear_model import LogisticRegression
RegModel = LogisticRegression()
RegModel.fit(X_train,y_train)
predicted = RegModel.predict(X_test)
print(predicted)
```
### Why all zeros?
```
pred = pd.DataFrame({'Class': predicted})
pred.Class.value_counts()
```
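The all-zero predictions are a classic symptom of class imbalance: fraud makes up only about 0.17% of the rows (as the percentage computed above shows), so a model minimising overall error can score very well by predicting the majority class everywhere. One common mitigation is to reweight the classes; the snippet below is only a sketch on synthetic data (not the credit-card set) showing the effect of `class_weight='balanced'`:

```
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced data: 990 majority points vs 10 rare minority points
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, size=(990, 2)),   # class 0
               rng.normal(2.0, 1.0, size=(10, 2))])   # class 1 (rare)
y = np.array([0] * 990 + [1] * 10)

plain = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight='balanced').fit(X, y)

# Fraction of the minority class each model recovers
recall_plain = plain.predict(X[990:]).mean()
recall_balanced = balanced.predict(X[990:]).mean()
```

Balanced class weights scale each sample's loss inversely to its class frequency, so the model can no longer win by ignoring the rare class; resampling (over- or under-sampling) is another common option.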
### My Python pip requirements.txt:
altair==1.2.0
beautifulsoup4==4.6.0
bleach==2.0.0
boto3==1.4.4
botocore==1.5.57
certifi==2017.4.17
chardet==3.0.3
click==6.7
cloudpickle==0.3.0
cycler==0.10.0
Cython==0.25.2
dask==0.14.3
decorator==4.0.11
distributed==1.16.3
docutils==0.13.1
entrypoints==0.2.2
fbprophet==0.1.1
HeapDict==1.0.0
html5lib==0.999999999
idna==2.5
ipykernel==4.5.2
ipython==5.3.0
ipython-genutils==0.2.0
ipywidgets==6.0.0
jedi==0.10.2
Jinja2==2.9.5
jmespath==0.9.3
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.0.0
jupyter-console==5.1.0
jupyter-contrib-core==0.3.0
jupyter-contrib-nbextensions==0.2.6
jupyter-core==4.3.0
jupyter-highlight-selected-word==0.0.10
jupyter-latex-envs==1.3.8.2
jupyter-nbextensions-configurator==0.2.4
Keras==2.0.3
locket==0.2.0
lxml==3.7.3
MarkupSafe==1.0
matplotlib==2.0.2
mistune==0.7.4
msgpack-python==0.4.8
nbconvert==5.1.1
nbformat==4.3.0
notebook==4.4.1
numpy==1.12.1
olefile==0.44
pandas==0.20.1
pandocfilters==1.4.1
partd==0.3.8
patsy==0.4.1
pexpect==4.2.1
pickleshare==0.7.4
Pillow==4.1.1
prompt-toolkit==1.0.14
protobuf==3.2.0
psutil==5.2.1
ptyprocess==0.5.1
pydataset==0.2.0
Pygments==2.2.0
pyparsing==2.2.0
pystan==2.15.0.1
python-dateutil==2.6.0
pytz==2017.2
PyYAML==3.12
pyzmq==16.0.2
qtconsole==4.2.1
requests==2.17.3
s3fs==0.1.0
s3transfer==0.1.10
scikit-learn==0.18.1
scipy==0.19.0
seaborn==0.7.1
simplegeneric==0.8.1
six==1.10.0
sklearn==0.0
sortedcollections==0.5.3
sortedcontainers==1.5.7
speedml==0.9.1
statsmodels==0.8.0
tblib==1.3.2
tensorflow==1.0.1
terminado==0.6
testpath==0.3
Theano==0.9.0
thonny==2.1.8
toolz==0.8.2
tornado==4.4.2
traitlets==4.3.2
urllib3==1.21.1
vega==0.4.4
wcwidth==0.1.7
webencodings==0.5
widgetsnbextension==2.0.0
wordcloud==1.3.1
xgboost==0.6a2
zict==0.1.2
## Final Notebook Submission
Please fill out:
* Student name:
* Student pace: self paced / part time / full time
* Scheduled project review date/time:
* Instructor name:
* Blog post URL:
```
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from statsmodels.formula.api import ols
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.dummy import DummyRegressor
import matplotlib.pyplot as plt
import statsmodels.api as sm
%matplotlib inline
# Loading in the given data set into variable housing_df
housing_df = pd.read_csv('data/kc_house_data.csv')
housing_df.info()
# Checking to see if any columns in the dataframe have any NaN values
housing_df.isna().sum()
```
We see that the waterfront, view and yr_renovated columns are the only columns with NaN values.
```
# Grabbing the age of the home
housing_df['datetime'] = pd.to_datetime(housing_df['date'])
housing_df['soldyear'] = housing_df['datetime'].dt.year
housing_df['age_when_sold'] = housing_df['soldyear'] - housing_df['yr_built']
# Just getting the numeric value of grade
housing_df['grade_num'] = housing_df['grade'].str.split()
housing_df['grade_num'] = housing_df['grade_num'].str[0]
housing_df['grade_num'] = housing_df['grade_num'].astype(int)
# Creating a new column 'Basement' with a value of 1 or 0 according to whether a house has a basement or not.
housing_df['sqft_basement'] = housing_df['sqft_living'] - housing_df['sqft_above']
housing_df['Basement'] = None
housing_df['Basement'] = housing_df['sqft_basement'].map(lambda x: False if x == 0 else True)
housing_df['Basement'] = housing_df['Basement'].astype(int)
housing_df.condition.value_counts()
# Changing all the values in the 'condition' column to integers.
replace_dict2 = {'Poor': 1, 'Fair': 2, 'Average': 3, 'Good': 4, 'Very Good': 5}
housing_df['condition'] = housing_df['condition'].replace(replace_dict2)
wtr_col = housing_df[['waterfront']]
imputer = SimpleImputer(strategy='constant', fill_value = 'NO')
imputer.fit(wtr_col)
waterfront_imputed = imputer.transform(wtr_col)
housing_df.waterfront = waterfront_imputed
wtr_col = housing_df[['waterfront']]
encoder_wtr = OrdinalEncoder()
encoder_wtr.fit(wtr_col)
encoded_wtr = encoder_wtr.transform(wtr_col)
encoded_wtr = encoded_wtr.flatten()
housing_df.waterfront = encoded_wtr
view_col = housing_df[['view']]
imputer = SimpleImputer(strategy='constant', fill_value = 'NONE')
imputer.fit(view_col)
view_imputed = imputer.transform(view_col)
housing_df.view = view_imputed
housing_df.isna().sum()
```
Some columns offer little further use, or we have already extracted all the information we need from them, so we will no longer be needing them.
```
housing_ols = housing_df.drop(['datetime', 'date', 'soldyear', 'yr_built', 'lat', 'long',
'sqft_above', 'sqft_lot15', 'sqft_living15', 'grade', 'sqft_above', 'sqft_basement', 'yr_renovated'], axis = 1)
housing_ols
```
With the new working dataframe we want to see how the remaining columns correlate with our target, price, as well as with each other, to check for collinearity.
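One quick way to do this (a sketch on toy data, not the housing set) is to rank every column by its absolute correlation with the target:

```
import numpy as np
import pandas as pd

# Toy frame: 'price' stands in for the target, 'sqft' carries real signal
rng = np.random.default_rng(0)
demo = pd.DataFrame({'sqft': rng.normal(size=200)})
demo['price'] = 3 * demo['sqft'] + rng.normal(size=200)
demo['noise'] = rng.normal(size=200)

# Absolute correlation of every predictor with the target, strongest first
ranked = demo.corr()['price'].drop('price').abs().sort_values(ascending=False)
```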
```
housing_ols['bathrooms'] = housing_ols['bathrooms'][housing_ols['bathrooms'].between(housing_ols['bathrooms'].quantile(.025), housing_ols['bathrooms'].quantile(.975), inclusive=True)]
housing_ols['floors'] = housing_ols['floors'][housing_ols['floors'].between(housing_ols['floors'].quantile(.025), housing_ols['floors'].quantile(.975), inclusive=True)]
housing_ols['waterfront'] = housing_ols['waterfront'][housing_ols['waterfront'].between(housing_ols['waterfront'].quantile(.025), housing_ols['waterfront'].quantile(.975), inclusive=True)]
housing_ols['condition'] = housing_ols['condition'][housing_ols['condition'].between(housing_ols['condition'].quantile(.025), housing_ols['condition'].quantile(.975), inclusive=True)]
housing_ols['age_when_sold'] = housing_ols['age_when_sold'][housing_ols['age_when_sold'].between(housing_ols['age_when_sold'].quantile(.025), housing_ols['age_when_sold'].quantile(.975), inclusive=True)]
housing_ols['grade_num'] = housing_ols['grade_num'][housing_ols['grade_num'].between(housing_ols['grade_num'].quantile(.025), housing_ols['grade_num'].quantile(.975), inclusive=True)]
housing_ols['Basement'] = housing_ols['Basement'][housing_ols['Basement'].between(housing_ols['Basement'].quantile(.025), housing_ols['Basement'].quantile(.975), inclusive=True)]
housing_ols.corr()
# Visually inspecting all columns association with price
fig, (axes1, axes2, axes3, axes4) = plt.subplots(nrows=4, ncols=3, figsize=(15,15))
for xcol, ax in zip(['bedrooms', 'bathrooms', 'sqft_living'], axes1):
housing_ols.plot(kind='scatter', x=xcol, y='price', ax=ax, alpha=0.4, color='b')
for xcols, axs in zip(['sqft_lot', 'floors', 'view'], axes2):
housing_ols.plot(kind='scatter', x=xcols, y='price', ax=axs, alpha=0.4, color='b')
for xcolss, axss in zip(['condition', 'age_when_sold', 'grade_num'], axes3):
housing_ols.plot(kind='scatter', x=xcolss, y='price', ax=axss, alpha=0.4, color='b')
for xcolss, axss in zip(['waterfront', 'Basement'], axes4):
housing_ols.plot(kind='scatter', x=xcolss, y='price', ax=axss, alpha=0.4, color='b')
```
Many of the remaining columns are categorical, so with OneHotEncoder we will make a column for each category found within each of them and concat the results into one dataframe.
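As an aside, the per-column fit/transform/concat pattern below can also be collapsed into a single call with `pd.get_dummies` (a sketch on a toy frame; the column names here are illustrative):

```
import pandas as pd

# Toy stand-in for the housing dataframe
df = pd.DataFrame({'bedrooms': [2, 3, 3],
                   'condition': [3, 4, 5],
                   'price': [1.0, 2.0, 3.0]})

# One call one-hot encodes both columns and leaves the rest untouched
encoded = pd.get_dummies(df, columns=['bedrooms', 'condition'])
```

OneHotEncoder remains the better choice when the same fitted encoding must be re-applied to unseen test data.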
```
housing_pred = housing_ols.copy()
bedrooms_ohe = housing_pred[['bedrooms']]
ohe_bedrooms = OneHotEncoder(categories ='auto', sparse =False)
ohe_bedrooms.fit(bedrooms_ohe)
ohe_bedrooms_encoded = ohe_bedrooms.transform(bedrooms_ohe)
bedrooms_encoded_ohe = pd.DataFrame(ohe_bedrooms_encoded, columns = ohe_bedrooms.get_feature_names(['bedrooms']), index = housing_pred.index )
housing_pred1 = pd.concat([housing_pred, bedrooms_encoded_ohe ], axis =1)
grade_num_ohe = housing_pred[['grade_num']]
ohe_grade_num = OneHotEncoder(categories ='auto', sparse =False)
ohe_grade_num.fit(grade_num_ohe)
ohe_grade_num_encoded = ohe_grade_num.transform(grade_num_ohe)
grade_num_encoded_ohe = pd.DataFrame(ohe_grade_num_encoded, columns = ohe_grade_num.get_feature_names(['grade_num']), index = housing_pred.index )
housing_pred2 = pd.concat([housing_pred1, grade_num_encoded_ohe], axis =1)
condition_ohe = housing_pred[['condition']]
ohe = OneHotEncoder(categories="auto", sparse=False)
cond_encoded_ohe = pd.DataFrame (ohe.fit_transform(condition_ohe), index=housing_pred.index)
cond_encoded_ohe.columns = ohe.get_feature_names(['condition'])
housing_pred3 = pd.concat([housing_pred2, cond_encoded_ohe], axis = 1)
bathrooms_ohe = housing_pred[['bathrooms']]
ohe = OneHotEncoder(categories='auto', sparse=False, handle_unknown='ignore')
bathrooms_transform = ohe.fit_transform(bathrooms_ohe)
bathrooms_encoded_ohe = pd.DataFrame(bathrooms_transform, columns=ohe.get_feature_names(['bathrooms']), index=housing_pred.index)
housing_pred4 = pd.concat([housing_pred3, bathrooms_encoded_ohe], axis = 1)
view_ohe = housing_pred[['view']]
ohe = OneHotEncoder(categories="auto", sparse=False)
ohe.fit(view_ohe)
view_encoded = ohe.transform(view_ohe)
view_encoded_ohe =pd.DataFrame(view_encoded, columns=ohe.get_feature_names(['view']), index=housing_pred.index)
housing_pred5 = pd.concat([housing_pred4, view_encoded_ohe], axis = 1)
floors_ohe = housing_pred[['floors']]
ohe_floors = OneHotEncoder(categories ='auto', sparse =False)
ohe_floors.fit(floors_ohe)
ohe_floors_encoded = ohe_floors.transform(floors_ohe)
floors_encoded_ohe = pd.DataFrame(ohe_floors_encoded, columns = ohe_floors.get_feature_names(['floors']), index = housing_pred.index )
housing_pred6 = pd.concat([housing_pred5, floors_encoded_ohe ], axis =1)
zipcode_ohe = housing_pred[['zipcode']]
ohe = OneHotEncoder(categories="auto", sparse=False)
ohe.fit(zipcode_ohe)
zipcode_encoded = ohe.transform(zipcode_ohe)
zipcode_encoded_ohe =pd.DataFrame(zipcode_encoded, columns=ohe.get_feature_names(['zipcode']), index=housing_pred.index)
housing_pred_final = pd.concat([housing_pred6, zipcode_encoded_ohe ], axis =1)
housing_pred_final.columns
# Final dataframe after dealing with the categoricals and dropping the original columns.
housing_pred_final.drop(['id', 'floors', 'bedrooms', 'bathrooms', 'view', 'condition', 'zipcode', 'grade_num'], axis = 1, inplace=True)
```
After one-hot encoding many of the original columns, we want to see whether the new columns change the correlations with price.
```
housing_pred_final.corr()
```
Since the dataframe is now far bigger, we can't visually inspect all the values to search for collinearity among the predictors, so we have to find collinear pairs another way. Any pair of predictors with a correlation of .75 or higher will have to be addressed.
```
# Breaking down the correlation table to find pairings of .75 or higher that aren't to our target price
df=housing_pred_final.corr().abs().stack().reset_index().sort_values(0, ascending=False)
# zipping the variable name columns in a new column named "pairs"
df['pairs'] = list(zip(df.level_0, df.level_1))
# setting index to pairs
df.set_index(['pairs'], inplace = True)
#dropping level columns
df.drop(columns=['level_1', 'level_0'], inplace = True)
# renaming correlation column as cc rather than 0
df.columns = ['cc']
# drop duplicates.
df.drop_duplicates(inplace=True)
df[(df.cc>.75) & (df.cc <1)]
```
condition_4 and condition_3 have a correlation of .812294, while floors_1 and floors_2 have a correlation of .775976.
Finally, we want to log-transform our target variable, price, in order to account for heteroscedasticity in the data.
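A quick illustration of why the log helps (on synthetic lognormal data standing in for skewed prices): the heavy right skew largely disappears after `np.log`, which is what stabilises the residual variance:

```
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
prices = rng.lognormal(mean=13.0, sigma=0.5, size=5000)  # right-skewed, price-like

raw_skew = skew(prices)          # strongly positive
log_skew = skew(np.log(prices))  # near zero: roughly symmetric
```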
```
# Creating a new column that takes the log of price
housing_pred_final['log_price'] = np.log(housing_pred_final['price'])
```
With our dataframe prepared, we can now create our dummy regressor.
```
# Putting our target in the y_dummy variable and the rest of the dataframe without price in the X_dummy variable
X_dummy = housing_pred_final.drop('price', axis = 1)
y_dummy = housing_pred_final['log_price']
# Splitting up our dummy variables into two subsets: for training data and testing data
X_train, X_test, y_train, y_test = train_test_split(X_dummy, y_dummy, test_size=0.2, random_state=42)
# Setting up our dummy regressor
dummy_regr = DummyRegressor(strategy="mean")
dummy_regr.fit(X_train, y_train)
print(dummy_regr.score(X_train, y_train))
print(dummy_regr.score(X_test, y_test))
```
We will now build a simple linear regression model.
```
# We want to find what feature correlates with price after it has been log transformed
correlation = housing_pred_final.corr().abs()
# Unstacking the correlation
u_correlation = correlation.unstack()
#Sorting the correlation
sorted_correlation = u_correlation.sort_values(kind="quicksort")
# The -144th position is the last occurrence of a column's correlation with itself; at the -155th position we find
# the highest correlation with log_price, which is with sqft_living
print(sorted_correlation[-155:-144])
```
We see from above that sqft_living has the highest correlation with log_price; therefore it will be used as the feature in the simple model.
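Each model below scales `sqft_living` with a scaler fit on the training split and then reuses that fitted scaler on the test split. As a hedged aside (a sketch on synthetic data, not the notebook's original code), scikit-learn's `Pipeline` bundles those two steps so the scaling is always fit on training data only:

```
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Synthetic feature on a sqft-like scale with a linear relationship to the target
rng = np.random.default_rng(0)
X_demo = rng.normal(loc=2000, scale=900, size=(200, 1))
y_demo = 0.001 * X_demo[:, 0] + rng.normal(scale=0.1, size=200)

pipe = Pipeline([('scale', StandardScaler()), ('ols', LinearRegression())])
pipe.fit(X_demo, y_demo)          # the scaler is fit inside .fit, on X_demo only
r2 = pipe.score(X_demo, y_demo)   # same .score API as the models below
```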
```
# y variable will always be our target which is price and X variable will contain our housing features
# Here in our simple model, X is only sqft_living
simple_model_df = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living']], axis = 1)
X_simple = simple_model_df.drop('log_price', axis = 1)
y_simple = simple_model_df['log_price']
# Splitting up our dummy variables into two subsets: for training data and testing data
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(X_simple, y_simple, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_s[cols_scale])
scaled_columns = scaled_fit.transform(X_train_s[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_s.index)
scaled_columns.columns = cols_scale
X_train_s['sqft_living'] = scaled_columns['sqft_living']
# Setting up simple linear regression
simple_reg = LinearRegression()
simple_reg.fit(X_train_s, y_train_s)
print(simple_reg.score(X_train_s, y_train_s))
# Scaling test set
scale_test = X_test_s[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_s.index)
scaled_columns_test.columns = cols_scale
X_test_s['sqft_living'] = scaled_columns_test['sqft_living']
print(simple_reg.score(X_test_s, y_test_s))
# Calculating predicted value of our model
# determine the mean squared error for train and test data
y_hat_train_s = simple_reg.predict(X_train_s)
y_hat_test_s = simple_reg.predict(X_test_s)
train_mse_s = mean_squared_error(y_train_s, y_hat_train_s)
test_mse_s = mean_squared_error(y_test_s, y_hat_test_s)
print('Train Mean Squared Error:', train_mse_s)
print('Test Mean Squared Error:', test_mse_s)
# y variable will always be our target which is price and X variable will contain our housing features
# This is our first muliple linear regression model with just sqft_living and view
multi_model_1 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], view_encoded_ohe], axis = 1)
X_multi1 = multi_model_1.drop('log_price', axis = 1)
y_multi1 = multi_model_1['log_price']
# Splitting up our dummy variables into two subsets: for training data and testing data
X_train_m1, X_test_m1, y_train_m1, y_test_m1 = train_test_split(X_multi1, y_multi1, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m1[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m1[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m1.index)
scaled_columns.columns = cols_scale
X_train_m1['sqft_living'] = scaled_columns['sqft_living']
# Setting up our first multiple linear regression
multi1_reg = LinearRegression()
multi1_reg.fit(X_train_m1, y_train_m1)
print(multi1_reg.score(X_train_m1, y_train_m1))
# Scaling test set
scale_test = X_test_m1[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m1.index)
scaled_columns_test.columns = cols_scale
X_test_m1['sqft_living'] = scaled_columns_test['sqft_living']
print(multi1_reg.score(X_test_m1, y_test_m1))
# Calculating predicted value of our model
# determine the mean squared error for train and test data
y_hat_train_m1 = multi1_reg.predict(X_train_m1)
y_hat_test_m1 = multi1_reg.predict(X_test_m1)
train_mse_m1 = mean_squared_error(y_train_m1, y_hat_train_m1)
test_mse_m1 = mean_squared_error(y_test_m1, y_hat_test_m1)
print('Train Mean Squared Error:', train_mse_m1)
print('Test Mean Squared Error:', test_mse_m1)
# Above Steps repeated with different combinations of housing features to see if it improves our model
# Multiple Linear Regression Model number 2
multi_model_2 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], view_encoded_ohe, cond_encoded_ohe], axis = 1)
X_multi2 = multi_model_2.drop('log_price', axis = 1)
y_multi2 = multi_model_2['log_price']
X_train_m2, X_test_m2, y_train_m2, y_test_m2 = train_test_split(X_multi2, y_multi2, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m2[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m2[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m2.index)
scaled_columns.columns = cols_scale
X_train_m2['sqft_living'] = scaled_columns['sqft_living']
multi2_reg = LinearRegression()
multi2_reg.fit(X_train_m2, y_train_m2)
print(multi2_reg.score(X_train_m2, y_train_m2))
# Scaling test set
scale_test = X_test_m2[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m2.index)
scaled_columns_test.columns = cols_scale
X_test_m2['sqft_living'] = scaled_columns_test['sqft_living']
print(multi2_reg.score(X_test_m2, y_test_m2))
y_hat_train_m2 = multi2_reg.predict(X_train_m2)
y_hat_test_m2 = multi2_reg.predict(X_test_m2)
train_mse_m2 = mean_squared_error(y_train_m2, y_hat_train_m2)
test_mse_m2 = mean_squared_error(y_test_m2, y_hat_test_m2)
print('Train Mean Squared Error:', train_mse_m2)
print('Test Mean Squared Error:', test_mse_m2)
# Above Steps repeated with different combinations of housing features to see if it improves our model
# Multiple Linear Regression Model number 3
multi_model_3 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], zipcode_encoded_ohe], axis = 1)
X_multi3 = multi_model_3.drop('log_price', axis = 1)
y_multi3 = multi_model_3['log_price']
X_train_m3, X_test_m3, y_train_m3, y_test_m3 = train_test_split(X_multi3, y_multi3, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m3[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m3[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m3.index)
scaled_columns.columns = cols_scale
X_train_m3['sqft_living'] = scaled_columns['sqft_living']
multi3_reg = LinearRegression()
multi3_reg.fit(X_train_m3, y_train_m3)
print(multi3_reg.score(X_train_m3, y_train_m3))
# Scaling test set
scale_test = X_test_m3[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m3.index)
scaled_columns_test.columns = cols_scale
X_test_m3['sqft_living'] = scaled_columns_test['sqft_living']
print(multi3_reg.score(X_test_m3, y_test_m3))
y_hat_train_m3 = multi3_reg.predict(X_train_m3)
y_hat_test_m3 = multi3_reg.predict(X_test_m3)
train_mse_m3 = mean_squared_error(y_train_m3, y_hat_train_m3)
test_mse_m3 = mean_squared_error(y_test_m3, y_hat_test_m3)
print('Train Mean Squared Error:', train_mse_m3)
print('Test Mean Squared Error:', test_mse_m3)
# Above Steps repeated with different combinations of housing features to see if it improves our model
# Multiple Linear Regression Model number 4
multi_model_4 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], zipcode_encoded_ohe, bathrooms_encoded_ohe], axis = 1)
X_multi4 = multi_model_4.drop('log_price', axis = 1)
y_multi4 = multi_model_4['log_price']
X_train_m4, X_test_m4, y_train_m4, y_test_m4 = train_test_split(X_multi4, y_multi4, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m4[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m4[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m4.index)
scaled_columns.columns = cols_scale
X_train_m4['sqft_living'] = scaled_columns['sqft_living']
multi4_reg = LinearRegression()
multi4_reg.fit(X_train_m4, y_train_m4)
print(multi4_reg.score(X_train_m4, y_train_m4))
# Scaling test set
scale_test = X_test_m4[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m4.index)
scaled_columns_test.columns = cols_scale
X_test_m4['sqft_living'] = scaled_columns_test['sqft_living']
print(multi4_reg.score(X_test_m4, y_test_m4))
# Bathrooms led to overfitting, so the y_hat and mean squared error calculations were omitted
# Above Steps repeated with different combinations of housing features to see if it improves our model
# Multiple Linear Regression Model number 5
multi_model_5 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], zipcode_encoded_ohe, bedrooms_encoded_ohe], axis = 1)
X_multi5 = multi_model_5.drop('log_price', axis = 1)
y_multi5 = multi_model_5['log_price']
X_train_m5, X_test_m5, y_train_m5, y_test_m5 = train_test_split(X_multi5, y_multi5, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m5[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m5[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m5.index)
scaled_columns.columns = cols_scale
X_train_m5['sqft_living'] = scaled_columns['sqft_living']
multi5_reg = LinearRegression()
multi5_reg.fit(X_train_m5, y_train_m5)
print(multi5_reg.score(X_train_m5, y_train_m5))
# Scaling test set
scale_test = X_test_m5[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m5.index)
scaled_columns_test.columns = cols_scale
X_test_m5['sqft_living'] = scaled_columns_test['sqft_living']
print(multi5_reg.score(X_test_m5, y_test_m5))
y_hat_train_m5 = multi5_reg.predict(X_train_m5)
y_hat_test_m5 = multi5_reg.predict(X_test_m5)
train_mse_m5 = mean_squared_error(y_train_m5, y_hat_train_m5)
test_mse_m5 = mean_squared_error(y_test_m5, y_hat_test_m5)
print('Train Mean Squared Error:', train_mse_m5)
print('Test Mean Squared Error:', test_mse_m5)
# Above Steps repeated with different combinations of housing features to see if it improves our model
# Multiple Linear Regression Model number 6
multi_model_6 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], zipcode_encoded_ohe, view_encoded_ohe], axis = 1)
X_multi6 = multi_model_6.drop('log_price', axis = 1)
y_multi6 = multi_model_6['log_price']
X_train_m6, X_test_m6, y_train_m6, y_test_m6 = train_test_split(X_multi6, y_multi6, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m6[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m6[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m6.index)
scaled_columns.columns = cols_scale
X_train_m6['sqft_living'] = scaled_columns['sqft_living']
multi6_reg = LinearRegression()
multi6_reg.fit(X_train_m6, y_train_m6)
print(multi6_reg.score(X_train_m6, y_train_m6))
# Scaling test set
scale_test = X_test_m6[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m6.index)
scaled_columns_test.columns = cols_scale
X_test_m6['sqft_living'] = scaled_columns_test['sqft_living']
print(multi6_reg.score(X_test_m6, y_test_m6))
y_hat_train_m6 = multi6_reg.predict(X_train_m6)
y_hat_test_m6 = multi6_reg.predict(X_test_m6)
train_mse_m6 = mean_squared_error(y_train_m6, y_hat_train_m6)
test_mse_m6 = mean_squared_error(y_test_m6, y_hat_test_m6)
print('Train Mean Squared Error:', train_mse_m6)
print('Test Mean Squared Error:', test_mse_m6)
# Above Steps repeated with different combinations of housing features to see if it improves our model
# Multiple Linear Regression Model number 7
multi_model_7 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], housing_pred_final['waterfront'], zipcode_encoded_ohe, view_encoded_ohe], axis = 1)
X_multi7 = multi_model_7.drop('log_price', axis = 1)
y_multi7 = multi_model_7['log_price']
X_train_m7, X_test_m7, y_train_m7, y_test_m7 = train_test_split(X_multi7, y_multi7, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m7[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m7[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m7.index)
scaled_columns.columns = cols_scale
X_train_m7['sqft_living'] = scaled_columns['sqft_living']
multi7_reg = LinearRegression()
multi7_reg.fit(X_train_m7, y_train_m7)
print(multi7_reg.score(X_train_m7, y_train_m7))
# Scaling test set
scale_test = X_test_m7[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m7.index)
scaled_columns_test.columns = cols_scale
X_test_m7['sqft_living'] = scaled_columns_test['sqft_living']
print(multi7_reg.score(X_test_m7, y_test_m7))
y_hat_train_m7 = multi7_reg.predict(X_train_m7)
y_hat_test_m7 = multi7_reg.predict(X_test_m7)
train_mse_m7 = mean_squared_error(y_train_m7, y_hat_train_m7)
test_mse_m7 = mean_squared_error(y_test_m7, y_hat_test_m7)
print('Train Mean Squared Error:', train_mse_m7)
print('Test Mean Squared Error:', test_mse_m7)
# Above Steps repeated with different combinations of housing features to see if it improves our model
# Multiple Linear Regression Model number 8
multi_model_8 = pd.concat([housing_pred_final['log_price'], housing_pred_final['sqft_living'], housing_pred_final['waterfront'],
housing_pred_final['Basement'], zipcode_encoded_ohe, view_encoded_ohe,
cond_encoded_ohe, floors_encoded_ohe], axis = 1)
X_multi8 = multi_model_8.drop('log_price', axis = 1)
y_multi8 = multi_model_8['log_price']
X_train_m8, X_test_m8, y_train_m8, y_test_m8 = train_test_split(X_multi8, y_multi8, test_size=0.2, random_state=42)
# Scaling sqft_living(standardscaler)
cols_scale = ['sqft_living']
scaler = StandardScaler()
scaled_fit = scaler.fit(X_train_m8[cols_scale])
scaled_columns = scaled_fit.transform(X_train_m8[cols_scale])
scaled_columns = pd.DataFrame(scaled_columns, index = X_train_m8.index)
scaled_columns.columns = cols_scale
X_train_m8['sqft_living'] = scaled_columns['sqft_living']
multi8_reg = LinearRegression()
multi8_reg.fit(X_train_m8, y_train_m8)
print(multi8_reg.score(X_train_m8, y_train_m8))
# Scaling test set
scale_test = X_test_m8[cols_scale]
scaled_columns_test = scaled_fit.transform(scale_test)
scaled_columns_test = pd.DataFrame(scaled_columns_test, index = X_test_m8.index)
scaled_columns_test.columns = cols_scale
X_test_m8['sqft_living'] = scaled_columns_test['sqft_living']
print(multi8_reg.score(X_test_m8, y_test_m8))
y_hat_train_m8 = multi8_reg.predict(X_train_m8)
y_hat_test_m8 = multi8_reg.predict(X_test_m8)
train_mse_m8 = mean_squared_error(y_train_m8, y_hat_train_m8)
test_mse_m8 = mean_squared_error(y_test_m8, y_hat_test_m8)
print('Train Mean Squared Error:', train_mse_m8)
print('Test Mean Squared Error:', test_mse_m8)
cross_val_score(multi8_reg, X_train_m8, y_train_m8, cv=5)
regression_score = pd.DataFrame(index=['Simple', 'M-1', 'M-3', 'M-6', 'M-8'], columns=['test R^2'])
regression_score['test R^2'] = [simple_reg.score(X_test_s, y_test_s),
multi1_reg.score(X_test_m1, y_test_m1),
multi3_reg.score(X_test_m3, y_test_m3),
multi6_reg.score(X_test_m6, y_test_m6),
multi8_reg.score(X_test_m8, y_test_m8)]
model_score_fig, ax = plt.subplots(figsize=(10,6))
sns.set_style("dark")
ax.bar(regression_score.index, regression_score['test R^2'], color=["SkyBlue"])
plt.title('Model Progression')
plt.xlabel('Model Iteration')
plt.ylabel('Test R^2');
formula = 'price ~ sqft_living + waterfront + C(zipcode) + C(view)'
model_ols_8 = ols(formula=formula, data=housing_ols).fit()
model_summary = model_ols_8.summary()
model_summary
wtr_df = housing_ols.groupby(by = 'waterfront').mean().reset_index()
wtr_fig, ax = plt.subplots(figsize=(10,6))
sns.set_style("dark")
plt.bar(wtr_df['waterfront'], wtr_df['price'], color = 'green')
plt.xlabel('Waterfront(No/Yes)')
plt.ylabel('Average Price')
plt.title('Presence of Waterfront')
ax.ticklabel_format(style='plain', axis = 'y');
view_df = housing_ols.groupby(by = 'view').mean().reset_index()
view_df.sort_values(by = 'price', ascending = True, inplace = True)
view_fig, ax = plt.subplots(figsize=(10,6))
sns.set_style("dark")
color = ['green', 'darkblue', 'grey', 'green', 'green']
plt.bar(view_df['view'], view_df['price'], color = color)
plt.ylabel('Average Price')
plt.xlabel('Quality of View')
plt.title('Avg Price Based on View')
ax.ticklabel_format(style='plain', axis = 'y')
plt.annotate('Green: Sign. Grey: Not Sign. Blue: Reference', (0.02, .95), backgroundcolor='w', xycoords='axes fraction');
heat_map_price = heat_map_price.reset_index().sort_values(by='price', ascending=True)
heat_map_price = heat_map_price[(heat_map_price['index']=='sqft_living')|
(heat_map_price['index']=='grade_num')|
(heat_map_price['index']=='bathrooms')|
(heat_map_price['index']=='bedrooms')|
(heat_map_price['index']=='waterfront')|
(heat_map_price['index']=='floors')]
regression_score = regression_score.reset_index()
fig, (ax1, ax2) = plt.subplots(figsize=(13,6), ncols=2, dpi=200)
x_cor = heat_map_price['index']
y_cor = heat_map_price['price']
clrs=['grey' if (x != 'sqft_living') else 'red' for x in x_cor]
sns.barplot(x=x_cor, y=y_cor, palette=clrs, ax=ax1)
ax1.tick_params(axis='x', rotation=45)
x_reg = regression_score['index']
y_reg = regression_score['test R^2']
clrs=['grey' if x != 'M-8' else 'red' for x in x_reg]
sns.barplot(x=x_reg, y=y_reg, palette=clrs, ax=ax2);
```
# Results: XXXX Scaled
<b>MIL</b> <i>stratified k-fold cross-validation</i> is performed.
Metrics: <br>
- AUC
- Accuracy
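The `mil_cross_val` call used below hides the mechanics. As a rough, library-agnostic sketch (plain scikit-learn on instance-level data rather than MIL bags; all names here are illustrative), stratified k-fold AUC and accuracy can be computed like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

def stratified_cv_scores(X, y, model, folds=5):
    """Mean AUC and accuracy over stratified folds."""
    aucs, accs = [], []
    for train_idx, test_idx in StratifiedKFold(
            n_splits=folds, shuffle=True, random_state=0).split(X, y):
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))
        accs.append(accuracy_score(y[test_idx], prob > 0.5))
    return float(np.mean(aucs)), float(np.mean(accs))

X, y = make_classification(n_samples=200, random_state=0)
auc, acc = stratified_cv_scores(X, y, LogisticRegression(max_iter=1000))
```

Stratification keeps the class balance the same in every fold, which is why it is preferred over a plain shuffle when labels are imbalanced.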
### Import Libraries
```
import sys,os
import warnings
os.chdir('/Users/josemiguelarrieta/Documents/MILpy')
sys.path.append(os.path.realpath('..'))
from sklearn.utils import shuffle
import random as rand
import numpy as np
from data import load_data
warnings.filterwarnings('ignore')
from MILpy.functions.mil_cross_val import mil_cross_val
#Import Algorithms
from MILpy.Algorithms.simpleMIL import simpleMIL
from MILpy.Algorithms.MILBoost import MILBoost
from MILpy.Algorithms.maxDD import maxDD
from MILpy.Algorithms.CKNN import CKNN
from MILpy.Algorithms.EMDD import EMDD
from MILpy.Algorithms.MILES import MILES
from MILpy.Algorithms.BOW import BOW
```
### Load data
```
bags,labels,X = load_data('XXXXX_scaled')
folds = 5
runs = 5
```
#### Simple MIL [max]
```
SMILa = simpleMIL()
parameters_smil = {'type': 'max'}
print '\n========= SIMPLE MIL RESULT [MAX] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
#Shuffle Data
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds, parameters=parameters_smil, timer = True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Simple MIL [min]
```
parameters_smil = {'type': 'min'}
print '\n========= SIMPLE MIL RESULT [MIN] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Simple MIL [extreme]
```
parameters_smil = {'type': 'extreme'}
print '\n========= SIMPLE MIL RESULT [EXTREME] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
#Shuffle Data
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Simple MIL [average]
```
parameters_smil = {'type': 'average'}
print '\n========= SIMPLE MIL RESULT [AVERAGE] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Bag of Words
```
bow_classifier = BOW()
parameters_bow = {'k':100,'covar_type':'diag','n_iter':20}
print '\n========= BAG OF WORDS RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=bow_classifier, folds=folds,parameters=parameters_bow, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Citation KNN
```
cknn_classifier = CKNN()
parameters_cknn = {'references': 3, 'citers': 5}
print '\n========= CKNN RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=cknn_classifier, folds=folds,parameters=parameters_cknn, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Diverse Density
```
maxDD_classifier = maxDD()
print '\n========= DIVERSE DENSITY RESULT========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=maxDD_classifier, folds=folds,parameters={}, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### EM-DD
```
emdd_classifier = EMDD()
print '\n========= EM-DD RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=emdd_classifier, folds=folds,parameters={}, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### MILBoost
```
milboost_classifier = MILBoost()
print '\n========= MILBOOST RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
print '\n run #'+ str(i)
bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels, model=milboost_classifier, folds=folds,parameters={}, timer=True)
print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
AUC.append(auc)
ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### MILES
```
#Pending
```
# Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start.
# Imports
```
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
if StrictVersion(tf.__version__) < StrictVersion('1.12.0'):
raise ImportError('Please upgrade your TensorFlow installation to v1.12.*.')
```
## Env setup
```
# This is needed to display the images.
%matplotlib inline
```
## Object detection imports
Here are the imports from the object detection module.
```
from utils import label_map_util
from utils import visualization_utils as vis_util
```
# Model preparation
## Variables
Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
```
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
```
## Download Model
```
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
```
## Load a (frozen) Tensorflow model into memory.
```
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
```
## Loading label map
Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
```
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
```
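To make the "any dictionary works" point concrete, here is a hand-rolled stand-in for the utility call above (the two entries shown are real COCO ids, matching the `5` → `airplane` example; the lookup helper is an illustrative addition, not part of the API):

```python
# Minimal manual label map with the same structure the utility produces.
manual_index = {
    1: {'id': 1, 'name': 'person'},
    5: {'id': 5, 'name': 'airplane'},
}

def class_name(class_id, index):
    return index.get(class_id, {'name': 'unknown'})['name']
```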
## Helper code
```
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
```
# Detection
```
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[1], image.shape[2])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: image})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
```
# nlp-transform-snippets
creates snippets out of large text files
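The idea in miniature (a hedged sketch of the `get_source_snippets` logic defined below): files longer than 50 lines are cut into three roughly equal snippets, and shorter files are kept whole.

```python
def split_into_thirds(text, min_lines=50):
    lines = text.split('\n')
    n = len(lines)
    if n <= min_lines:
        return [text]  # short files stay as a single snippet
    third = n // 3
    return ['\n'.join(lines[:third]),
            '\n'.join(lines[third:2 * third]),
            '\n'.join(lines[2 * third:])]
```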
```
!pip3 install wget==3.2
import wget
import logging
import numpy as np
import os
import re
import shutil
import sys
import tarfile
import time
# file name for training data zip
input_filename = os.environ.get('input_filename', 'data.zip')
# resulting model zip file name
output_model_zip = os.environ.get('output_model_zip', 'model.zip')
# temporal data storage for local execution
data_dir = os.environ.get('data_dir', '../../data/')
parameters = list(
map(
lambda s: re.sub('$', '"', s),
map(
lambda s: s.replace('=', '="'),
filter(
lambda s: s.find('=') > -1 and bool(re.match('[A-Za-z0-9_]*=[.\/A-Za-z0-9]*', s)),
sys.argv
)
)
)
)
for parameter in parameters:
logging.warning('Parameter: '+parameter)
exec(parameter)
source_folder=str(time.time())
shutil.unpack_archive(data_dir + input_filename, extract_dir=data_dir + source_folder)
# TODO generalize
letter = 'abcdefghijklmnopqrstuvwxyz'
digits = '0123456789'
others = '!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
alphabet = letter + digits + others
print('alphabet size:', len(alphabet))
# all-zeroes padding vector:
pad_vector = [0 for x in alphabet]
# pre-calculated one-hot vectors:
supported_chars_map = {}
for i, ch in enumerate(alphabet):
vec = [0 for x in alphabet]
vec[i] = 1
supported_chars_map[ch] = vec
def get_source_snippets(file_name, breakup=True):
# Read the file content and lower-case:
text = ""
with open(file_name, mode='r') as file:
text = file.read().lower()
lines = text.split('\n')
nlines = len(lines)
if breakup and nlines > 50:
aThird = nlines//3
twoThirds = 2*aThird
text1 = '\n'.join(lines[:aThird])
text2 = '\n'.join(lines[aThird:twoThirds])
text3 = '\n'.join(lines[twoThirds:])
return [text1, text2, text3]
return [text]
def turn_sample_to_vector(sample, sample_vectors_size=1024,
normalize_whitespace=True):
if normalize_whitespace:
# Map (most) white-space to space and compact to single one:
sample = sample.replace('\n', ' ').replace('\r', ' ').replace('\t', ' ')
sample = re.sub('\s+', ' ', sample)
# Encode the characters to one-hot vectors:
sample_vectors = []
for ch in sample:
if ch in supported_chars_map:
sample_vectors.append(supported_chars_map[ch])
# Truncate to fixed length:
sample_vectors = sample_vectors[0:sample_vectors_size]
# Pad with 0 vectors:
if len(sample_vectors) < sample_vectors_size:
for i in range(0, sample_vectors_size - len(sample_vectors)):
sample_vectors.append(pad_vector)
return np.array(sample_vectors)
def turn_file_to_vectors(file_name, sample_vectors_size=1024, normalize_whitespace=True, breakup=True):
samples = get_source_snippets(file_name, breakup)
return [turn_sample_to_vector(s, sample_vectors_size, normalize_whitespace) for s in samples]
def get_input_and_labels(root_folder, sample_vectors_size=1024, breakup=True):
X = []
Y = []
for i, lang in enumerate(langs):
print('Processing language:', lang)
# One-hot class label vector:
class_label = [0 for x in range(0, num_classes)]
class_label[i] = 1
# For all files in language folder:
folder = os.path.join(root_folder, lang)
for fn in os.listdir(folder):
if fn.startswith("."):
continue # Skip hidden files and Jupyterlab cache directories
file_name = os.path.join(folder, fn)
sample_vectors = turn_file_to_vectors(file_name,
sample_vectors_size=sample_vectors_size,
breakup=breakup)
for fv in sample_vectors:
X.append(fv) # the sample feature vector
Y.append(class_label) # the class ground-truth
return np.array(X, dtype=np.int8), np.array(Y, dtype=np.int8)
# TODO generalize
langs = [
"C",
"C#",
"C++",
"D",
"Haskell",
"Java",
"JavaScript",
"PHP",
"Python",
"Rust"
]
num_classes = len(langs)
x, y = get_input_and_labels(root_folder=data_dir + source_folder + '/train') #TODO use data folder
# Shuffle data
shuffle_indices = np.random.permutation(np.arange(len(y)))
x_shuffled = x[shuffle_indices]
y_shuffled = y[shuffle_indices]
print('samples shape', x_shuffled.shape)
print('class labels shape:', y_shuffled.shape)
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Activation, Dense, Dropout, Flatten, Input
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Concatenate
# Model Hyperparameters
kernel_sizes = (3, 9, 19)
pooling_sizes = (3, 9, 19)
num_filters = 128
dropout_prob = 0.5
hidden_dims = 128
stage_in = Input(shape=(1024, 68))
convs = []
for i in range(0, len(kernel_sizes)):
conv = Conv1D(filters=num_filters,
kernel_size=kernel_sizes[i],
padding='valid',
activation='relu',
strides=1)(stage_in)
pool = MaxPooling1D(pool_size=pooling_sizes[i])(conv)
flatten = Flatten()(pool)
convs.append(flatten)
if len(kernel_sizes) > 1:
out = Concatenate()(convs)
else:
out = convs[0]
stages = Model(inputs=stage_in, outputs=out)
model = Sequential([
stages,
Dense(hidden_dims, activation='relu'),
Dropout(dropout_prob),
Dense(num_classes, activation='softmax')
])
model.summary()
# Note: also need pydot and GraphViz installed for this.
#from tensorflow.keras.utils import plot_model
#plot_model(model, show_shapes=True, expand_nested=True)
batch_size = 64
num_epochs = 20
val_split = 0.1
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_shuffled, y_shuffled, batch_size=batch_size,
epochs=num_epochs, validation_split=val_split,
verbose=1)
model_folder=str(time.time())
model.save(data_dir + model_folder)
shutil.make_archive(data_dir + output_model_zip.split('.zip')[0], 'zip', data_dir + model_folder)
```
# Final Project: Classifying Flowers
It's nearing the end of the year, and it's time to work on one final project. First we learned about AI, and now we are going to combine it with web scraping. The first thing we will do is create a neural network to classify the flowers. Then I will direct you to a website where you will have to scrape all the images and classify them. Whoever does it with the best accuracy wins. I will show you a benchmark for this competition. We are going to use Keras along with some other libraries we will need to extract data from the files. Here is what we will be doing in this project:
1. Opening and Pre-processing Image Files
2. Data Augmentation
3. Using Pretrained Models
4. Hyperparameter Search
5. Saving a Neural Network
6. Web Scraping
I am trying to give you a more realistic view of what a real data scientist does, but in a short time, so follow along!
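For instance, item 2 (data augmentation) is done later in this notebook with a simple horizontal flip, which doubles the training set without collecting new labels. As a standalone sketch:

```python
import numpy as np

def augment_hflip(images, labels):
    """Double a dataset by adding horizontally flipped copies."""
    flipped = [np.flip(img, axis=1) for img in images]  # flip along width
    return images + flipped, labels + labels
```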
```
import numpy as np
from PIL import Image
from keras.applications import *
from keras.models import Model
from keras.layers import Input, Dense
from keras import backend as K
import os
```
## Opening Image Files
This is a very straightforward step. We can automatically extract images from an archived folder. Let's see what kinds of flowers are in this dataset:
```
flowers = os.listdir('flowers')
print (flowers)
```
Ok. So we have 5 classes: Rose, Dandelion, Sunflower, Tulip, and Daisy. Each folder has images of the respective class in it. If you read the description of the dataset, the images' dimensions are not all the same! We will have to resize them! Since I will be using a pretrained model and fine-tuning it, I will have to resize them to a specific size, regardless of the dimensions they were before. I will resize them to 224x224. We will use `os.listdir('../input/flowers/flowers/flower_name')` for each folder so we can get their paths. Then we can open the images!
<br>
Here is how we are going to do this: We will create a dictionary, where each key is a label. Then we will create a list corresponding to each key. We will go over all the files and add their paths to their respective label.
```
paths = {'rose': [],
'dandelion': [],
'sunflower': [],
'tulip': [],
'daisy': []
}
for key, images in paths.items():
for filename in os.listdir('flowers/flowers/'+key):
        paths[key].append('flowers/flowers/'+key+'/'+filename)
print (len(images),key,'images')
X = []
Y = []
mapping = {'rose': 0,
'dandelion': 1,
'sunflower': 2,
'tulip': 3,
'daisy': 4
}
for label,image_paths in paths.items():
for path in image_paths:
if '.py' not in path:
image = Image.open(path)
image = image.resize((224,224))
X.append(np.array(image))
one_hot = np.array([0.,0.,0.,0.,0.])
one_hot[mapping[label]] = 1.
Y.append(one_hot)
aug_X = []
aug_Y = []
for image in X:
aug_X.append(np.flip(image,1))
aug_Y = Y
X = X + aug_X
Y = Y + aug_Y
len(X)
from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import Dense
from keras import backend as K
base_model = ResNet50(weights=None, include_top=False, input_shape=(224,224,3))
base_model.load_weights('../input/keras-pretrained-models/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5')
for layer in base_model.layers:
layer.trainable = False
output = base_model.output
from keras.layers import Flatten
output = Flatten()(output)
output = Dense(5, activation='softmax')(output)
model = Model(inputs=base_model.input, outputs=output)
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(np.stack(X,axis=0),np.stack(Y,axis=0),validation_split=0.1,batch_size=8,epochs=15,verbose=1)
```
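The augmentation above relies on `np.flip(image, 1)`: axis 1 is the width axis, so each photo is mirrored left-to-right while its class label stays valid. A minimal sketch of what that flip does on a tiny array:

```python
import numpy as np

# A 1x3 "image" with 3 channels, standing in for a 224x224x3 photo.
img = np.array([[[1, 1, 1], [2, 2, 2], [3, 3, 3]]])

# Axis 1 is the width axis, so this mirrors the image left-to-right.
mirrored = np.flip(img, 1)
```

Flipping twice recovers the original image, which is a quick way to sanity-check the axis choice.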
# Preferential Bayesian Optimization: Dueling-Thompson Sampling
Implementation of the dueling-Thompson sampling algorithm by González et al. (2017).
```
import numpy as np
import gpflow
import tensorflow as tf
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import sys
import os
import datetime
import pickle
from gpflow.utilities import set_trainable, print_summary
gpflow.config.set_default_summary_fmt("notebook")
sys.path.append(os.path.split(os.path.split(os.path.split(os.getcwd())[0])[0])[0]) # Move 3 levels up directory to import project files as module
import importlib
PBO = importlib.import_module("Top-k-Ranking-Bayesian-Optimization")
gpu_to_use = 0
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only use the first GPU
try:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
tf.config.experimental.set_visible_devices(gpus[gpu_to_use], 'GPU')
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
except RuntimeError as e:
# Visible devices must be set before GPUs have been initialized
print(e)
def log(message):
print(str(datetime.datetime.now()) + ': ' + message)
features = pickle.load( open( "sushi_features.p", "rb" ) )
fvals = pickle.load( open( "fvals.p", "rb" ) )
# construct dict
feat_to_fval_dict = {}
for i in range(len(features)):
key = features[i].data.tobytes()
feat_to_fval_dict[key] = fvals[i]
objective = lambda x: PBO.objectives.sushi(x, feat_to_fval_dict)
objective_low = np.min(features)
objective_high = np.max(features)
objective_name = "SUSHI"
acquisition_name = "DTS"
experiment_name = acquisition_name + "_" + objective_name
num_runs = 10
num_evals = 35
num_choices = 2
input_dims = 6
num_init_prefs = 10
num_fourier_features = 200
num_in_subset = 100
regularizer_lengthscale_mean_over_range = 0.2
regularizer_lengthscale_std_over_range = 0.5
input_range = objective_high - objective_low
lengthscale_mean_regularizer = input_range * regularizer_lengthscale_mean_over_range
lengthscale_std_regularizer = input_range * regularizer_lengthscale_std_over_range
lengthscale = lengthscale_mean_regularizer
results_dir = os.getcwd() + '/results/' + experiment_name + '/'
try:
# Create target Directory
os.makedirs(results_dir)
print("Directory " , results_dir , " created ")
except FileExistsError:
print("Directory " , results_dir , " already exists")
def std_representation(X, num_choices):
"""
:param X: tensor of shape (num_data, input_dims * num_choices)
:return: tensor of shape (num_data, num_choices, input_dims)
"""
input_dims = X.shape[-1] // num_choices
ret_val = np.zeros((X.shape[0], num_choices, input_dims))
for i in range(num_choices):
ret_val[:, i, :] = X[:, input_dims*i:input_dims*(i+1)]
return ret_val
def get_noisy_observation_dts(X, objective):
"""
:param X: tensor of shape (num_data, input_dims * 2)
:param objective: objective function
"""
num_data = X.shape[0]
X_std = std_representation(X, num_choices) # (num_data, num_choices, input_dims)
f = PBO.objectives.objective_get_f_neg(X_std, objective)
obs = np.array(PBO.observation_model.gen_observation_from_f(X_std, f, 1)) # (num_data, 1, input_dims)
ret_val = np.zeros((num_data, 1), dtype=np.int8)
for i in range(num_data):
if np.allclose(X_std[i, 0], obs[i, 0]):
ret_val[i] = 1
return ret_val
@tf.function
def lengthscale_regularizer(kernel): # for product kernel
loss = 0
for k in kernel.kernels:
loss += 0.5 * tf.reduce_sum(tf.square((k.lengthscale - lengthscale_mean_regularizer) / lengthscale_std_regularizer))
return loss
def train_and_visualize(X, y, lengthscale, title, num_steps=3000):
kernel = gpflow.kernels.Product([gpflow.kernels.RBF(lengthscale=lengthscale,
active_dims=[i, i+input_dims])
for i in range(input_dims)])
m = gpflow.models.SVGP(kernel=kernel,
likelihood=gpflow.likelihoods.Bernoulli(invlink=tf.math.sigmoid),
inducing_variable=X,
whiten=False)
m.inducing_variable.Z.trainable = False
optimizer = tf.keras.optimizers.RMSprop(rho=0.0)
loss = lambda: -m.log_likelihood(X, y) + lengthscale_regularizer(m.kernel)
prev_loss = loss().numpy()
for i in range(num_steps):
optimizer.minimize(loss, m.trainable_variables)
current_loss = loss().numpy()
if i % 500 == 0:
print('Loss at step %s: %s' % (i, current_loss))
if abs((current_loss-prev_loss) / prev_loss) < 1e-7:
print('Loss at step %s: %s' % (i, current_loss))
break
prev_loss = current_loss
return m
def flip(X):
"""
:param X: tensor of shape (num_data, input_dims * 2)
:return: tensor of shape (num_data, input_dims * 2), where the first input_dims is swapped with the second
"""
input_dims = X.shape[-1] // 2
ret_val = np.zeros((X.shape))
for i in range(X.shape[0]):
ret_val[i, :input_dims] = X[i, input_dims:]
ret_val[i, input_dims:] = X[i, :input_dims]
return ret_val
def flip_y(y):
"""
:param y: tensor of shape (num_data, 1), with int values either 0 or 1
"""
return (y + 1) % 2
```
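`flip` and `flip_y` above exist to add the mirror image of every duel to the dataset: observing that x beat x' also tells us that x' lost to x, which keeps the learned preference probabilities symmetric. A self-contained sketch of the same idea (with hypothetical names `flip_pair`/`flip_label`):

```python
import numpy as np

def flip_pair(X):
    # Swap the two items in each duel: (x, x') -> (x', x).
    d = X.shape[-1] // 2
    return np.concatenate([X[:, d:], X[:, :d]], axis=1)

def flip_label(y):
    # If x won the duel (y=1), x loses the flipped duel (y=0), and vice versa.
    return (y + 1) % 2

X = np.array([[0.1, 0.2, 0.9, 0.8]])  # one duel between two 2-dim inputs
y = np.array([[1]])                   # the first item won
X_sym, y_sym = flip_pair(X), flip_label(y)
```

Stacking `(X, y)` with `(X_sym, y_sym)`, as the run loop does, doubles the data at no observation cost.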
Generate rank dictionary and immediate regret dictionary.
```
fval_idx_tuples = pickle.load(open("fval_idx_tuples.p", "rb"))
rank_dict = {}
for i in range(len(fval_idx_tuples)):
rank_dict[features[fval_idx_tuples[i][1]].data.tobytes()] = i + 1
```
This function is our main metric for the performance of the acquisition function.
```
def get_max_sushi(m, features, combs, rank_dict):
"""
Specifically for DTS
:param model: gpflow model
:param features: sushi features
:param rank_dict: dictionary from sushi idx to place in ranking
:return: tuple (index of max sushi, rank)
"""
y_vals = m.predict_y(combs)[0]
num_discrete_points = int(np.sqrt(y_vals.shape[0]))
soft_copeland = np.mean(np.reshape(y_vals,
[num_discrete_points, num_discrete_points]), axis=1) # (num_discrete_points)
max_idx = np.argmax(soft_copeland)
return (max_idx, rank_dict[features[max_idx].data.tobytes()])
```
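The soft-Copeland score computed above is each candidate's win probability averaged over all possible opponents; the candidate maximising that average is reported as the best sushi. A toy version on a hand-written preference matrix:

```python
import numpy as np

# p[i, j] = P(candidate i beats candidate j); each row averages to that
# candidate's soft-Copeland score.
p = np.array([[0.5, 0.9, 0.8],
              [0.1, 0.5, 0.6],
              [0.2, 0.4, 0.5]])

soft_copeland = p.mean(axis=1)
best = int(np.argmax(soft_copeland))
```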
Create the initial values for each run:
```
np.random.seed(0)
random_indices = np.zeros([num_runs, num_init_prefs, num_choices], dtype=np.int64)
for i in range(num_runs):
random_indices[i] = np.random.choice(features.shape[0], [num_init_prefs, num_choices], replace=False)
init_vals = np.take(features, random_indices, axis=0)
init_vals = np.reshape(init_vals, (num_runs, num_init_prefs, num_choices * input_dims))
```
Store the results in these arrays:
```
num_data_at_end = (num_init_prefs + num_evals) * 2
X_results = np.zeros([num_runs, num_data_at_end, input_dims * num_choices])
y_results = np.zeros([num_runs, num_data_at_end, 1])
immediate_regret = np.zeros([num_runs, num_evals], np.int32)
def array_in(a, b):
"""
a: 1-D array with shape (d, )
b: 2-D array with shape (n, d)
:return: bool
"""
for i in range(b.shape[0]):
if np.allclose(a, b[i]):
return True
return False
for run in range(num_runs):
log("Starting run {}".format(run))
#Fit a GP with kernel k to Dn
X = init_vals[run]
y = get_noisy_observation_dts(X, objective)
X = np.vstack([X, flip(X)])
y = np.vstack([y, flip_y(y)])
model = train_and_visualize(X, y, lengthscale=lengthscale, title="Run_{}_Initial_model".format(run))
for evaluation in range(num_evals):
log("Starting evaluation " + str(evaluation))
is_valid_query = False
num_tries = 0
while not is_valid_query:
discrete_space = features
combs = PBO.acquisitions.dts.combinations(features)
# Sample f using RFF
f_vals = PBO.acquisitions.dts.sample_f(model, X, combs, num_fourier_features)
# 2 and 3. Compute the acquisition for duels alpha and get next duel
log("Computing acquisition function")
x_next = PBO.acquisitions.dts.soft_copeland_maximizer(f_vals, discrete_space)
all_pairs = np.concatenate([np.tile(x_next, (discrete_space.shape[0], 1)), discrete_space], axis=1)
next_vars = np.squeeze(PBO.acquisitions.dts.variance_logistic_f(model, all_pairs),
axis=1)
xprime_next = discrete_space[np.argmax(next_vars)]
x_xprime_next = np.expand_dims(np.concatenate([x_next, xprime_next]), axis=0)
# If both x and x' are equal, or the query has already been made, will cause Fourier features matrix
# to become non-invertible later on
if np.all(np.equal(x_xprime_next, flip(x_xprime_next))) or array_in(x_xprime_next, X):
log("Invalid query, resampling f")
print("X:")
print(X)
print("Attempted query:")
print(x_xprime_next)
num_tries += 1
if num_tries >= 10:
raise ValueError
else:
log("x and x_prime: \n" + str(x_xprime_next))
is_valid_query = True
# 4. Run the duel and get y
y_next = get_noisy_observation_dts(x_xprime_next, objective)
log("y_next: \n" + str(y_next))
# 5. Augment X and Y, and add symmetric points
X = np.vstack([X, x_xprime_next, flip(x_xprime_next)])
y = np.vstack([y, y_next, flip_y(y_next)])
# Fit a GP with kernel k to Dj and learn pi(x).
model = train_and_visualize(X, y, lengthscale=lengthscale, title="Run_{}_Evaluation_{}".format(run, evaluation))
# Save model
kernels_variance = []
kernels_lengthscale = []
for k in model.kernel.kernels:
kernels_variance.append(k.variance.numpy())
kernels_lengthscale.append(k.lengthscale.numpy())
pickle.dump((X, y,
tuple(kernels_variance),
tuple(kernels_lengthscale),
model.q_mu.numpy(),
model.q_sqrt.numpy()),
open(results_dir + "Model_Run_{}_Evaluation_{}.p".format(run, evaluation), "wb"))
(max_idx, rank) = get_max_sushi(model, features, combs, rank_dict)
immediate_regret[run, evaluation] = rank - 1
print("Maximizing sushi has index {} and rank {}".format(max_idx, rank))
X_results[run] = X
y_results[run] = y
print("Run {} immediate regret: ".format(run))
print(immediate_regret[run])
pickle.dump((X_results, y_results, immediate_regret), open(results_dir + "res.p", "wb"))
ir = immediate_regret
mean = np.mean(ir, axis=0)
std_dev = np.std(ir, axis=0)
std_err = std_dev / np.sqrt(ir.shape[0])
print("Mean immediate regret at each evaluation averaged across all runs:")
print(mean)
print("Standard error of immediate regret at each evaluation averaged across all runs:")
print(std_err)
with open(results_dir + acquisition_name + "_" + objective_name + "_" + "mean_sem" + ".txt", "w") as text_file:
print("Mean immediate regret at each evaluation averaged across all runs:", file=text_file)
print(mean, file=text_file)
print("Standard error of immediate regret at each evaluation averaged across all runs:", file=text_file)
print(std_err, file=text_file)
pickle.dump((mean, std_err), open(results_dir + acquisition_name + "_" + objective_name + "_" + "mean_sem.p", "wb"))
```
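The summary above reports the standard error of the mean regret, `std / sqrt(num_runs)`, which shrinks as more runs are averaged. A toy version of the same computation:

```python
import numpy as np

# Immediate regret for 3 runs x 3 evaluations (made-up numbers).
runs = np.array([[3., 2., 1.],
                 [5., 2., 1.],
                 [4., 2., 1.]])

mean = runs.mean(axis=0)                      # per-evaluation mean regret
sem = runs.std(axis=0) / np.sqrt(runs.shape[0])  # standard error of that mean
```

Columns where all runs agree have zero standard error; noisier columns get wider error bars.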
# Entities Recognition
<div class="alert alert-info">
This tutorial is available as an IPython notebook at [Malaya/example/entities](https://github.com/huseinzol05/Malaya/tree/master/example/entities).
</div>
<div class="alert alert-warning">
This module was only trained on standard language structure, so it is not safe to use on local (colloquial) language structure.
</div>
```
%%time
import malaya
```
### Describe supported entities
```
import pandas as pd
pd.set_option('display.max_colwidth', -1)
malaya.entity.describe()
```
### Describe supported Ontonotes 5 entities
```
malaya.entity.describe_ontonotes5()
```
### List available Transformer NER models
```
malaya.entity.available_transformer()
```
Make sure to check the accuracy chart here before selecting a model: https://malaya.readthedocs.io/en/latest/models-accuracy.html#Entities-Recognition
### List available Transformer NER Ontonotes 5 models
```
malaya.entity.available_transformer_ontonotes5()
```
Make sure to check the accuracy chart here before selecting a model: https://malaya.readthedocs.io/en/latest/models-accuracy.html#Entities-Recognition-Ontonotes5
```
string = 'KUALA LUMPUR: Sempena sambutan Aidilfitri minggu depan, Perdana Menteri Tun Dr Mahathir Mohamad dan Menteri Pengangkutan Anthony Loke Siew Fook menitipkan pesanan khas kepada orang ramai yang mahu pulang ke kampung halaman masing-masing. Dalam video pendek terbitan Jabatan Keselamatan Jalan Raya (JKJR) itu, Dr Mahathir menasihati mereka supaya berhenti berehat dan tidur sebentar sekiranya mengantuk ketika memandu.'
string1 = 'memperkenalkan Husein, dia sangat comel, berumur 25 tahun, bangsa melayu, agama islam, tinggal di cyberjaya malaysia, bercakap bahasa melayu, semua membaca buku undang-undang kewangan, dengar laju Siti Nurhaliza - Seluruh Cinta sambil makan ayam goreng KFC'
```
### Load Transformer model
```python
def transformer(model: str = 'xlnet', quantized: bool = False, **kwargs):
"""
Load Transformer Entity Tagging model trained on Malaya Entity, transfer learning Transformer + CRF.
Parameters
----------
model : str, optional (default='bert')
Model architecture supported. Allowed values:
* ``'bert'`` - Google BERT BASE parameters.
* ``'tiny-bert'`` - Google BERT TINY parameters.
* ``'albert'`` - Google ALBERT BASE parameters.
* ``'tiny-albert'`` - Google ALBERT TINY parameters.
* ``'xlnet'`` - Google XLNET BASE parameters.
* ``'alxlnet'`` - Malaya ALXLNET BASE parameters.
* ``'fastformer'`` - FastFormer BASE parameters.
* ``'tiny-fastformer'`` - FastFormer TINY parameters.
quantized : bool, optional (default=False)
if True, will load 8-bit quantized model.
Quantized model not necessary faster, totally depends on the machine.
Returns
-------
result: model
List of model classes:
* if `bert` in model, will return `malaya.model.bert.TaggingBERT`.
* if `xlnet` in model, will return `malaya.model.xlnet.TaggingXLNET`.
* if `fastformer` in model, will return `malaya.model.fastformer.TaggingFastFormer`.
"""
```
```
model = malaya.entity.transformer(model = 'alxlnet')
```
#### Load Quantized model
To load an 8-bit quantized model, simply pass `quantized = True`; the default is `False`.
We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it totally depends on the machine.
```
quantized_model = malaya.entity.transformer(model = 'alxlnet', quantized = True)
```
#### Predict
```python
def predict(self, string: str):
"""
Tag a string.
Parameters
----------
string : str
Returns
-------
result: Tuple[str, str]
"""
```
```
model.predict(string)
model.predict(string1)
quantized_model.predict(string)
quantized_model.predict(string1)
```
#### Group similar tags
```python
def analyze(self, string: str):
"""
Analyze a string.
Parameters
----------
string : str
Returns
-------
result: {'words': List[str], 'tags': [{'text': 'text', 'type': 'location', 'score': 1.0, 'beginOffset': 0, 'endOffset': 1}]}
"""
```
```
model.analyze(string)
model.analyze(string1)
```
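`analyze` groups the per-word tags returned by `predict` into contiguous entity spans. The core idea is just merging consecutive tokens that share a tag; here is a rough, self-contained sketch of that grouping (not Malaya's actual implementation):

```python
def group_tags(tagged):
    # tagged: list of (word, tag) pairs; merge consecutive runs of the same tag.
    groups = []
    for word, tag in tagged:
        if groups and groups[-1]['type'] == tag:
            groups[-1]['text'] += ' ' + word
        else:
            groups.append({'text': word, 'type': tag})
    return groups

example = [('Kuala', 'location'), ('Lumpur', 'location'), ('ialah', 'OTHER')]
spans = group_tags(example)
```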
#### Vectorize
Let's say you want to visualize word-level vectors in a lower dimension; you can use `model.vectorize`,
```python
def vectorize(self, string: str):
"""
vectorize a string.
Parameters
----------
string: List[str]
Returns
-------
result: np.array
"""
```
```
strings = [string,
'Husein baca buku Perlembagaan yang berharga 3k ringgit dekat kfc sungai petani minggu lepas, 2 ptg 2 oktober 2019 , suhu 32 celcius, sambil makan ayam goreng dan milo o ais',
'contact Husein at husein.zol05@gmail.com',
'tolong tempahkan meja makan makan nasi dagang dan jus apple, milo tarik esok dekat Restoran Sebulek']
r = [quantized_model.vectorize(string) for string in strings]
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
```
Pretty good; the model is able to cluster similar entities.
### Load Transformer Ontonotes 5 model
```python
def transformer_ontonotes5(
model: str = 'xlnet', quantized: bool = False, **kwargs
):
"""
Load Transformer Entity Tagging model trained on Ontonotes 5 Bahasa, transfer learning Transformer + CRF.
Parameters
----------
model : str, optional (default='bert')
Model architecture supported. Allowed values:
* ``'bert'`` - Google BERT BASE parameters.
* ``'tiny-bert'`` - Google BERT TINY parameters.
* ``'albert'`` - Google ALBERT BASE parameters.
* ``'tiny-albert'`` - Google ALBERT TINY parameters.
* ``'xlnet'`` - Google XLNET BASE parameters.
* ``'alxlnet'`` - Malaya ALXLNET BASE parameters.
* ``'fastformer'`` - FastFormer BASE parameters.
* ``'tiny-fastformer'`` - FastFormer TINY parameters.
quantized : bool, optional (default=False)
if True, will load 8-bit quantized model.
Quantized model not necessary faster, totally depends on the machine.
Returns
-------
result: model
List of model classes:
* if `bert` in model, will return `malaya.model.bert.TaggingBERT`.
* if `xlnet` in model, will return `malaya.model.xlnet.TaggingXLNET`.
* if `fastformer` in model, will return `malaya.model.fastformer.TaggingFastFormer`.
"""
```
```
albert = malaya.entity.transformer_ontonotes5(model = 'albert')
alxlnet = malaya.entity.transformer_ontonotes5(model = 'alxlnet')
```
#### Load Quantized model
To load an 8-bit quantized model, simply pass `quantized = True`; the default is `False`.
We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it totally depends on the machine.
```
quantized_albert = malaya.entity.transformer_ontonotes5(model = 'albert', quantized = True)
quantized_alxlnet = malaya.entity.transformer_ontonotes5(model = 'alxlnet', quantized = True)
```
#### Predict
```python
def predict(self, string: str):
"""
Tag a string.
Parameters
----------
string : str
Returns
-------
result: Tuple[str, str]
"""
```
```
albert.predict(string)
alxlnet.predict(string)
albert.predict(string1)
alxlnet.predict(string1)
quantized_albert.predict(string)
quantized_alxlnet.predict(string1)
```
#### Group similar tags
```python
def analyze(self, string: str):
"""
Analyze a string.
Parameters
----------
string : str
Returns
-------
result: {'words': List[str], 'tags': [{'text': 'text', 'type': 'location', 'score': 1.0, 'beginOffset': 0, 'endOffset': 1}]}
"""
```
```
alxlnet.analyze(string1)
```
#### Vectorize
Let's say you want to visualize word-level vectors in a lower dimension; you can use `model.vectorize`,
```python
def vectorize(self, string: str):
"""
vectorize a string.
Parameters
----------
string: List[str]
Returns
-------
result: np.array
"""
```
```
strings = [string, string1]
r = [quantized_model.vectorize(string) for string in strings]
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
```
Pretty good; the model is able to cluster similar entities.
### Load general Malaya entity model
This model is able to classify:
1. date
2. money
3. temperature
4. distance
5. volume
6. duration
7. phone
8. email
9. url
10. time
11. datetime
12. local and generic foods, can check available rules in malaya.texts._food
13. local and generic drinks, can check available rules in malaya.texts._food
We can plug BERT or any deep learning model into these rules by passing `malaya.entity.general_entity(model = model)`, as long as the model has a `predict` method that returns `[(string, label), (string, label)]`. This is optional.
```
entity = malaya.entity.general_entity(model = model)
entity.predict('Husein baca buku Perlembagaan yang berharga 3k ringgit dekat kfc sungai petani minggu lepas, 2 ptg 2 oktober 2019 , suhu 32 celcius, sambil makan ayam goreng dan milo o ais')
entity.predict('contact Husein at husein.zol05@gmail.com')
entity.predict('tolong tempahkan meja makan makan nasi dagang dan jus apple, milo tarik esok dekat Restoran Sebulek')
```
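On top of the deep model, `general_entity` applies rule-based extractors for the pattern-like classes listed above (emails, URLs, phones, and so on). A toy, library-independent sketch of two such regex rules (not Malaya's actual patterns):

```python
import re

# Simplified stand-ins for rule-based extractors; the real rules are richer.
EMAIL = re.compile(r'[\w.+-]+@[\w-]+\.[\w.]+')
URL = re.compile(r'https?://\S+')

text = 'contact Husein at husein.zol05@gmail.com or https://malaya.readthedocs.io'
emails = EMAIL.findall(text)
urls = URL.findall(text)
```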
### Voting stack model
```
malaya.stack.voting_stack([albert, alxlnet, alxlnet], string1)
```
**[WGT-01]**
Specify the TensorFlow version.
```
%tensorflow_version 2.x
```
**[WGT-02]**
Import modules.
```
import numpy as np
import copy, random, time
from tensorflow.keras import layers, models
from IPython.display import clear_output
```
**[WGT-03]**
Define a function to get the field data.
```
def get_field():
field_img = '''
##############
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
# #
##############
'''
field = []
for line in field_img.split('\n'):
if line == '':
continue
field.append(list(line))
return field
```
**[WGT-04]**
Define the Environ class.
```
class Environ:
def __init__(self):
self.action_map = [(0, 1), (1, 0), (0, -1), (-1, 0)]
self.restart()
def restart(self):
self.field = get_field()
for _ in range(10):
y = np.random.randint(1, 13)
x = np.random.randint(1, 13)
self.field[y][x] = 'x'
def move(self, s, a):
x, y = s
dx, dy = self.action_map[a]
self.field[y][x] = '+'
x += dx
y += dy
s_new = (x, y)
if self.field[y][x] != ' ':
return 0, s_new, True # Reward, Next position, Is game over?
return 1, s_new, False # Reward, Next position, Is game over?
def get_state(self, s):
x, y = s
walls = [[0.0 if c == ' ' else 1.0 for c in line] for line in self.field]
walker = np.zeros((14, 14))
walker[y][x] = 1.0
state = np.zeros((14, 14, 2))
state[:, :, 0] = walls
state[:, :, 1] = walker
return state.tolist()
```
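`move` marks every visited cell with `'+'`, so stepping onto anything other than a blank cell (a wall, an obstacle `'x'`, or the agent's own trail) ends the episode with reward 0, while each safe step earns reward 1. A stripped-down version of that rule:

```python
def step_reward(cell):
    # cell is the character at the destination; only ' ' is safe to enter.
    if cell != ' ':
        return 0, True   # reward, game over
    return 1, False

r_wall, over_wall = step_reward('#')
r_free, over_free = step_reward(' ')
```

Because the trail itself is lethal, the agent is rewarded for walking as long a self-avoiding path as possible.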
**[WGT-05]**
Define the QValue class.
```
class QValue:
def __init__(self):
self.model = self.build_model()
def build_model(self):
cnn_input = layers.Input(shape=(14, 14, 2))
cnn = layers.Conv2D(8, (5, 5), padding='same', use_bias=True,
activation='relu')(cnn_input)
cnn_flatten = layers.Flatten()(cnn)
action_input = layers.Input(shape=(4,))
combined = layers.concatenate([cnn_flatten, action_input])
hidden1 = layers.Dense(2048, activation='relu')(combined)
hidden2 = layers.Dense(1024, activation='relu')(hidden1)
q_value = layers.Dense(1)(hidden2)
model = models.Model(inputs=[cnn_input, action_input], outputs=q_value)
model.compile(loss='mse')
return model
def get_action(self, state):
states = []
actions = []
for a in range(4):
states.append(np.array(state))
action_onehot = np.zeros(4)
action_onehot[a] = 1
actions.append(action_onehot)
q_values = self.model.predict([np.array(states), np.array(actions)])
optimal_action = np.argmax(q_values)
return optimal_action, q_values[optimal_action][0]
```
**[WGT-06]**
Define a function to get a single episode.
```
def get_episode(environ, q_value, epsilon):
episode = []
trace = []
environ.restart()
s = (np.random.randint(1, 13), np.random.randint(1, 13))
while True:
trace.append(s)
state = environ.get_state(s)
if np.random.random() < epsilon:
a = np.random.randint(4)
else:
a, _ = q_value.get_action(state)
r, s_new, game_over = environ.move(s, a)
if game_over:
state_new = None
else:
state_new = environ.get_state(s_new)
episode.append((state, a, r, state_new))
if game_over:
break
s = s_new
return episode, trace
```
**[WGT-07]**
Define a function to show a sample episode.
```
def show_sample(environ, q_value):
_, trace = get_episode(environ, q_value, epsilon=0)
display = copy.deepcopy(environ.field)
display = [[' ' if c == '+' else c for c in line] for line in display]
for s in trace:
x, y = s
display[y][x] = '*'
time.sleep(0.5)
clear_output(wait=True)
for line in display:
print(''.join(line))
display[y][x] = '+'
print('Length: {}'.format(len(trace)))
```
**[WGT-08]**
Define a function to train the model.
```
def train(environ, q_value, num):
experience = []
for c in range(num):
print()
print('Iteration {}'.format(c+1))
print('Collecting data', end='')
for n in range(50):
print('.', end='')
if n % 10 == 0:
epsilon = 0
else:
epsilon = 0.2
episode, _ = get_episode(environ, q_value, epsilon)
experience += episode
if len(experience) > 10000:
experience = experience[-10000:]
if len(experience) < 1000:
continue
print()
print('Training the model...')
examples = experience[-200:] + random.sample(experience[:-200], 400)
np.random.shuffle(examples)
states, actions, labels = [], [], []
for state, a, r, state_new in examples:
states.append(np.array(state))
action_onehot = np.zeros(len(environ.action_map))
action_onehot[a] = 1
actions.append(action_onehot)
if not state_new: # Terminal state
q_new = 0
else:
_, q_new = q_value.get_action(state_new)
labels.append(np.array(r + q_new))
q_value.model.fit([np.array(states), np.array(actions)], np.array(labels),
batch_size=50, epochs=100, verbose=0)
show_sample(environ, q_value)
```
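The labels built in `train` are one-step Q-learning targets: the observed reward plus the best next-state Q-value, with the bootstrap term dropped at terminal states (and no discount factor, matching the code above). In plain numbers (hypothetical values):

```python
def q_target(reward, q_next, terminal):
    # One-step target: bootstrap from the next state's best Q unless the episode ended.
    return reward if terminal else reward + q_next

label_mid = q_target(reward=1.0, q_next=3.5, terminal=False)  # mid-episode step
label_end = q_target(reward=0.0, q_next=3.5, terminal=True)   # crashed: no bootstrap
```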
**[WGT-09]**
Create an Environ instance and a QValue instance.
```
environ = Environ()
q_value = QValue()
q_value.model.summary()
```
**[WGT-10]**
Train the model.
```
train(environ, q_value, num=50)
```
**[WGT-11]**
Show a sample episode using the trained model.
```
show_sample(environ, q_value)
```
**[WGT-12]**
Mount Google Drive on the runtime environment.
```
from google.colab import drive
drive.mount('/content/gdrive')
```
**[WGT-13]**
Save the trained model.
```
q_value.model.save('/content/gdrive/My Drive/walk_game_model.hd5', save_format='h5')
!ls -l '/content/gdrive/My Drive/walk_game_model.hd5'
```
# binary classification example - titanic dataset
```
import warnings
warnings.filterwarnings('ignore')
%load_ext autoreload
%autoreload 2
import copy
import numpy as np
import pandas as pd
import databricks.koalas as ks
from pandas.testing import assert_frame_equal
from pandas.testing import assert_series_equal
from xgboost import XGBClassifier
import treelite
import treelite_runtime
import dill
from sklearn.metrics import make_scorer
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier
from hyperopt import hp
from hyperopt import tpe
from pyspark import SparkConf, SparkContext
conf = SparkConf()
conf.set('spark.executor.memory', '2g')
SparkContext(conf=conf)
ks.set_option('compute.default_index_type', 'distributed-sequence')
from gators.util import util
# data cleaning
from gators.data_cleaning import (
DropHighNaNRatio,
DropLowCardinality,
DropHighCardinality,
DropDatatypeColumns,
DropColumns,
KeepColumns,
Replace,
)
# imputers
from gators.imputers import (
FloatImputer,
IntImputer,
ObjectImputer,
)
# encoders
from gators.encoders import (
OrdinalEncoder,
WOEEncoder,
OneHotEncoder
)
# binning
from gators.binning import BinRareEvents
# feature generation
from gators.feature_generation import (
ClusterStatistics,
PolynomialFeatures,
)
from gators.feature_generation_str import (
StringContains,
StringLength,
Extract,
SplitExtract,
)
# feature selection
from gators.feature_selection import (
SelectFromModel,
InformationValue
)
#frame converter
from gators.converter import KoalasToPandas
# model building
from gators.model_building import (
TrainTestSplit,
HyperOpt,
XGBBoosterBuilder,
XGBTreeliteDumper,
)
# pipeline
from gators.pipeline import Pipeline
```
## offline pipeline - feature selection
### with pandas
```
data = pd.read_parquet('data/titanic.parquet')
data = data.reset_index(drop=True)
y = data['Survived']
X = data.drop('Survived', axis=1)
train_test_split = TrainTestSplit(test_ratio=0.3, strategy='ordered')
X_train, X_test, y_train, y_test = train_test_split.transform(X, y)
X_train.head(2)
label_counts = data['Survived'].value_counts(normalize=True).sort_values().iloc[::-1]
label_counts.plot(
kind='barh',
figsize=(4, 3),
title='Label Distribution',
);
```
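`TrainTestSplit` with `strategy='ordered'` splits by row order rather than by random sampling. Assuming the common convention that the last `test_ratio` fraction of rows becomes the test set (gators' exact convention may differ), a rough numpy sketch:

```python
import numpy as np

def ordered_split(X, test_ratio):
    # Keep row order: first (1 - test_ratio) of rows train, last test_ratio test.
    n_test = int(round(len(X) * test_ratio))
    return X[:-n_test], X[-n_test:]

X = np.arange(10)
X_train, X_test = ordered_split(X, test_ratio=0.3)
```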
### offline pipeline - data preprocessing & feature selection
```
k = 14
model = XGBClassifier(
random_state=0,
eval_metric='logloss',
)
offline_steps = [
StringLength(['Ticket', 'Cabin', 'Name']),
SplitExtract(['Name'], [', '], [1], ['Dummy']),
SplitExtract(['Dummy'], ['.'], [0], ['Title']),
DropColumns(['Ticket', 'Cabin', 'Name', 'Dummy']),
ObjectImputer(strategy='constant', value='MISSING'),
Replace(
to_replace_dict=
{'Title': {'Ms': 'Miss', 'Mme': 'Mrs', 'Mlle': 'Miss'}}
),
BinRareEvents(min_ratio=0.1),
WOEEncoder(),
FloatImputer(strategy='mean'),
IntImputer(strategy='constant', value=-1),
ClusterStatistics(
clusters_dict={'FamilyStat': ['SibSp', 'Parch']},
column_names=['FamilySize', 'FamilyStd'],
),
PolynomialFeatures(
columns=['Pclass', 'FamilySize', 'Fare', 'Age'],
degree=2, interaction_only=True),
DropColumns(columns=['SibSp', 'Parch']),
SelectFromModel(model=model, k=k)
]
offline_pipe = Pipeline(steps=offline_steps)
X_train_prepro = offline_pipe.fit_transform(X_train, y_train)
X_train_prepro_np = offline_pipe.transform_numpy(X_train.to_numpy())
features = list(X_train_prepro.columns)
assert np.allclose(
X_train_prepro.to_numpy(),
X_train_prepro_np
)
offline_pipe.get_feature_importances(k=k)[::-1].plot(
kind='barh',
figsize=(7, 5),
title='Feature Importance',
);
```
### koalas
```
data_ks = ks.read_parquet('data/titanic.parquet', index_col='PassengerId')
y_ks = data_ks['Survived']
X_ks = data_ks.drop('Survived', axis=1)
train_test_split = TrainTestSplit(test_ratio=0.3, strategy='ordered')
X_train_ks, X_test_ks, y_train_ks, y_test_ks = \
train_test_split.transform(X_ks, y_ks)
offline_pipe_ks = Pipeline(steps=offline_steps)
X_train_prepro_ks = offline_pipe_ks.fit_transform(
X_train_ks, y_train_ks)
X_train_prepro_ks_np = offline_pipe_ks.transform_numpy(
X_train_ks.to_numpy())
```
#### check pandas and koalas results match - offline pipeline
```
assert_frame_equal(
X_train_prepro,
X_train_prepro_ks.to_pandas()
)
assert np.allclose(
X_train_prepro_np,
X_train_prepro_ks_np
)
```
## production pipeline
```
prod_columns = offline_pipe.get_production_columns()
prod_steps = [
StringLength(['Ticket', 'Cabin', 'Name']),
SplitExtract(['Name'], [', '], [1], ['Dummy']),
SplitExtract(['Dummy'], ['.'], [0], ['Title']),
DropColumns(['Ticket', 'Cabin', 'Name', 'Dummy']),
ObjectImputer(strategy='constant', value='MISSING'),
Replace(
to_replace_dict=
{'Title': {'Ms': 'Miss', 'Mme': 'Mrs', 'Mlle': 'Miss'}}
),
BinRareEvents(min_ratio=0.1),
WOEEncoder(),
FloatImputer(strategy='mean'),
IntImputer(strategy='constant', value=-1),
ClusterStatistics(
clusters_dict={'FamilyStat': ['SibSp', 'Parch']},
column_names=['FamilySize', 'FamilyStd'],
),
PolynomialFeatures(
columns=['Pclass', 'FamilySize', 'Fare', 'Age'],
degree=2, interaction_only=True),
KeepColumns(columns_to_keep=features),
]
```
### pandas
```
X_train_prod = X_train[prod_columns]
X_train_prod_np = X_train_prod.to_numpy()
prod_pipe = Pipeline(steps=prod_steps)
_ = prod_pipe.fit(X_train_prod, y_train)
X_train_prod_prepro = prod_pipe.transform(X_train_prod)
X_train_prod_prepro_np = prod_pipe.transform_numpy(
X_train_prod.to_numpy())
```
### koalas
```
X_train_prod_ks = X_train_ks[prod_columns]
X_train_prod_ks_np = X_train_prod_ks.to_numpy()
prod_pipe = Pipeline(steps=prod_steps)
_ = prod_pipe.fit(X_train_prod_ks, y_train_ks)
X_train_prod_prepro_ks = prod_pipe.transform(X_train_prod_ks)
X_train_prod_prepro_ks_np = prod_pipe.transform_numpy(
X_train_prod_ks.to_numpy())
```
#### check offline pipeline and production pipeline match
```
assert_frame_equal(
X_train_prepro,
X_train_prod_prepro
)
assert np.allclose(
X_train_prod_prepro_np,
X_train_prepro_np)
```
#### check pandas and koalas results match - production pipeline
```
assert np.allclose(
X_train_prod_prepro_np,
X_train_prod_prepro_ks_np)
assert_frame_equal(
X_train_prod_prepro,
X_train_prod_prepro_ks.to_pandas()
)
```
## model building
### numpy
```
def f1_score(y_true, y_pred):
p = y_true[y_pred == 1].mean()
r = y_pred[y_true == 1].mean()
if (p == 0) | (r == 0):
return 0
return 2 * p * r / (p + r)
f1_scoring = make_scorer(f1_score)
y_train_np = y_train.to_numpy()
n_splits = 10
max_evals = 10
kfold = StratifiedKFold(n_splits=n_splits)
space = {
'n_estimators': hp.quniform('n_estimators', 100, 200, 25),
'max_depth': hp.quniform('max_depth', 2, 3, 1),
'learning_rate': hp.loguniform('learning_rate', -4, -1),
'subsample': hp.uniform('subsample', 0.8, 1),
'colsample_bytree': hp.uniform('colsample_bytree', 0.8, 1),
}
model = XGBClassifier(
random_state=0,
use_label_encoder=False,
)
hyper = HyperOpt(
model=model,
algo=tpe.suggest,
scoring=f1_scoring,
space=space,
max_evals=max_evals,
kfold=kfold,
features=features,
)
_ = hyper.fit(X_train_prod_prepro_np, y_train.to_numpy())
hyper.get_feature_importances().iloc[::-1].plot(
kind='barh',
figsize=(9, 9),
title='Most important features',
);
hyper.history.sort_values('loss')
model_np = copy.copy(hyper.model)
model_path = 'models/model_titanic.dill'
with open(model_path, 'wb') as file:
dill.dump(model_np, file)
```
### pandas
```
model_pd = copy.copy(model_np)
_ = model_pd.fit(X_train_prod_prepro, y_train)
```
### treelite
```
xgb_booster = XGBBoosterBuilder.train(
model=hyper.model,
X_train=X_train_prod_prepro_np,
y_train=y_train_np,
)
treelite_model = treelite.Model.from_xgboost(xgb_booster)
treelite_model.export_lib(
toolchain='gcc',
libpath='./models/treelite_xgb_titanic.so',
params={'parallel_comp': 4},
verbose=True
)
model_tl = treelite_runtime.Predictor(
'./models/treelite_xgb_titanic.so', verbose=False)
```
#### check the models prediction match
```
X_test_prod = X_test[prod_columns]
X_test_prod_np = X_test_prod.to_numpy()
y_pred_pd = model_pd.predict_proba(
prod_pipe.transform(X_test_prod))[:, 1]
y_pred_np = model.predict_proba(
prod_pipe.transform_numpy(X_test_prod_np.copy()))[:, 1]
y_pred_tl = model_tl.predict(
treelite_runtime.DMatrix(
prod_pipe.transform_numpy(
X_test_prod_np.copy()
)
)
)
assert np.allclose(y_pred_np, y_pred_pd)
assert np.allclose(y_pred_np, y_pred_tl)
```
## model evaluation
```
model_np = copy.copy(hyper.model)
X_test_prod = X_test[prod_columns]
X_test_prod_np = X_test_prod.to_numpy()
X_test_prod_prepro_np = prod_pipe.transform(X_test_prod).to_numpy()
y_test_pred = model_np.predict(X_test_prod_prepro_np)
f'f1 score: {f1_score(y_test, y_test_pred):.2f}'
```
## per-sample benchmarking
### pipeline only
```
x = X_test_prod.iloc[[0]]
x_np = x.to_numpy()
stats_pd = %timeit -o prod_pipe.transform(x)
stats_np = %timeit -o prod_pipe.transform_numpy(x_np.copy())
# TimeitResult.average is in seconds, so no string parsing or unit conversion is needed
speedup = stats_pd.average / stats_np.average
f'Speed-up Pandas VS Numpy x{round(speedup)}'
```
### pipeline & model
```
stats_pd = %timeit -o model_pd.predict(prod_pipe.transform(x))[0]
stats_np = %timeit -o model.predict(prod_pipe.transform_numpy(x_np.copy()))[0]
stats_tl = %timeit -o model_tl.predict(treelite_runtime.DMatrix(prod_pipe.transform_numpy(x_np.copy())))
speedup = stats_pd.average / stats_np.average
f'Speed-up Pandas VS Numpy x{round(speedup)}'
speedup = stats_pd.average / stats_tl.average
f'Speed-up Pandas VS Numpy&Treelite x{round(speedup)}'
```
| github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score,accuracy_score,classification_report,roc_curve,confusion_matrix
from sklearn.impute import SimpleImputer
pd.set_option('display.max_columns',None)
data_train = pd.read_csv('../titanic/train.csv')
data_train.head()
data_test = pd.read_csv('../titanic/test.csv')
data_test.head()
#shape of training dataset
data_train.shape
#checking for null values
data_train.isnull().sum()
data_test.isnull().sum()
#Removing the Cabin and Name columns from train and test data (Cabin has many null values; Name is not used)
data_train.drop(columns =['Cabin','Name'], inplace = True)
data_test.drop(columns=['Cabin','Name'], inplace = True)
#Splitting the dataset in feature and target variable
X = data_train.drop(columns = 'Survived')
y = data_train['Survived'].copy()
#checking for null values
X.isnull().sum()
#Imputing Embarked and Age columns with most frequent
imp = SimpleImputer(missing_values=np.nan,strategy='most_frequent')
X['Age'] = imp.fit_transform(X[['Age']])
X['Embarked'] = imp.fit_transform(X[['Embarked']])
#Imputing in testing data as well
data_test['Age'] = imp.fit_transform(data_test[['Age']])
data_test['Embarked'] = imp.fit_transform(data_test[['Embarked']])
data_test['Fare'] = imp.fit_transform(data_test[['Fare']])
#checking data types of each columns
X.info()
fig = plt.figure(figsize=(25, 15))
cols = 5
rows = int(np.ceil(float(X.shape[1]) / cols))
for i, column in enumerate(X.columns):
ax = fig.add_subplot(rows, cols, i + 1)
ax.set_title(column)
if X.dtypes[column] == object:
X[column].value_counts().plot(kind="bar", axes=ax)
else:
X[column].hist(axes=ax)
plt.xticks(rotation="vertical")
plt.subplots_adjust(hspace=0.7, wspace=0.2)
#Removing ticket and passenger id columns
X.drop(columns =['Ticket','PassengerId'], inplace = True)
data_test1 = data_test.drop(columns=['Ticket','PassengerId'])
#Separating categorical and continuous variables
X_cat = X.select_dtypes(include='object')
X_cont = X.select_dtypes(exclude='object')
#checking skewness
for i in X_cont:
print(i, X_cont[i].skew())
#Removing skewness
for feature in X_cont:
if abs(X_cont[feature].skew())> 1:
X_cont[feature] = X_cont[feature].apply(np.log1p)
data_test1[feature] = data_test1[feature].apply(np.log1p)
#checking skewness
for i in X_cont:
print(i, X_cont[i].skew())
#Encoding the categorical variables
le = LabelEncoder()
for i in X_cat:
X_cat[i] = le.fit_transform(X_cat[i])
for j in data_test1.select_dtypes(include='object'):
data_test1[j] = le.fit_transform(data_test1.select_dtypes(include='object')[j])
data_test1.head()
#Training dataset
X_train = pd.concat([X_cat,X_cont],axis=1, sort=False)
X_train.head()
#Applying Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train,y)
y_pred = logreg.predict(data_test1)
score_lr = logreg.score(X_train,y)
print('Score for Logistic Regression', score_lr)
#Applying Decision Tree
dt = DecisionTreeClassifier(max_depth=3,random_state=42)
dt.fit(X_train,y)
y_pred1 = dt.predict(data_test1)
score_dt = dt.score(X_train,y)
print('Score for Decision Tree', score_dt)
#Naive aggregation
from sklearn.ensemble import VotingClassifier
# Various models
clf_1 = LogisticRegression()
clf_2 = DecisionTreeClassifier(random_state=4)
clf_3 = RandomForestClassifier(random_state=4)
model_list = [('lr',clf_1),('DT',clf_2),('RF',clf_3)]
#Hard voting
voting_clf_hard = VotingClassifier(estimators=model_list,voting='hard')
# Fit the model on training data
voting_clf_hard.fit(X_train,y)
score_hard = voting_clf_hard.score(X_train,y)
print('Score for Hard Voting ', score_hard)
#Soft Voting
voting_clf_soft = VotingClassifier(estimators=model_list,voting='soft')
# Fit the model on training data
voting_clf_soft.fit(X_train,y)
score_soft = voting_clf_soft.score(X_train,y)
print('Score for Soft Voting ', score_soft)
#Hyperparameter Tuning
rfc = RandomForestClassifier(random_state=2)
param_grid = {
'n_estimators': [6,7,8],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [6,7,8,9,10],
'criterion' :['gini', 'entropy']
}
grid_search_model = GridSearchCV(rfc, param_grid=param_grid)
grid_search_model.fit(X_train, y)
grid_search_model.best_params_
#Applying Random Forest
rfc1 = RandomForestClassifier(random_state=6,criterion = 'entropy',max_depth = 8,max_features='auto',n_estimators=7)
rfc1.fit(X_train,y)
y_pred2 = rfc1.predict(data_test1)
score_rfc = rfc1.score(X_train,y)
print('Score for Random Forest ', score_rfc)
df = pd.DataFrame({'Survived':np.array(y_pred2)})
df['Survived'].value_counts()
test_dataset = pd.concat([data_test['PassengerId'],df],axis=1,sort=False)
test_dataset.head()
# Submission CSV file
submit = test_dataset
submit.to_csv('titanic submission.csv', index = False)
```
| github_jupyter |
```
# Use centralized training to compare with federated learning
epochs = 30
n_train_items = 12800
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset
from torchvision import datasets, transforms
import numpy as np
import opacus
from opacus import PrivacyEngine
import copy
# import torchcsprng as csprng
class Arguments():
def __init__(self):
self.batch_size = 250
self.test_batch_size = 64
self.epochs = epochs
self.lr = 0.25
self.momentum = 0.5
self.no_cuda = True
self.seed = 1
self.log_interval = 30
self.save_model = False
args = Arguments()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
# random_number_generator = csprng.create_random_device_generator(
# "/dev/urandom")
train_dataloader = torch.utils.data.DataLoader(
datasets.MNIST('../../data', train=True, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.batch_size
)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../../data', train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.test_batch_size
)
#---
# train_dataloader = [
# ((data), (target))
# for i, (data, target) in enumerate(train_loader)
# if i < n_train_items / args.batch_size
# ]
# test_dataloader = [
# ((data), (target))
# for i, (data, target) in enumerate(test_loader)
# if i < n_train_items / args.batch_size
# ]
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, 8, 2, padding=3)
self.conv2 = nn.Conv2d(16, 32, 4, 2)
self.fc1 = nn.Linear(32 * 4 * 4, 32)
self.fc2 = nn.Linear(32, 10)
def forward(self, x):
# x of shape [B, 1, 28, 28]
x = F.relu(self.conv1(x)) # -> [B, 16, 14, 14]
x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]
x = F.relu(self.conv2(x)) # -> [B, 32, 5, 5]
x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]
x = x.view(-1, 32 * 4 * 4) # -> [B, 512]
x = F.relu(self.fc1(x)) # -> [B, 32]
x = self.fc2(x) # -> [B, 10]
return x
def name(self):
return "SampleConvNet"
def train(args, device, model, train_dataloader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_dataloader):
data,target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.cross_entropy(output, target)
loss.backward()
optimizer.step()
o_model= copy.deepcopy(model)
optimizer_nodp = optim.SGD(o_model.parameters(), lr=args.lr)
output = o_model(data)
loss = F.cross_entropy(output, target)
loss.backward()
optimizer_nodp.step()
oms = o_model.state_dict()
ms = model.state_dict()
for key in ms:
oms[key] += (ms[key]-oms[key])*0.1+1e-6
model.load_state_dict(oms)
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * args.batch_size, len(train_dataloader.dataset),
100. * batch_idx / len(train_dataloader), loss.item()))
def test(args, device, model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.cross_entropy(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / (len(test_loader.dataset))))
#%%time
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr)
privacy_engine = PrivacyEngine(
model,
batch_size=args.batch_size,
sample_size=len(train_dataloader.dataset),
alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),
noise_multiplier=6,
max_grad_norm=1.5,)
privacy_engine.attach(optimizer)
for epoch in range(1, args.epochs + 1):
train(args, device, model, train_dataloader, optimizer, epoch)
test(args, device, model, test_loader)
if (args.save_model):
torch.save(model.state_dict(), "mnist_cnn.pt")
```
| github_jupyter |
# Data Analysis with Python
In this notebook, we use automobile data to analyse how a car's characteristics influence its price, later attempting to predict a car's sale price. Our data source is a .csv file whose data was already cleaned in another notebook. If you have questions about how the data cleaning was done, take a look at my Learn-Pandas repository.
```
import pandas as pd
import numpy as np
df = pd.read_csv('clean_auto_df.csv')
df.head()
```
<h4>Using data visualization to inspect patterns in individual features</h4>
```
# Importing the "Matplotlib" and "Seaborn" libraries
# using "%matplotlib inline" to plot the charts inside the notebook.
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
<h4>How to choose the right visualization method?</h4>
<p>When visualizing individual variables, it is important to first understand what type of variable you are dealing with. This will help us find the right visualization method for that variable. For example, we can compute the correlation between variables of type "int64" or "float64" using the "corr" method:</p>
```
df.corr()
```
The diagonal elements are always one (we will study this, more precisely the Pearson correlation, at the end of the notebook).
```
# if we want to check the correlation of only a few columns
df[['bore', 'stroke', 'compression-ratio', 'horsepower']].corr()
```
<h2>Continuous numerical variables:</h2>
<p>Continuous numerical variables can take any value within some range, and may have type "int64" or "float64". A great way to visualize them is with scatter plots with fitted lines.</p>
<p>To start understanding the (linear) relationship between an individual variable and price, we can use "regplot", which plots the scatter plot plus the fitted regression line for the data.</p>
<h4>Positive linear relationship</h4>
Let's look at the scatter plot of "engine-size" and "price"
```
# Engine size as potential predictor variable of price
sns.regplot(x="engine-size", y="price", data=df)
plt.ylim(0,)
```
<p>Note that as engine size increases, the price goes up: this indicates a direct positive correlation between these two variables. Engine size seems like a good predictor of price, since the regression line is almost a perfect diagonal.</p>
```
# We can examine the correlation between 'engine-size' and 'price' and see that it is approximately 0.87
df[["engine-size", "price"]].corr()
```
<h4>Negative linear relationship</h4>
```
# city-mpg may also be a good predictor of the price variable:
sns.regplot(x="city-mpg", y="price", data=df)
```
<p>As city-mpg goes up, the price goes down: this indicates an inverse/negative relationship between these two variables, so it can be an indicator of price.</p>
```
df[['city-mpg', 'price']].corr()
```
<h4>Weak (or no) linear relationship</h4>
```
sns.regplot(x="peak-rpm", y="price", data=df)
```
<p>The peak-rpm variable does not seem to be a good predictor of price, since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing great variability. Therefore it is not a reliable variable.</p>
```
df[['peak-rpm','price']].corr()
```
<h2>Categorical variables:</h2>
<p>These are variables that describe a 'characteristic' of a data unit and are selected from a small group of categories. Categorical variables can have type "object" or "int64". A good way to visualize categorical variables is with boxplots.</p>
```
sns.boxplot(x="body-style", y="price", data=df)
```
We see that the price distributions across the different body-style categories overlap significantly, so body-style would not be a good predictor of price. Let's examine "engine-location" and "price":
```
sns.boxplot(x="engine-location", y="price", data=df)
```
<p>Here we see that the price distributions for the two engine-location categories, front and rear, are distinct enough to consider engine-location a potentially good indicator of price.</p>
```
# drive-wheels
sns.boxplot(x="drive-wheels", y="price", data=df)
```
<p>Here we see that the price distribution differs across the drive-wheels categories, so they may be an indicator of price.</p>
<h2>Descriptive Statistics</h2>
<p>Let's first take a look at the variables using the describe method.</p>
<p>The <b>describe</b> function automatically computes basic statistics for all continuous variables. Any NaN values are automatically skipped in these statistics.</p>
This will show:
<ul>
<li>the count of that variable</li>
<li>the mean</li>
<li>the standard deviation (std)</li>
<li>the minimum value</li>
<li>the IQR (interquartile range: 25%, 50% and 75%)</li>
<li>the maximum value</li>
</ul>
```
df.describe()
# The default setting of "describe" skips object-type variables.
# We can apply the "describe" method to the 'object'-type variables as follows:
df.describe(include=['object'])
```
<h3>Value Counts</h3>
Value counts are a good way to understand how many units of each characteristic/variable we have.
We can apply the "value_counts" method to the 'drive-wheels' column.
Don't forget that "value_counts" only works on a Pandas Series, not on Pandas DataFrames.
That's why we use a single bracket, "df['drive-wheels']", and not double brackets, "df[['drive-wheels']]".
```
df['drive-wheels'].value_counts()
# we can convert the series to a dataframe:
df['drive-wheels'].value_counts().to_frame()
drive_wheels_counts = df['drive-wheels'].value_counts().to_frame()
drive_wheels_counts.rename(columns={'drive-wheels': 'value_counts'}, inplace=True)
drive_wheels_counts
# let's rename the index to 'drive-wheels':
drive_wheels_counts.index.name = 'drive-wheels'
drive_wheels_counts
# repeating the process for engine-location
engine_loc_counts = df['engine-location'].value_counts().to_frame()
engine_loc_counts.rename(columns={'engine-location': 'value_counts'}, inplace=True)
engine_loc_counts.index.name = 'engine-location'
engine_loc_counts.head()
```
<h2>Grouping</h2>
<p>The "groupby" method groups the data by different categories. The data is grouped based on one or several variables, and the analysis is performed on the individual groups.</p>
<p>For example, let's group by the "drive-wheels" variable. We see that there are 3 different drive-wheels categories.</p>
```
df['drive-wheels'].unique()
```
<p>If we want to know, on average, which drive-wheels type is most valuable, we can group by "drive-wheels" and then take the mean.</p>
<p>We can select the 'drive-wheels', 'body-style' and 'price' columns and then assign them to the "df_group_one" variable.</p>
```
df_group_one = df[['drive-wheels','body-style','price']]
# We can then compute the average price for each of the different data categories
df_group_one = df_group_one.groupby(['drive-wheels'],as_index=False).mean()
df_group_one
```
<p>From our data, it seems that rear-wheel-drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel drive are roughly the same price.</p>
<p>You can also group by multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique 'drive-wheels' and 'body-style' combinations. We can store the results in the 'grouped_test1' variable.</p>
```
df_gptest = df[['drive-wheels','body-style','price']]
grouped_test1 = df_gptest.groupby(['drive-wheels','body-style'],as_index=False).mean()
grouped_test1
```
This grouped data is much easier to visualize when turned into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the columns and another along the rows. We can convert the dataframe into a pivot table using the "pivot" method to create a pivot table from the groups.
In this case, we will keep the drive-wheels variable as the table rows and pivot body-style to become the table columns:
```
grouped_pivot = grouped_test1.pivot(index='drive-wheels',columns='body-style')
grouped_pivot
```
Sometimes we will not have data for some of the pivot cells. We can fill these missing cells with the value 0, although any other value could be used. It should be mentioned that missing data is a rather complex subject...
```
grouped_pivot = grouped_pivot.fillna(0) #fill missing values with 0
grouped_pivot
df_gptest2 = df[['body-style','price']]
grouped_test_bodystyle = df_gptest2.groupby(['body-style'],as_index= False).mean()
grouped_test_bodystyle
```
<h2>Data visualization</h2>
Let's use a heat map to visualize the relationship between body-style and price.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.pcolor(grouped_pivot, cmap='RdBu')
plt.colorbar()
plt.show()
```
<p>The heat map represents the target variable (price) proportionally to color, with the 'drive-wheels' and 'body-style' variables on the vertical and horizontal axes, respectively. This lets us visualize how price relates to 'drive-wheels' and 'body-style'.</p>
<p>The default labels don't convey useful information. Let's change that:</p>
```
fig, ax = plt.subplots()
im = ax.pcolor(grouped_pivot, cmap='RdBu')
#label names
row_labels = grouped_pivot.columns.levels[1]
col_labels = grouped_pivot.index
#move ticks and labels to the center
ax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5, minor=False)
#insert labels
ax.set_xticklabels(row_labels, minor=False)
ax.set_yticklabels(col_labels, minor=False)
#rotate label if too long
plt.xticks(rotation=90)
fig.colorbar(im)
plt.show()
```
<p>Visualization is very important in data science, and visualization packages offer great freedom</p>
<p>The main question we want to answer in this notebook is "Which are the main characteristics that have the most impact on the car price?".</p>
<p>To get a better measure of the important characteristics, we look at the correlation of these variables with the car price. In other words: how does the car price depend on this variable?</p>
<h2>Correlation and Causation</h2>
<p><b>Correlation</b>: a measure of the extent of interdependence between variables.</p>
<p><b>Causation</b>: the cause-and-effect relationship between two variables.</p>
<p>It is important to know the difference between the two, and that correlation does not imply causation. Determining correlation is much simpler than determining causation, as causation may require independent experimentation.</p>
<h3>Pearson Correlation</h3>
<p>The Pearson Correlation measures the linear dependence between two variables X and Y.</p>
<p>The resulting coefficient is a value between -1 and 1 inclusive, where:</p>
<ul>
<li><b>1</b>: total positive linear correlation.</li>
<li><b>0</b>: no linear correlation; the two variables most likely do not affect each other.</li>
<li><b>-1</b>: total negative linear correlation.</li>
</ul>
<p>Pearson Correlation is the default method of the "corr" function. As before, we can compute the Pearson Correlation of the 'int64' or 'float64' variables.</p>
```
df.corr()
```
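To make the coefficient concrete, it can also be computed by hand with NumPy and compared with `np.corrcoef`. The arrays below are made-up illustration data, not columns of the dataframe:

```python
import numpy as np

# Invented illustration data: y is roughly 2*x, so r should be close to 1
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# Pearson r = cov(x, y) / (std(x) * std(y))
r_manual = ((x - x.mean()) * (y - y.mean())).mean() / (x.std() * y.std())
r_numpy = np.corrcoef(x, y)[0, 1]
print(r_manual, r_numpy)  # both close to 1: strong positive linear correlation
```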
<b>P-value</b>:
<p>The P-value is the probability value that the correlation between these two variables is statistically significant. Normally, we choose a significance level of 0.05, which means we are 95% confident that the correlation between the variables is significant.</p>
By convention, when the
<ul>
<li>p-value is $<$ 0.001: we say there is strong evidence that the correlation is significant.</li>
<li>p-value is $<$ 0.05: there is moderate evidence that the correlation is significant.</li>
<li>p-value is $<$ 0.1: there is weak evidence that the correlation is significant.</li>
<li>p-value is $>$ 0.1: there is no evidence that the correlation is significant.</li>
</ul>
```
# We can get this information using the "stats" module of the "scipy" library
from scipy import stats
```
<h3>Wheel-base vs Price</h3>
Let's compute the Pearson correlation coefficient and the P-value between 'wheel-base' and 'price'.
```
pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
Scientific notation in the result indicates that the value is very large or very small.
In the case of 8.076488270733218e-20 it means:
8.076488270733218 times 10 to the power of minus 20 (which shifts the decimal point 20 places to the left):
0.0000000000000000008076488270733218
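A quick illustration in Python of how to read such a number:

```python
x = 8.076488270733218e-20

print(f"{x:.3e}")   # scientific notation
print(f"{x:.25f}")  # expanded decimal form with the leading zeros written out
print(x < 0.001)    # the value is far below the 0.001 significance threshold
```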
<h5>Conclusion:</h5>
<p>Since the P-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship is not extremely strong (~0.585)</p>
<h3>Horsepower vs Price</h3>
```
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
<h5>Conclusion:</h5>
<p>Since the P-value is $<$ 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1)</p>
<h3>Length vs Price</h3>
```
pearson_coef, p_value = stats.pearsonr(df['length'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
<h5>Conclusion:</h5>
<p>Since the p-value is $<$ 0.001, the correlation between length and price is statistically significant, and the linear relationship is moderately strong (~0.691).</p>
<h3>Width vs Price</h3>
```
pearson_coef, p_value = stats.pearsonr(df['width'], df['price'])
print('Pearson coefficient', pearson_coef)
print('P-value', p_value)
```
##### Conclusion:
Since the p-value is <0.001, the correlation between width and price is statistically significant, and the linear relationship is quite strong (~0.751).
<h2>ANOVA</h2>
<p>Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters:</p>
<p><b>F-test score</b>: ANOVA assumes the means of all groups are equal, computes how much the actual means deviate from that assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means.</p>
<p><b>P-value</b>: the P-value tells how statistically significant our computed score value is.</p>
<p>If our price variable is strongly correlated with the variable we are analysing, we expect ANOVA to return a sizeable F-test score and a small P-value.</p>
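Before applying it to the car data, here is a minimal synthetic illustration of what `scipy.stats.f_oneway` returns when group means clearly differ (the three groups below are invented numbers, not values from the dataframe):

```python
from scipy import stats

# Three invented groups with well-separated means and small spread
group_a = [1.0, 1.2, 0.9, 1.1]
group_b = [5.0, 5.1, 4.9, 5.2]
group_c = [9.0, 9.2, 8.8, 9.1]

f_val, p_val = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_val:.1f}, p = {p_val:.2e}")  # large F, tiny p: the means differ
```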
<h3>Drive Wheels</h3>
<p>Since ANOVA analyses the difference between different groups of the same variable, the groupby function will come in handy. Because the ANOVA algorithm averages the data automatically, we don't need to take the mean beforehand.</p>
<p>Let's see if different types of 'drive wheels' affect 'price'; we group the data.</p>
```
grouped_test2=df_gptest[['drive-wheels', 'price']].groupby(['drive-wheels'])
grouped_test2.head(2)
# We can obtain the values of the groupby object using the "get_group" method.
grouped_test2.get_group('4wd')['price']
# we can use the 'f_oneway' function in the 'stats' module to get the F-test score and the P-value
f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'], grouped_test2.get_group('4wd')['price'])
print( "ANOVA: F=", f_val, ", P =", p_val)
```
This is a great result: a large F-test score showing a strong correlation, and a P-value of almost 0 implying almost certain statistical significance. But does this mean all three tested groups are this highly correlated?
```
#### fwd and rwd
f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA: F=", f_val, ", P =", p_val )
#### 4wd and rwd
f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA: F=", f_val, ", P =", p_val)
#### 4wd and fwd
f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('fwd')['price'])
print("ANOVA: F=", f_val, ", P =", p_val)
```
<h3>Conclusion</h3>
<p>We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price.</p>
<p>As we move on to building machine learning models to automate our analysis, feeding the model variables that significantly affect our target variable will improve our model's prediction performance.</p>
# That's it!
### This is just one example of data analysis with Python
This notebook is part of a series of notebooks with content drawn from courses in which I took part as a student, listener, teacher, tutor... gathered for future reference and for sharing ideas, solutions and knowledge!
### Thank you very much for reading!
<h4>Anderson Cordeiro</h4>
You can find more content on my Medium<br> or get in touch with me :D
<a href="https://www.linkedin.com/in/andercordeiro/" target="_blank">[LinkedIn]</a>
<a href="https://medium.com/@andcordeiro" target="_blank">[Medium]</a>
| github_jupyter |
# Text Generation with the Trained NarrativeKoGPT2
## 1. Mounting Google Drive
- Link Colab to the Google Drive directory where the model file and training data are stored.
### 1.1 Mount Google Drive
Run the code below, click the URL that appears, and enter the authentication code.
```
from google.colab import drive
drive.mount('/content/drive')
```
**Check the NarrativeKoGPT2 path under the Colab directory**
```
!ls drive/'My Drive'/'Colab Notebooks'/
```
**Install the required packages**
```
!pip install -r drive/'My Drive'/'Colab Notebooks'/NarrativeKoGPT2/requirements.txt
```
**Add the system path**
```
import os
import sys
sys.path.append('drive/My Drive/Colab Notebooks/')
```
## 2.KoGPT2 Text Generation
### 2.1.Import Package
```
import random
import torch
from torch.utils.data import DataLoader # data loader
from gluonnlp.data import SentencepieceTokenizer
from NarrativeKoGPT2.kogpt2.utils import get_tokenizer
from NarrativeKoGPT2.kogpt2.utils import download, tokenizer
from NarrativeKoGPT2.model.torch_gpt2 import GPT2Config, GPT2LMHeadModel
from NarrativeKoGPT2.util.data import NovelDataset
import gluonnlp
```
### 2.2. koGPT-2 Config
```
ctx= 'cpu'#'cuda' #'cpu' # training device, CPU or GPU; on Colab use the GPU
cachedir='~/kogpt2/' # KoGPT-2 model download path
epoch =200 # number of training epochs
save_path = 'drive/My Drive/Colab Notebooks/NarrativeKoGPT2/checkpoint/'
load_path = 'drive/My Drive/Colab Notebooks/NarrativeKoGPT2/checkpoint/narrativeKoGPT2_checkpoint.tar'
#use_cuda = True # for using the GPU inside Colab
pytorch_kogpt2 = {
'url':
'https://kobert.blob.core.windows.net/models/kogpt2/pytorch/pytorch_kogpt2_676e9bcfa7.params',
'fname': 'pytorch_kogpt2_676e9bcfa7.params',
'chksum': '676e9bcfa7'
}
kogpt2_config = {
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 1024,
"vocab_size": 50000
}
```
### 2.3 Model and Vocab Download
```
# download model
model_info = pytorch_kogpt2
model_path = download(model_info['url'],
model_info['fname'],
model_info['chksum'],
cachedir=cachedir)
# download vocab
vocab_info = tokenizer
vocab_path = download(vocab_info['url'],
vocab_info['fname'],
vocab_info['chksum'],
cachedir=cachedir)
```
### 2.4.KoGPT-2 Model Vocab
**Loading the model for inference or to resume training**
**Saving**
```python
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
```
**Loading**
``` python
model = TheModelClass(*args, **kwargs)
optimizer = TheOptimizerClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()
```
```
# Device setup
device = torch.device(ctx)
# Load the saved checkpoint
checkpoint = torch.load(load_path, map_location=device)
# Declare GPT2LMHeadModel for KoGPT-2 language-model training
kogpt2model = GPT2LMHeadModel(config=GPT2Config.from_dict(kogpt2_config))
kogpt2model.load_state_dict(checkpoint['model_state_dict'])
kogpt2model.eval()
vocab_b_obj = gluonnlp.vocab.BERTVocab.from_sentencepiece(vocab_path,
mask_token=None,
sep_token=None,
cls_token=None,
unknown_token='<unk>',
padding_token='<pad>',
bos_token='<s>',
eos_token='</s>')
```
### 2.5. Tokenizer
```
tok_path = get_tokenizer()
model, vocab = kogpt2model, vocab_b_obj
tok = SentencepieceTokenizer(tok_path)
```
### 2.6. NarrativeKoGPT-2 Text Generation
```
sent =''
while 1:
tmp_sent = input('Next...: ')
sent = sent+tmp_sent
toked = tok(sent)
count = 0
generated_text =''
input_size = 50
if len(toked) >1022:
break
while 1:
input_ids = torch.tensor([vocab[vocab.bos_token],] + vocab[toked]).unsqueeze(0)
predicts = model(input_ids)
pred = predicts[0]
# print('predicts:', torch.argmax(pred, axis=-1).squeeze())
gen = vocab.to_tokens(torch.argmax(pred, axis=-1).squeeze().tolist())[-1]
if gen == '</s>':
print('to_tokens:',vocab.to_tokens(torch.argmax(pred, axis=-1).squeeze().tolist()))
if gen == '.' or count>input_size:
sent += gen.replace('▁', ' ')
generated_text += gen.replace('▁', ' ')
sent += '\n'
generated_text += '\n'
toked = tok(sent)
count =0
break
# print('to_tokens:',vocab.to_tokens(torch.argmax(pred, axis=-1).squeeze().tolist()))
# if count >= input_size:
# break
sent += gen.replace('▁', ' ')
generated_text += gen.replace('▁', ' ')
# print(generated_text)
toked = tok(sent)
count += 1
print(generated_text)
generated_text=''
print(sent)
```
| github_jupyter |
# Liver Disorders Data Set Arm Identification
# Importing the required libraries
```
import pandas as pd
import numpy
import sys
%matplotlib inline
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
import numpy as np
import time
import sklearn
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png', 'pdf')
# fix random seed for reproducibility
plt.rcParams['figure.figsize'] = (18, 12)
from sklearn.datasets import make_blobs  # moved from sklearn.datasets.samples_generator in newer scikit-learn
from pandas.plotting import parallel_coordinates
```
# Data Set
## Load the dataset
To download the dataset from the web, use this link: https://archive.ics.uci.edu/ml/machine-learning-databases/liver-disorders/bupa.data
```
# Link to data on the disk
raw_data = np.genfromtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/liver-disorders/bupa.data", delimiter=',')
print('The dataset has been loaded')
# Print the number of examples and the dimension of this dataset.
# Notice that the last feature in this dataset is the target.
data = raw_data
size = np.shape(data)
data = data[~numpy.isnan(data).any(axis=1)]
print('(Number of examples, number of features) = ', size)
np.random.shuffle(data)
X = data[:, 0:-1]
y = data[:, -1]
Input_toNN = X.shape[1]
OutPutNN = (np.unique(y)).shape[0]
```
## Preprocess the data
```
df = pd.DataFrame(X,y)
df.describe()
```
## Prepare the data for classification
```
dataset = df.values
from sklearn import preprocessing
# for training data
X = dataset.astype(float)
y[y == 1] = 0
y[y == 2] = 1
features = preprocessing.scale(X)
target = y
X.shape
```
## Feature Selection
```
from sklearn.model_selection import train_test_split
# Extracting 40% validation data
X_train, X_test, y_train, y_test = train_test_split(
features, target, test_size=0.40, random_state=17)
```
## Preprocessing
```
print('Number of training examples',len(X_train))
print('Number of validation examples',len(X_test))
from sklearn import preprocessing
# normalize the data attributes
X_train = preprocessing.normalize(X_train)
#X_test = preprocessing.normalize(X_test)
# standardize the data attributes
X_train = preprocessing.scale(X_train)
#X_test = preprocessing.scale(X_test)
np.save("X_train_Connectionist", X_train)
np.save("X_test_Connectionist", X_test)
np.save("y_train_Connectionist", y_train)
np.save("y_test_Connectionist", y_test)
```
# Neural Networks
```
import seaborn as sns
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2
from keras.utils import np_utils
#The layers where we want to prune
N1 = 20 # 20
N2 = 20
labelsTrain = np_utils.to_categorical(y_train)
model = Sequential()
model.add(Dense(N1,
input_shape=(Input_toNN,),
activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(N2, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(OutPutNN, activation="softmax"))
model.compile(loss='categorical_crossentropy',
metrics=['accuracy'],
optimizer='adam')
start_time = time.time()
model.fit(X_train, labelsTrain, verbose=0, batch_size=1, epochs=300)
print("The time for training NN is %s seconds " % (time.time() - start_time))
loss, accuracy = model.evaluate(X_train, labelsTrain, batch_size=1, verbose=0)
accuracy_training_Model = accuracy
print("Test fraction correct (NN-loss) = {:.2f}".format(loss))
print("Test fraction correct (NN-Accuracy) = {:.2f}".format(accuracy))
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("accuracy_testing_Model =",accuracy_testing_Model)
model.save('my_model.h5')
import scipy.io
weights = model.get_weights()
scipy.io.savemat('parameters.mat', dict(w=weights))
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
from keras.models import load_model
modelBuckup = load_model('my_model.h5')
```
## The Architecture of the Model
```
modelBuckup.summary()
# specify the layer should have N1 and N2 weights
L = 2
# Set Threshold and constant
Threshold = 0
constant=1
```
Set the number of rounds; it will be the same for all methods.
```
Round = 1800 # will be the same for all methods 1800
```
# Random policy
Random policy, or epsilon-first strategy: during the first T rounds, sample a random arm (uniform sampling); afterwards, choose the best arm(s).
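The epsilon-first scheme can be sketched in a few lines. This is a minimal standalone illustration, assuming hypothetical `rewards[i]()` samplers in place of the notebook's prune-and-evaluate reward:

```python
import random

def epsilon_first(rewards, n_rounds, explore_rounds, seed=0):
    """Epsilon-first strategy: uniformly explore for `explore_rounds`,
    then always pull the arm with the highest running-average reward.
    `rewards[i]()` is a hypothetical sampler returning a reward for arm i."""
    rng = random.Random(seed)
    n_arms = len(rewards)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    history = []
    for t in range(n_rounds):
        if t < explore_rounds:
            arm = rng.randrange(n_arms)                       # uniform exploration
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit best arm
        r = rewards[arm]()
        counts[arm] += 1
        # incremental mean, same form as the notebook's update:
        # m_new = (n-1)/n * m_old + r/n
        means[arm] += (r - means[arm]) / counts[arm]
        history.append(arm)
    return means, history
```

After the exploration phase the chosen arm is fixed (for stationary rewards), which is why the notebook can read off the best weight index once the T rounds are finished.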
## Pruning the Weights
```
modelBuckup = load_model('my_model.h5')
startP = time.time()
N_arry = np.zeros(N1*N2)
noExamples = len(X_train)
NL = np.zeros(Round)
RP = np.zeros(Round)
Avg_Accumaltive_RP = np.zeros(N1*N2)
p_reshape = Avg_Accumaltive_RP.reshape(N1,N2)
Count_RP = np.ones(N1*N2)
import random
count = 0
# uniform sampling
for j in range(Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
ind = random.randint(0,N1*N2-1)
i,k = np.unravel_index(ind, p_reshape.shape)
#print(i,k)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
Count_RP[ind]=Count_RP[ind]+1
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
#print(delta)
reward = max(0,Threshold + delta)/constant
#print(reward)
val = Count_RP[ind]
Avg_Accumaltive_RP[ind] = (val-1)/val * Avg_Accumaltive_RP[ind] + 1/val * reward
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
RP[j]=ind
# Exploting
# The weight that has been chosen by Random policy method
reshapeRP = Avg_Accumaltive_RP.reshape(N1,N2)
P = np.argmax(Avg_Accumaltive_RP)
endP = time.time()
print("Execution time = ",endP - startP)
Pi,Pj = np.unravel_index(P, reshapeRP.shape)
print("\n The index of chosen W_ji is = ", Pj, Pi)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing; starting pruning:')
rewards = Avg_Accumaltive_RP
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Random_Policy_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Random_Policy_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Random_Policy_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Random_Policy_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('Random_Policy_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('Random_Policy_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('Random_Policy_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('Random_Policy_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
```
### On testing dataset (Unseen dataset)
```
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('Random_Policy_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruned 5 on testing data = ", accuracy_testing_Model_05)
model = load_model('Random_Policy_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruned 10 on testing data = ", accuracy_testing_Model_10)
model = load_model('Random_Policy_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruned 25 on testing data = ", accuracy_testing_Model_25)
model = load_model('Random_Policy_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruned 50 on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25
, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
```
# Epsilon Greedy
## Pruning the Weights
```
modelBuckup = load_model('my_model.h5')
startG = time.time()
N_arry = np.zeros(N1*N2)
#noExamples = 100
NL = np.zeros(Round)
ep = np.zeros(Round)
Avg_Accumaltive_R_EGN = np.zeros(N1*N2)
p_reshape = Avg_Accumaltive_R_EGN.reshape(N1,N2)
Count_EGN = np.ones(N1*N2)
import random
epsilon=0.5
count = 0
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
# Prune the weight in the layer
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][j]= temp
Avg_Accumaltive_R_EGN[count] = reward
modelBuckup.set_weights(All_weights)
count = count+1
for j in range(N1*N2-1, Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
# Prune the weight in the layer
if (epsilon>random.uniform(0, 1)):
ind = np.argmax(Avg_Accumaltive_R_EGN)
else:
ind = random.randint(0,N1*N2-1)
i,k = np.unravel_index(ind, p_reshape.shape)
#print(i,k)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
Count_EGN[ind]=Count_EGN[ind]+1
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
#print(delta)
reward = max(0,Threshold + delta)/constant
#print(reward)
val = Count_EGN[ind]
Avg_Accumaltive_R_EGN[ind] = (val-1)/val * Avg_Accumaltive_R_EGN[ind] + 1/val * reward
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
ep[j]=ind
endG = time.time()
print("Execution time ",endG - startG)
```
## The single weight that has been chosen by Epsilon Greedy method
In practice we would choose the k weights with the highest rewards, but the purpose of this notebook is to compare the computation time of each method and its ability to choose the best arm against the direct method.
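The k-best selection described above can also be done in one shot with `np.argsort`, instead of the repeated `argmax` plus `-100` sentinel used in the pruning cells; a small sketch (the function name is illustrative, not from the notebook):

```python
import numpy as np

def top_k_weight_indices(avg_rewards, k, shape):
    """Return the (row, col) indices of the k weights with the highest
    average reward in a flat reward array of length shape[0]*shape[1]."""
    flat = np.argsort(avg_rewards)[::-1][:k]      # indices of the k largest rewards
    rows, cols = np.unravel_index(flat, shape)    # map flat indices back to the weight matrix
    return list(zip(rows.tolist(), cols.tolist()))
```

This avoids mutating the reward array, so the same rewards can be reused when pruning 5%, 10%, 25%, and 50%.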
```
reshapeG = Avg_Accumaltive_R_EGN.reshape(N1,N2)
G = np.argmax(Avg_Accumaltive_R_EGN)
Gi,Gj = np.unravel_index(G, reshapeG.shape)
print("The index of chosen W_ji is = ", Gj, Gi)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing; starting pruning:')
rewards = Avg_Accumaltive_R_EGN
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Epsilon_Greedy_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Epsilon_Greedy_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Epsilon_Greedy_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Epsilon_Greedy_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('Epsilon_Greedy_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('Epsilon_Greedy_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('Epsilon_Greedy_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('Epsilon_Greedy_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
```
### On testing dataset (Unseen dataset)
```
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('Epsilon_Greedy_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruned 5 on testing data = ", accuracy_testing_Model_05)
model = load_model('Epsilon_Greedy_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruned 10 on testing data = ", accuracy_testing_Model_10)
model = load_model('Epsilon_Greedy_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruned 25 on testing data = ", accuracy_testing_Model_25)
model = load_model('Epsilon_Greedy_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruned 50 on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25
, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
```
# UCB1
## Pruning Weights
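The cell below scores each weight with an upper-confidence bound. For reference, a minimal standalone sketch of the standard UCB1 rule; note the notebook's `padding` term uses the total pull count in place of the usual `ln t`, which explores more aggressively, and the `rewards[a]()` samplers here are hypothetical:

```python
import math

def ucb1(rewards, n_rounds):
    """Standard UCB1: pull every arm once, then repeatedly pick the arm
    maximising mean + sqrt(2 * ln(t) / n_a).
    `rewards[a]()` is a hypothetical sampler returning rewards in [0, 1]."""
    n_arms = len(rewards)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for a in range(n_arms):                      # initialisation: pull each arm once
        means[a] = rewards[a]()
        counts[a] = 1
    for t in range(n_arms, n_rounds):
        ucb = [means[a] + math.sqrt(2 * math.log(t) / counts[a])
               for a in range(n_arms)]
        a = max(range(n_arms), key=lambda i: ucb[i])
        r = rewards[a]()
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]   # incremental mean update
    return means, counts
```

The exploration bonus shrinks as an arm is pulled more often, so under-sampled arms keep getting revisited while clearly good arms dominate the pull counts.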
```
modelBuckup = load_model('my_model.h5')
startU = time.time()
N_arry = np.zeros(N1*N2)
NL = np.zeros(Round)
Avg_Accumaltive_UCB = np.zeros(N1*N2)
Count_UCB = np.ones(N1*N2)
UCB1 = np.zeros(Round)
p_reshape = Avg_Accumaltive_UCB.reshape(N1,N2)
count = 0
import random
tau=4
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][j]= temp
Avg_Accumaltive_UCB[count] = reward
modelBuckup.set_weights(All_weights)
count = count+1
for j in range(N1*N2-1, Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
padding = np.sqrt(2*Count_UCB.sum()/Count_UCB)
ucb = Avg_Accumaltive_UCB + padding
ind = np.argmax(ucb)
Count_UCB[ind] = Count_UCB[ind] + 1
i,k = np.unravel_index(ind, p_reshape.shape)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
val = Count_UCB[ind]
Avg_Accumaltive_UCB[ind] = (val-1)/val * Avg_Accumaltive_UCB[ind] + 1/val * reward
modelBuckup.set_weights(All_weights)
UCB1[j]=ind
endU = time.time()
print("Execution time ",endU - startU)
```
## The single weight that has been chosen by UCB1 method
In practice we would choose the k weights with the highest rewards, but the purpose of this notebook is to compare the computation time of each method and its ability to choose the best arm against the direct method.
```
reshapeU = Avg_Accumaltive_UCB.reshape(N1,N2)
U = np.argmax(Avg_Accumaltive_UCB)
Ui,Uj = np.unravel_index(U, reshapeU.shape)
print("The index of chosen W_ji is = ", Uj, Ui)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing; starting pruning:')
rewards = Avg_Accumaltive_UCB
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('UCB1_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('UCB1_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('UCB1_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('UCB1_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('UCB1_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('UCB1_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('UCB1_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('UCB1_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('UCB1_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruning 5% on testing data = ", accuracy_testing_Model_05)
model = load_model('UCB1_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model_10)
model = load_model('UCB1_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruning 25% on testing data = ", accuracy_testing_Model_25)
model = load_model('UCB1_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruning 50% on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
```
# Thompson Sampling
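Before the pruning loop, it may help to see Thompson Sampling in isolation: keep per-arm success/failure counts and, each round, play the arm whose Beta(1+success, 1+failure) sample is largest. A minimal sketch with illustrative Bernoulli arms (the success probabilities are made up for the example; they are not the pruning rewards used below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Bernoulli arms; the success probabilities are illustrative only.
true_p = [0.2, 0.5, 0.8]
K = len(true_p)
success = np.zeros(K)
failure = np.zeros(K)

for t in range(2000):
    # one Beta(1+success, 1+failure) sample per arm, play the largest
    samples = rng.beta(1 + success, 1 + failure)
    arm = int(np.argmax(samples))
    if rng.random() < true_p[arm]:   # Bernoulli reward
        success[arm] += 1
    else:
        failure[arm] += 1

pulls = success + failure
print(pulls)  # the best arm (index 2) accumulates most of the pulls
```

The same posterior-sampling step appears below as `np.argmax(np.random.beta(1+success, 1+failure))`, with the reward defined by the change in loss after zeroing a weight.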
## Pruning weights
```
modelBuckup = load_model('my_model.h5')
startT = time.time()
N_arry = np.zeros(N1*N2)
NL = np.zeros(Round)
Avg_Accumaltive_TS = np.zeros(N1*N2)
Count_TS = np.ones(N1*N2)
TS = np.zeros(Round)
p_reshape = Avg_Accumaltive_TS.reshape(N1,N2)
count = 0
success = np.zeros(N1*N2)
failure = np.zeros(N1*N2)
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
# Prune the neuron in the layer
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
if(delta>0):
reward = 1
success[count] = success[count]+1
else:
reward = 0
failure[count] = failure[count]+1
All_weights[2][i][j]= temp
Avg_Accumaltive_TS[count] = reward
modelBuckup.set_weights(All_weights)
count = count+1
for j in range(N1*N2-1, Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
# Prune the neuron in the layer
ind = np.argmax(np.random.beta(1+success, 1+failure))
Count_TS[ind] = Count_TS[ind] + 1
i,k = np.unravel_index(ind, p_reshape.shape)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
if(delta>0):
reward = 1
success[ind] = success[ind]+1
else:
reward = 0
failure[ind] = failure[ind]+1
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
val = Count_TS[ind]
Avg_Accumaltive_TS[ind] = (val-1)/val * Avg_Accumaltive_TS[ind] + 1/val * reward
TS[j]=ind
endT = time.time()
print("Execution time ",endT - startT)
```
## The single weight chosen by the Thompson Sampling method
In practice we would prune the k weights with the highest rewards; the purpose of this notebook, however, is to compare the computation time of each method and to show which method is able to identify the best arm compared to the direct method.
```
reshapeT = Avg_Accumaltive_TS.reshape(N1,N2)
T = np.argmax(Avg_Accumaltive_TS)
Ti,Tj = np.unravel_index(T, reshapeT.shape)
print("The index of the chosen W_ji is ", Tj, Ti)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing, starting pruning:')
rewards = Avg_Accumaltive_TS
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Thompson_Sampling_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Thompson_Sampling_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Thompson_Sampling_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Thompson_Sampling_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('Thompson_Sampling_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('Thompson_Sampling_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('Thompson_Sampling_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('Thompson_Sampling_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('Thompson_Sampling_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruning 5% on testing data = ", accuracy_testing_Model_05)
model = load_model('Thompson_Sampling_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model_10)
model = load_model('Thompson_Sampling_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruning 25% on testing data = ", accuracy_testing_Model_25)
model = load_model('Thompson_Sampling_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruning 50% on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
```
# Successive Rejects
For a K-armed bandit, Successive Rejects operates in (K − 1) phases. At the end of each phase, the arm with the lowest average reward is discarded. Thus, at the end of phase (K − 1) only one arm survives, and this arm is recommended.
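The phase lengths n_k can be computed from the budget alone. A minimal sketch of that bookkeeping, following Audibert & Bubeck's definition with log_bar(K) = 1/2 + sum over i=2..K of 1/i (the function name and the example budget are illustrative):

```python
import numpy as np

def sr_phase_lengths(K, budget):
    """Phase lengths n_k for Successive Rejects (Audibert & Bubeck, 2010).

    In phase k (k = 1..K-1) every surviving arm is pulled until it has n_k
    pulls in total; then the arm with the lowest empirical mean is dropped.
    """
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    n = [0]  # n_0 = 0 by convention
    for k in range(1, K):
        n.append(int(np.ceil((budget - K) / (log_bar * (K + 1 - k)))))
    return n

lengths = sr_phase_lengths(K=4, budget=100)
print(lengths)  # phase targets grow as arms are rejected
```

The code below computes the same quantities with `Log` and the list `A`; each phase pulls every surviving weight, then marks the lowest-scoring one as rejected.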
## Pruning weights
```
modelBuckup = load_model('my_model.h5')
startS = time.time()
N_arry = np.zeros(N1*N2)
NL = np.zeros(noExamples)
sr = []
Avg_Accumaltive_SR = np.zeros(N1*N2)
Avg_Accumaltive_SR2 = np.zeros(N1*N2)
Avg_Accumaltive_SR1 = np.zeros(N1*N2)
p_reshape = Avg_Accumaltive_SR2.reshape(N1,N2)
check_array = np.ones((N1,N2))
Count_SR = np.ones(N1*N2)
A = [0]
Nk = []
K = N1*N2
Log = 0.5
# log_bar(K) = 1/2 + sum_{i=2}^{K} 1/i  (Audibert & Bubeck, 2010)
for k in range(K-1):
d = k+2
Log = Log + 1/d
for k in range(K-2):
d = k+1
nK = int(np.floor(1/Log * (Round-K)/(K+1-d)))
if nK!=0:
A.append(nK)
A.sort(reverse=False)
#print("The round of the phases : ",A)
g=0
for a in A:
h = a - g
g = a
Nk.append(h)
count=0
#print(a)
for n in range(h):
c=0
for i in range(N1):
for j in range(N2):
if check_array[i][j]==1:
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][j] = temp
modelBuckup.set_weights(All_weights)
val = Count_SR[c]
#print(reward)
Avg_Accumaltive_SR[c] = (val-1)/val * Avg_Accumaltive_SR[c] + 1/val * reward
All_weights[2][i][j] = temp
modelBuckup.set_weights(All_weights)
count = count+1
c = c + 1
Avg_Accumaltive_SR2=Avg_Accumaltive_SR2+Avg_Accumaltive_SR
Avg_Accumaltive_SR1=Avg_Accumaltive_SR2.copy()
ind = np.argmin(Avg_Accumaltive_SR2)
Avg_Accumaltive_SR2[ind] = 100
#print(Avg_Accumaltive_SR)
s,t = np.unravel_index(ind, p_reshape.shape)
###check_array[s][t]=0
sr.append(ind)
endS = time.time()
print("Execution time ",endS - startS)
#print("A = ", A)
#print("Nk = ", Nk)
Avg_Accumaltive_SR3 = np.abs(100-Avg_Accumaltive_SR2)
```
## The single weight chosen by the Successive Rejects method
In practice we would prune the k weights with the highest rewards; the purpose of this notebook, however, is to compare the computation time of each method and to show which method is able to identify the best arm compared to the direct method.
```
reshapeS = Avg_Accumaltive_SR3.reshape(N1,N2)
S = np.argmax(Avg_Accumaltive_SR3)
Si,Sj = np.unravel_index(S, reshapeS.shape)
print("The index of the chosen W_ji is ", Sj, Si)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing, starting pruning:')
rewards = Avg_Accumaltive_SR3
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Successive_Rejects_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Successive_Rejects_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Successive_Rejects_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Successive_Rejects_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('Successive_Rejects_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('Successive_Rejects_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('Successive_Rejects_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('Successive_Rejects_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('Successive_Rejects_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruning 5% on testing data = ", accuracy_testing_Model_05)
model = load_model('Successive_Rejects_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model_10)
model = load_model('Successive_Rejects_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruning 25% on testing data = ", accuracy_testing_Model_25)
model = load_model('Successive_Rejects_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruning 50% on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
```
# Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence
The implementation is based on:
- K. Terayama, H. Iwata, M. Araki, Y. Okuno, K. Tsuda, "Machine Learning Accelerates MD-based Binding-Pose Prediction between Ligands and Proteins," Bioinformatics, 2017.
- V. Gabillon, M. Ghavamzadeh, A. Lazaric, "Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence," NIPS, pp. 3212-3220, 2012.
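The selection rule of the fixed-budget variant (UGapE-b) can be sketched in isolation: each arm gets a confidence interval [L_k, U_k] around its empirical mean, B_k upper-bounds the gap of arm k, and the algorithm pulls the less-explored of the best candidate J_t and its strongest challenger u_t. A minimal sketch (the function name and the example means, counts, and budget are illustrative):

```python
import numpy as np

def ugape_select(means, counts, budget, a=0.25):
    """One UGapE-b selection step (Gabillon et al., 2012) - a minimal sketch.

    means  : empirical mean reward of each arm
    counts : number of pulls of each arm (all >= 1)
    returns: index of the arm to pull next
    """
    K = len(means)
    beta = np.sqrt(a * (budget - K) / counts)      # exploration bonus per arm
    U, Lo = means + beta, means - beta             # confidence bounds
    order = np.argsort(U)
    top, second = order[-1], order[-2]
    # B_k = max_{j != k} U_j - L_k : upper bound on the gap of arm k
    max_U_except = np.where(np.arange(K) == top, U[second], U[top])
    B = max_U_except - Lo
    J = int(np.argmin(B))                          # current best-arm candidate
    u = int(np.argmax(np.where(np.arange(K) == J, -np.inf, U)))  # challenger
    # pull whichever of {J, u} has the wider confidence interval
    return J if beta[J] >= beta[u] else u

arm = ugape_select(np.array([0.6, 0.5, 0.2]), np.array([10.0, 5.0, 8.0]), budget=200)
print(arm)  # here the less-explored challenger is pulled
```

The code below implements the same U/L/B bookkeeping over the flattened weight matrix, with per-arm pull caps added on top.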
## Pruning weights
```
a=0.25
def beta(s, budget, K, a, mean_list=None, automatic = False, auto_para = 1, H = 1):
if automatic:
alpha = auto_para
a = alpha* (budget - K)/(4*H)
return np.sqrt(a / s)
else:
return np.sqrt(a*(budget - K) / s)
def U(i, t, budget, pulls, a):
K = len(pulls)
return np.mean(pulls[i]) + beta(len(pulls[i]), budget, K, a)
def L(i, t, budget, pulls, a):
K = len(pulls)
return np.mean(pulls[i]) - beta(len(pulls[i]), budget, K, a)
def B(i, t, budget, pulls, a, K):
list_woi = list(range(K))  # range objects have no pop in Python 3
list_woi.pop(i)
return np.max([U(j, t, budget, pulls, a) - L(i, t, budget, pulls, a) for j in list_woi])
def calc_B(k, U_l, L_l, K, max_U_i_t_index, max_U_i_t, max_U_i_eq_k):
if k == max_U_i_t_index:
return max_U_i_eq_k - L_l[k]
else:
return max_U_i_t - L_l[k]
import copy as cp
modelBuckup = load_model('my_model.h5')
startUB = time.time()
N_arry = np.zeros(N1*N2)
NL = np.zeros(Round)
Avg_Accumaltive_UB = np.zeros(N1*N2)
Count_UB = np.ones(N1*N2)
UB = np.zeros(Round)
p_reshape = Avg_Accumaltive_UB.reshape(N1,N2)
count = 0
import random
K = N1*N2
# Play each arm once
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][j]= temp
Avg_Accumaltive_UB[count] = reward
modelBuckup.set_weights(All_weights)
count = count+1
#Exploration loop
for j in range(N1*N2-1, Round):
mean_list = [np.mean(Avg_Accumaltive_UB[i]) for i in range(K)]
beta_list = [beta(Count_UB[i], Round, K, a, mean_list) for i in range(K)]  # width shrinks with the pull count
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
U_list = [mean_list[i] + beta_list[i] for i in range(K)]
L_list = [mean_list[i] - beta_list[i] for i in range(K)]
max_U_i_t_index = np.argmax(U_list)
max_U_i_t = U_list[max_U_i_t_index]
U_wo_max = cp.copy(U_list)
U_wo_max.pop(max_U_i_t_index)  # list.pop returns the removed element, so pop first, then take the max
max_U_i_eq_k = np.max(U_wo_max)
B_list = [calc_B(k, U_list, L_list, K, max_U_i_t_index, max_U_i_t, max_U_i_eq_k) for k in range(K)]
J_t = np.argmin([B_list[i] if Count_UB[i] < 20 else 10**10 for i in range(K)])
list_woJt = list(range(K))
list_woJt.pop(J_t)
u_t = list_woJt[np.argmax([U_list[i] if Count_UB[i] < 20 else -10**10 for i in list_woJt])]
l_t = J_t
I_t = [l_t, u_t][np.argmax([beta_list[i] for i in [l_t, u_t]])]
i,k = np.unravel_index(I_t, p_reshape.shape)
Count_UB[I_t] = Count_UB[I_t] + 1
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
val = Count_UB[I_t]
Avg_Accumaltive_UB[I_t] = (val-1)/val * Avg_Accumaltive_UB[I_t] + 1/val * reward
modelBuckup.set_weights(All_weights)
UB[j]=I_t
endUB = time.time()
print("Execution time ",endUB - startUB)
Avg_Accumaltive_UB3 = np.abs(100-Avg_Accumaltive_UB)
```
## The single weight chosen by the Unified Approach (UGapE) method
In practice we would prune the k weights with the highest rewards; the purpose of this notebook, however, is to compare the computation time of each method and to show which method is able to identify the best arm compared to the direct method.
```
reshapeS = Avg_Accumaltive_UB3.reshape(N1,N2)
S = np.argmax(Avg_Accumaltive_UB3)
Si,Sj = np.unravel_index(S, reshapeS.shape)
print("The index of the chosen W_ji is ", Sj, Si)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing, starting pruning:')
rewards = Avg_Accumaltive_UB3
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Unified_Approach_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Unified_Approach_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Unified_Approach_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Unified_Approach_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('Unified_Approach_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('Unified_Approach_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('Unified_Approach_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('Unified_Approach_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('Unified_Approach_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruned 5 on testing data = ", accuracy_testing_Model_05)
model = load_model('Unified_Approach_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model_10)
model = load_model('Unified_Approach_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruning 25% on testing data = ", accuracy_testing_Model_25)
model = load_model('Unified_Approach_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruning 50% on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25
, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
```
# Softmax
## Pruning the Weights
```
modelBuckup = load_model('my_model.h5')
startSM = time.time()
N_arry = np.zeros(N1*N2)
#noExamples = 100
NL = np.zeros(Round)
SM = np.zeros(Round)
Avg_Accumaltive_R_SM= np.zeros(N1*N2)
p_reshape = Avg_Accumaltive_R_SM.reshape(N1,N2)
Count_SM = np.ones(N1*N2)
import random
tau=4
count = 0
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
# Prune the neuron in the layer
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][j]= temp
Avg_Accumaltive_R_SM[count] = reward
modelBuckup.set_weights(All_weights)
count = count+1
for j in range(N1*N2-1, Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
prob = np.exp(Avg_Accumaltive_R_SM/tau)
prb = prob/prob.sum()  # avoid shadowing the built-in sum
ind = np.random.choice(np.arange(0, N1*N2), p=prb.reshape(N1*N2))  # np, not numpy, is the imported name
i,k = np.unravel_index(ind, p_reshape.shape)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
Count_SM[ind]=Count_SM[ind]+1
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
#print(delta)
reward = max(0,Threshold + delta)/constant
#print(reward)
val = Count_SM[ind]
Avg_Accumaltive_R_SM[ind] = (val-1)/val * Avg_Accumaltive_R_SM[ind] + 1/val * reward
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
SM[j]=ind
endSM = time.time()
print("Execution time ",endSM - startSM)
```
## The single weight that has been chosen by Softmax method
In practice we would prune the k weights with the highest rewards, but the purpose of this notebook is to compare the computation time of the methods and to show which of them is able to choose the best arm compared to the direct method.
```
reshapeSM = Avg_Accumaltive_R_SM.reshape(N1,N2)
G = np.argmax(Avg_Accumaltive_R_SM)
Gi,Gj = np.unravel_index(G, reshapeSM.shape)
print("The index of chosen W_ji is = ", Gj, Gi)
```
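The softmax (Boltzmann) selection driving the loop above can be isolated into a small helper. This is a minimal sketch, not part of the notebook's pipeline; `softmax_select` and its arguments are hypothetical names. It mirrors the `np.exp(rewards/tau)` normalisation: a lower temperature `tau` makes the choice greedier.

```python
import numpy as np

def softmax_select(avg_rewards, tau=4.0, rng=None):
    # Boltzmann exploration: arms with higher average reward get
    # exponentially higher selection probability; tau is the temperature.
    rng = np.random.default_rng() if rng is None else rng
    prefs = np.exp(np.asarray(avg_rewards, dtype=float) / tau)
    probs = prefs / prefs.sum()
    return int(rng.choice(len(probs), p=probs)), probs

# With a near-zero temperature the selection is almost greedy.
idx, probs = softmax_select([0.1, 0.9, 0.2], tau=0.05)
```

For numerical stability one would usually subtract `max(avg_rewards)` before exponentiating; the notebook's rewards are small enough that overflow is not a concern here.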
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing, start pruning:')
rewards = Avg_Accumaltive_R_SM
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Softmax_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Softmax_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Softmax_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('Softmax_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('Softmax_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('Softmax_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('Softmax_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('Softmax_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
```
### On testing dataset (Unseen dataset)
```
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('Softmax_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruning 5% on testing data = ", accuracy_testing_Model_05)
model = load_model('Softmax_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model_10)
model = load_model('Softmax_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruning 25% on testing data = ", accuracy_testing_Model_25)
model = load_model('Softmax_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruning 50% on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25
, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
```
# Win–Stay, Lose–Shift (Pursuit method)
## Pruning the Weights
```
modelBuckup = load_model('my_model.h5')
startWSLS = time.time()
N_arry = np.zeros(N1*N2)
#noExamples = 100
NL = np.zeros(Round)
WSLS = np.zeros(Round)
Avg_Accumaltive_R_WSLS= np.zeros(N1*N2)
Probs = 1/(N1*N2) * np.ones(N1*N2)
p_reshape = Avg_Accumaltive_R_WSLS.reshape(N1,N2)
Count_WSLS = np.ones(N1*N2)
import random
beta = 0.3
count = 0
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
# Prune the neuron in the layer
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
if reward>0:
Probs[count] = Probs[count] + beta * (1-Probs[count])
else:
Probs[count] = Probs[count] - beta * Probs[count]
All_weights[2][i][j]= temp
Avg_Accumaltive_R_WSLS[count] = reward
modelBuckup.set_weights(All_weights)
for j in range(N1*N2-1, Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
ind = np.argmax(Probs)
Count_WSLS[ind] = Count_WSLS[ind] + 1
i,k = np.unravel_index(ind, p_reshape.shape)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
#print(delta)
reward = max(0,Threshold + delta)/constant
if reward>0:
Probs[ind] = Probs[ind] + beta * (1-Probs[ind])
else:
Probs[ind] = Probs[ind] - beta * Probs[ind]
count = count+1
#print(reward)
val = Count_WSLS[ind]
Avg_Accumaltive_R_WSLS[ind] = (val-1)/val * Avg_Accumaltive_R_WSLS[ind] + 1/val * reward
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
WSLS[j]=ind
endWSLS = time.time()
print("Execution time ",endWSLS - startWSLS)
```
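The probability update at the heart of the loop above (move the chosen arm's probability toward 1 on a positive reward, toward 0 otherwise, with learning rate `beta`) can be sketched on its own. `pursuit_update` is a hypothetical helper; like the notebook's code it does not renormalise the other arms' probabilities.

```python
import numpy as np

def pursuit_update(probs, chosen, reward, beta=0.3):
    # Win-stay / lose-shift style update: reinforce the chosen arm on
    # success, depress it on failure; beta is the learning rate.
    probs = np.asarray(probs, dtype=float).copy()
    if reward > 0:
        probs[chosen] += beta * (1.0 - probs[chosen])
    else:
        probs[chosen] -= beta * probs[chosen]
    return probs

p = pursuit_update([0.25, 0.25, 0.25, 0.25], chosen=0, reward=1.0)
# p[0] is now 0.25 + 0.3 * 0.75 = 0.475; the other arms are unchanged
```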
## The single weight that has been chosen by Win–Stay, Lose–Shift method
In practice we would prune the k weights with the highest rewards, but the purpose of this notebook is to compare the computation time of the methods and to show which of them is able to choose the best arm compared to the direct method.
```
reshapeWSLS = Avg_Accumaltive_R_WSLS.reshape(N1,N2)
G = np.argmax(Avg_Accumaltive_R_WSLS)
Gi,Gj = np.unravel_index(G, reshapeWSLS.shape)
print("The index of chosen W_ji is = ", Gj, Gi)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing, start pruning:')
rewards = Avg_Accumaltive_R_WSLS
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('WSLS_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('WSLS_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('WSLS_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('WSLS_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('WSLS_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('WSLS_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('WSLS_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('WSLS_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
```
### On testing dataset (Unseen dataset)
```
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('WSLS_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruning 5% on testing data = ", accuracy_testing_Model_05)
model = load_model('WSLS_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model_10)
model = load_model('WSLS_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruning 25% on testing data = ", accuracy_testing_Model_25)
model = load_model('WSLS_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruning 50% on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25
, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
```
# KL-UCB
## Pruning the Weights
```
def klFunction(x, y, eps=1e-15):
x = max(x, eps)
y = max(y, eps)
return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))
modelBuckup = load_model('my_model.h5')
startklU = time.time()
N_arry = np.zeros(N1*N2)
NL = np.zeros(Round)
Avg_Accumaltive_klUCB = np.zeros(N1*N2)
currentq = Avg_Accumaltive_klUCB + 0.000001
Count_klUCB = np.ones(N1*N2)
klUCB = np.zeros(Round)
p_reshape = Avg_Accumaltive_klUCB.reshape(N1,N2)
count = 0
import random
tau=4
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
if reward>0:
currentq[count] = (currentq[count] + reward)/ reward
All_weights[2][i][j]= temp
Avg_Accumaltive_klUCB[count] = reward
modelBuckup.set_weights(All_weights)
count = count+1
d=0
for j in range(N1*N2-1, Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
ind = np.argmax(currentq)
Count_klUCB[ind] = Count_klUCB[ind] + 1
i,k = np.unravel_index(ind, p_reshape.shape)
try:
normalized = (Avg_Accumaltive_klUCB - np.min(Avg_Accumaltive_klUCB)) / (np.max(Avg_Accumaltive_klUCB) - np.min(Avg_Accumaltive_klUCB))
except:
normalized = Avg_Accumaltive_klUCB / np.max(Avg_Accumaltive_klUCB)
q = normalized[ind]
while (q < 1) and (d < np.log(j)/Count_klUCB[ind]):
d = klFunction(normalized[ind], q) # d(mu_i, q)
q = q + 0.01
currentq[ind]= q
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
reward = max(0,Threshold + delta)/constant
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
val = Count_klUCB[ind]
Avg_Accumaltive_klUCB[ind] = (val-1)/val * Avg_Accumaltive_klUCB[ind] + 1/val * reward
modelBuckup.set_weights(All_weights)
klUCB[j]=ind
endklU = time.time()
print("Execution time ",endklU - startklU)
```
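The inner `while` loop above is a coarse grid search for the KL-UCB index: the largest q with N_a · kl(mu_a, q) ≤ log(t). A self-contained sketch of that computation (hypothetical helper names, same 0.01 step as the notebook):

```python
import numpy as np

def kl_bernoulli(x, y, eps=1e-15):
    # Bernoulli KL divergence, clipped away from 0 and 1 as in klFunction.
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))

def kl_ucb_index(mean, count, t, step=0.01):
    # Largest q in [mean, 1] with count * kl(mean, q) <= log(t),
    # found by the same coarse grid search as the loop above.
    q = mean
    while q + step <= 1 and count * kl_bernoulli(mean, q + step) <= np.log(t):
        q += step
    return q

# The index tightens toward the empirical mean as the arm is played more.
```

A bisection search would be more precise than the fixed 0.01 step, but the grid keeps the sketch close to the notebook's loop.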
## The single weight that has been chosen by KL-UCB method
In practice we would prune the k weights with the highest rewards, but the purpose of this notebook is to compare the computation time of the methods and to show which of them is able to choose the best arm compared to the direct method.
```
reshapeklU = Avg_Accumaltive_klUCB.reshape(N1,N2)
klU = np.argmax(Avg_Accumaltive_klUCB)
Ui,Uj = np.unravel_index(klU, reshapeklU.shape)
print("The index of chosen W_ji is = ", Uj, Ui)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing, start pruning:')
rewards = Avg_Accumaltive_klUCB
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune 5%
No_of_pruned_weight = int(ListPecent[0] * N1*N2)
for t in range(No_of_pruned_weight):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 5% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('kl_UCB_model_5_Pecernt.h5')
# Prune 10%
No_of_pruned_weight_10 = int(ListPecent[1] * N1*N2)
for t in range(No_of_pruned_weight, No_of_pruned_weight_10):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 10% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('kl_UCB_model_10_Pecernt.h5')
# Prune 25%
No_of_pruned_weight_25 = int(ListPecent[2] * N1*N2)
for t in range(No_of_pruned_weight_10, No_of_pruned_weight_25):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 25% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('kl_UCB_model_25_Pecernt.h5')
# Prune 50%
No_of_pruned_weight_50 = int(ListPecent[3] * N1*N2)
for t in range(No_of_pruned_weight_25, No_of_pruned_weight_50):
x = np.argmax(rewards)
i,k = np.unravel_index(x, p_reshape.shape)
rewards[x] = -100
All_weights = modelBuckup.get_weights()
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
print("Accuracy after pruning 50% is ", accuracy)
accuracyL.append(accuracy)
modelBuckup.save('kl_UCB_model_50_Pecernt.h5')
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
# distribution of the weights before pruning
weights_unpruned = []
for wei in weights:
wei = wei[wei!=0]
weights_unpruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_unpruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - Unpruned Model')
plt.show()
# distribution of the weights 5% pruning
model = load_model('kl_UCB_model_5_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 5% of the Model')
plt.show()
# distribution of the weights 10% pruning
model = load_model('kl_UCB_model_10_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 10% of the Model')
plt.show()
# distribution of the weights 25% pruning
model = load_model('kl_UCB_model_25_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 25% of the Model')
plt.show()
# distribution of the weights 50% pruning
model = load_model('kl_UCB_model_50_Pecernt.h5')
weights = model.get_weights()
weights_pruned = []
for wei in weights:
wei = wei[wei!=0]
weights_pruned.extend(wei)
plt.figure()
n, bins, patches = plt.hist(weights_pruned, 100, range = (-0.5,0.5), alpha=0.6)
plt.xlabel('Magnitude of weights')
plt.ylabel('Number of weights')
plt.title('Distribution of the magnitude of Weights - pruned 50% of the Model')
plt.show()
model = load_model('my_model.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
model = load_model('kl_UCB_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_05 = accuracy
print("\nThe accuracy of the model after pruning 5% on testing data = ", accuracy_testing_Model_05)
model = load_model('kl_UCB_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_10 = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model_10)
model = load_model('kl_UCB_model_25_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_25 = accuracy
print("The accuracy of the model after pruning 25% on testing data = ", accuracy_testing_Model_25)
model = load_model('kl_UCB_model_50_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model_50 = accuracy
print("The accuracy of the model after pruning 50% on testing data = ", accuracy_testing_Model_50)
accuracyL_test = [accuracy_testing_Model_05, accuracy_testing_Model_10, accuracy_testing_Model_25
, accuracy_testing_Model_50]
xx = [accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model, accuracy_testing_Model]
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.legend(loc = 3)
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.grid(True)
plt.show()
```
# Bayes UCB
## Pruning weights
```
from scipy.stats import beta
modelBuckup = load_model('my_model.h5')
startbayucb = time.time()
N_arry = np.zeros(N1*N2)
NL = np.zeros(Round)
Avg_Accumaltive_bayucb = np.zeros(N1*N2)
Count_bayucb = np.ones(N1*N2)
bayucb = np.zeros(Round)
p_reshape = Avg_Accumaltive_bayucb.reshape(N1,N2)
count = 0
success = np.zeros(N1*N2)
failure = np.zeros(N1*N2)
for i in range(N1):
for j in range(N2):
loss, accuracy = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
# Prune the neuron in the layer
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][j]
All_weights[2][i][j] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[1:10], labelsTrain[1:10], batch_size=1, verbose=0)
delta = loss_New - loss
if(delta>0):
reward = 1
success[count] = success[count]+1  # index by the flat position, not the row index i
else:
reward = 0
failure[count] = failure[count]+1
All_weights[2][i][j]= temp
Avg_Accumaltive_bayucb[count] = reward
modelBuckup.set_weights(All_weights)
count = count+1
for j in range(N1*N2-1, Round):
b = random.randint(0,noExamples-1)
loss, accuracy = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
ind = np.argmax(beta.ppf(1-1/Count_bayucb, 1+success, 1+failure))  # Bayes-UCB ranks arms by the posterior quantile (ppf), not the density
Count_bayucb[ind] = Count_bayucb[ind] + 1
i,k = np.unravel_index(ind, p_reshape.shape)
All_weights=modelBuckup.get_weights()
temp = All_weights[2][i][k]
All_weights[2][i][k] = 0
modelBuckup.set_weights(All_weights)
loss_New, accuracy_New = modelBuckup.evaluate(X_train[b:b+1], labelsTrain[b:b+1], batch_size=1, verbose=0)
delta = loss_New - loss
if(delta>0):
reward = 1
success[ind] = success[ind]+1  # update the flat index of the played arm
else:
reward = 0
failure[ind] = failure[ind]+1
All_weights[2][i][k] = temp
modelBuckup.set_weights(All_weights)
val = Count_bayucb[ind]
Avg_Accumaltive_bayucb[ind] = (val-1)/val * Avg_Accumaltive_bayucb[ind] + 1/val * reward
bayucb[j]=ind
endbayucb = time.time()
print("Execution time ",endbayucb - startbayucb)
```
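Arm selection in Bayes-UCB keeps a Beta(1+successes, 1+failures) posterior per arm and plays the arm with the largest posterior quantile at level 1 − 1/t. A minimal sketch (hypothetical helper name; it uses `beta.ppf`, the quantile function, which is what the method as usually defined calls for):

```python
import numpy as np
from scipy.stats import beta

def bayes_ucb_select(success, failure, t):
    # Quantile of each arm's Beta posterior at level 1 - 1/t;
    # the arm with the largest quantile is played next.
    q = beta.ppf(1.0 - 1.0 / t,
                 1.0 + np.asarray(success, dtype=float),
                 1.0 + np.asarray(failure, dtype=float))
    return int(np.argmax(q))

# An arm with 10 successes and 1 failure beats one with the reverse record.
```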
## The single weight that has been chosen by Bayes UCB method
In practice we would prune the k weights with the highest rewards, but the purpose of this notebook is to compare the computation time of the methods and to show which of them is able to choose the best arm compared to the direct method.
```
reshapebayucb = Avg_Accumaltive_bayucb.reshape(N1,N2)
bayucb = np.argmax(Avg_Accumaltive_bayucb)
Ti,Tj = np.unravel_index(bayucb, reshapebayucb.shape)
print("The index of chosen W_ji is = ", Tj, Ti)
```
## Prune 5%, 10%, 25% and 50% from the unpruned model
```
print('Finished playing, start pruning:')
rewards = Avg_Accumaltive_bayucb
Pecent = {1:'5%', 2:'10%', 3:'25%', 4:'50%'}
ListPecent = [0.05, 0.1, 0.25, 0.5]
No_ofweights = len(weights_unpruned)
accuracyL = []
# Prune progressively: always zero the remaining weight with the highest reward,
# evaluating and saving a snapshot at 5%, 10%, 25% and 50% sparsity
pruned_so_far = 0
for percent, frac in zip([5, 10, 25, 50], ListPecent):
    target = int(frac * N1 * N2)
    for t in range(pruned_so_far, target):
        x = np.argmax(rewards)
        i, k = np.unravel_index(x, p_reshape.shape)
        rewards[x] = -100  # mark this weight as already pruned
        All_weights = modelBuckup.get_weights()
        All_weights[2][i][k] = 0
        modelBuckup.set_weights(All_weights)
    pruned_so_far = target
    loss, accuracy = modelBuckup.evaluate(X_train, labelsTrain, batch_size=1, verbose=2)
    print("Accuracy after pruning {}% is ".format(percent), accuracy)
    accuracyL.append(accuracy)
    modelBuckup.save('bayucb_model_{}_Pecernt.h5'.format(percent))
x= [5,10,25,50]
xx = [accuracy_training_Model, accuracy_training_Model, accuracy_training_Model, accuracy_training_Model]
plt.plot(x, accuracyL, '+')
plt.plot(x, accuracyL, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
# distribution of the non-zero weight magnitudes, before and after pruning
def plot_weight_distribution(model_weights, title):
    nonzero = []
    for wei in model_weights:
        nonzero.extend(wei[wei != 0])
    plt.figure()
    plt.hist(nonzero, 100, range=(-0.5, 0.5), alpha=0.6)
    plt.xlabel('Magnitude of weights')
    plt.ylabel('Number of weights')
    plt.title(title)
    plt.show()

# unpruned model
plot_weight_distribution(weights, 'Distribution of the magnitude of Weights - Unpruned Model')
# pruned models
for percent in [5, 10, 25, 50]:
    model = load_model('bayucb_model_{}_Pecernt.h5'.format(percent))
    plot_weight_distribution(
        model.get_weights(),
        'Distribution of the magnitude of Weights - pruned {}% of the Model'.format(percent))
# Evaluate the unpruned model and each pruned snapshot on the test data
labelsTest = np_utils.to_categorical(y_test)
model = load_model('my_model.h5')
loss, accuracy_testing_Model = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
print("The accuracy of the model before pruning on testing data = ", accuracy_testing_Model)
accuracyL_test = []
for percent in [5, 10, 25, 50]:
    model = load_model('bayucb_model_{}_Pecernt.h5'.format(percent))
    loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
    print("The accuracy of the model after pruning {}% on testing data = ".format(percent), accuracy)
    accuracyL_test.append(accuracy)
xx = [accuracy_testing_Model] * 4
plt.plot(x, accuracyL_test, '+')
plt.plot(x, accuracyL_test, 'b--.', label='after pruning')
plt.plot(x, xx, 'k--.', label='before pruning')
plt.ylabel('Accuracy after pruning')
plt.title('Accuracy vs Sparsity')
plt.xlabel('Percentage of Sparsity')
plt.legend(loc = 3)
plt.grid(True)
plt.show()
```
# Sparsity
```
ListPecent = [0.05, 0.1, 0.25, 0.5]
# 5% of the model
modelBuckup = load_model('my_model.h5')
fivePercent = int(ListPecent[0] * N1*N2)
All_weights = modelBuckup.get_weights()
# reshape returns a view, so zeroing entries of p_reshape below also
# zeroes the corresponding entries of All_weights[2]
p_reshape = All_weights[2].reshape(N1*N2)
idx = np.argpartition(np.abs(p_reshape), fivePercent)
p_reshape[idx[:fivePercent]] = 0
modelBuckup.set_weights(All_weights)
modelBuckup.save('sparse_model_5_Pecernt.h5')
# 10% of the model
modelBuckup = load_model('my_model.h5')
tenPercent = int(ListPecent[1] * N1*N2)
All_weights = modelBuckup.get_weights()
p_reshape = All_weights[2].reshape(N1*N2)
idx = np.argpartition(np.abs(p_reshape), tenPercent)
p_reshape[idx[:tenPercent]] = 0
modelBuckup.set_weights(All_weights)
modelBuckup.save('sparse_model_10_Pecernt.h5')
Testing_Accuracy = []
model1 = load_model('sparse_model_5_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model1.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model after pruning 5% on testing data = ", accuracy_testing_Model)
Testing_Accuracy.append(accuracy)
model = load_model('sparse_model_10_Pecernt.h5')
labelsTest = np_utils.to_categorical(y_test)
loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
accuracy_testing_Model = accuracy
print("The accuracy of the model after pruning 10% on testing data = ", accuracy_testing_Model)
Testing_Accuracy.append(accuracy)
```
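The sparsity baseline above zeroes the smallest-magnitude weights via `np.argpartition`, which finds the k smallest entries without a full sort. A self-contained sketch on a toy weight vector (illustrative values only):

```python
import numpy as np

def prune_smallest(weights, k):
    """Zero out the k weights with the smallest absolute value (in place)."""
    # argpartition places the indices of the k smallest |w| in the first k slots
    idx = np.argpartition(np.abs(weights), k)
    weights[idx[:k]] = 0
    return weights

w = np.array([0.5, -0.01, 0.3, 0.02, -0.4])
prune_smallest(w, 2)  # zeros the entries -0.01 and 0.02
```

`argpartition` runs in linear time, which is why it is preferred over a full `argsort` when only the smallest k entries matter.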
# Comparing the results
## Execution time for different methods
```
print("Execution time of Epsilon Greedy Method =",endG - startG)
print("Execution time of UCB1 Method =",endU - startU)
print("Execution time of Thompson Sampling Method =",endT - startT)
print("Execution time of Successive Rejects Method =",endS - startS)
print("Execution time of Unified Approach Method =",endUB - startUB)
print("Execution time of Random policy Method =",endP - startP)
print("Execution time of Softmax Method =",endSM - startSM)
print("Execution time of Win–Stay, Lose–Shift =",endWSLS - startWSLS)
print("Execution time of KL-UCB Method =",endklU - startklU)
print("Execution time of Bayes-UCB Method =",endbayucb - startbayucb)
Time = {'Name of Method': ["Epsilon Greedy", "UCB1", "Thompson Sampling", "Successive Rejects","Unified Approach","Random policy", "Softmax", "Win–Stay, Lose–Shift","KL-UCB","bayes UCB"],
'Execution Time': [endG - startG, endU - startU, endT - startT, endS - startS, endUB - startUB, endP - startP, endSM - startSM, endWSLS - startWSLS, endklU - startklU, endbayucb - startbayucb]}
df_time = pd.DataFrame.from_dict(Time)
plt.rcParams['figure.figsize'] = (18, 12)
df_time.plot.bar(x='Name of Method', y='Execution Time', rot=0, color ='red');
```
## Accuracy before pruning and after pruning 5% using different methods
```
# Evaluate the unpruned model and each 5%-pruned model on the test data
labelsTest = np_utils.to_categorical(y_test)
models_5 = [("before pruning", 'my_model.h5'),
            ("Random Policy", 'Random_Policy_model_5_Pecernt.h5'),
            ("Epsilon Greedy", 'Epsilon_Greedy_model_5_Pecernt.h5'),
            ("UCB1", 'UCB1_model_5_Pecernt.h5'),
            ("Thompson Sampling", 'Thompson_Sampling_model_5_Pecernt.h5'),
            ("Unified Approach", 'Unified_Approach_model_5_Pecernt.h5'),
            ("Successive Rejects", 'Successive_Rejects_model_5_Pecernt.h5'),
            ("Softmax", 'Softmax_model_5_Pecernt.h5'),
            ("Win–Stay, Lose–Shift", 'WSLS_model_5_Pecernt.h5'),
            ("KL-UCB", 'kl_UCB_model_5_Pecernt.h5'),
            ("Bayes-UCB", 'bayucb_model_5_Pecernt.h5'),
            ("Sparsity", 'sparse_model_5_Pecernt.h5')]
Testing_Accuracy = []
for method, filename in models_5:
    model = load_model(filename)
    loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
    print("Test accuracy ({}) = ".format(method), accuracy)
    Testing_Accuracy.append(accuracy)
Methods = ["Model", "Random policy", "Epsilon Greedy", "UCB1", "Thompson Sampling", "Unified Approach","Successive Rejects","Softmax" ,"Win–Stay, Lose–Shift", "KL-UCB","Bayes-UCB", "Sparsity"]
acc = {'Name of Method': Methods,
       'Accuracy': Testing_Accuracy}
df_accuracy = pd.DataFrame.from_dict(acc)
plt.rcParams['figure.figsize'] = (18, 12)
df_accuracy.plot.bar(x='Name of Method', y='Accuracy', rot=0, color='gold');
```
## Accuracy before pruning and after pruning 10% using different methods
```
# Evaluate the unpruned model and each 10%-pruned model on the test data
labelsTest = np_utils.to_categorical(y_test)
models_10 = [("before pruning", 'my_model.h5'),
             ("Random Policy", 'Random_Policy_model_10_Pecernt.h5'),
             ("Epsilon Greedy", 'Epsilon_Greedy_model_10_Pecernt.h5'),
             ("UCB1", 'UCB1_model_10_Pecernt.h5'),
             ("Thompson Sampling", 'Thompson_Sampling_model_10_Pecernt.h5'),
             ("Unified Approach", 'Unified_Approach_model_10_Pecernt.h5'),
             ("Successive Rejects", 'Successive_Rejects_model_10_Pecernt.h5'),
             ("Softmax", 'Softmax_model_10_Pecernt.h5'),
             # the original cell loaded the 5% WSLS model here, presumably a
             # copy-paste slip; the 10% snapshot is used instead
             ("Win–Stay, Lose–Shift", 'WSLS_model_10_Pecernt.h5'),
             ("KL-UCB", 'kl_UCB_model_10_Pecernt.h5'),
             ("Bayes-UCB", 'bayucb_model_10_Pecernt.h5'),
             ("Sparsity", 'sparse_model_10_Pecernt.h5')]
Testing_Accuracy = []
for method, filename in models_10:
    model = load_model(filename)
    loss, accuracy = model.evaluate(X_test, labelsTest, batch_size=1, verbose=0)
    print("Test accuracy ({}) = ".format(method), accuracy)
    Testing_Accuracy.append(accuracy)
Methods = ["Model", "Random policy", "Epsilon Greedy", "UCB1", "Thompson Sampling", "Unified Approach","Successive Rejects","Softmax" ,"Win–Stay, Lose–Shift", "KL-UCB","Bayes-UCB", "Sparsity"]
acc = {'Name of Method': Methods,
       'Accuracy': Testing_Accuracy}
df_accuracy = pd.DataFrame.from_dict(acc)
plt.rcParams['figure.figsize'] = (18, 12)
df_accuracy.plot.bar(x='Name of Method', y='Accuracy', rot=0, color='gold');
```
```
import pandas as pd
# quick demo: validate on the first 10,000 rows
train_df = pd.read_csv('Data/Movie_RS.csv',nrows=10000)
print(train_df.shape)
train_df.head(1)
# drop rows with null values
train_df.dropna(axis=0, how='any', inplace=True)
train_df.info()
!pip install lightfm
from sklearn.metrics.pairwise import cosine_similarity
from lightfm import LightFM, cross_validation
from scipy.sparse import csr_matrix, coo_matrix
from lightfm.evaluation import auc_score
from lightfm.data import Dataset
import numpy as np
# build a dict mapping usernames to integer ids
user_dict = {}
for index, value in enumerate(train_df['Username'].unique()):
    user_dict[value] = index
train_df['uid_int'] = train_df['Username'].apply(lambda x: user_dict[x])
# invert the dict (id -> username)
reverse_user_dict = {v: k for k, v in user_dict.items()}
# build a dict mapping movie names to integer ids
item_dict = {}
for index, value in enumerate(train_df['Movie_Name'].unique()):
    item_dict[value] = index
train_df['item_int'] = train_df['Movie_Name'].apply(lambda x: item_dict[x])
# invert the dict (id -> movie name)
reverse_item_dict = {v: k for k, v in item_dict.items()}
```
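As an aside, the two `enumerate(...unique())` loops above can be expressed with `pandas.factorize`, which assigns integer codes in order of first appearance — the same numbering. A small sketch with hypothetical names:

```python
import pandas as pd

s = pd.Series(['alice', 'bob', 'alice', 'carol'])
codes, uniques = pd.factorize(s)  # codes follow order of first appearance
user_dict = {name: i for i, name in enumerate(uniques)}
reverse_user_dict = {i: name for name, i in user_dict.items()}
# codes -> [0, 1, 0, 2]; user_dict['bob'] -> 1
```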
#### Setting up movie and user features
```
def create_features(dataframe, features_name, id_col_name):
    features = dataframe[features_name].apply(
        lambda x: ','.join(x.map(str)), axis=1)
    features = features.str.split(',')
    features1 = list(zip(dataframe[id_col_name], features))
    features2 = features.apply(pd.Series).stack().reset_index(drop=True)
    return features1, features2
# movie features
items_f = ['Movie_Score', 'Review_Count', 'item_int']
# user features
users_f = ['uid_int', 'User_Comment_Distribution']
train_df['items_features'], item_feature_list = create_features(
    train_df, items_f, 'item_int')
train_df['users_features'], user_feature_list = create_features(
    train_df, users_f, 'uid_int')
train_df.head(1)
# build the LightFM dataset
make_dataset = Dataset()
make_dataset.fit(train_df['uid_int'], train_df['item_int'],
                 item_features=item_feature_list, user_features=user_feature_list)
# build the interaction (rating) matrix
interactions, weights = make_dataset.build_interactions(
    list(zip(train_df['uid_int'], train_df['item_int'], train_df['Score'])))
# inspect the unique user scores
train_df['Score'].unique()
```
Dataset split
```
# instantiate the model
model_test = LightFM(no_components=30, random_state=1)
# split the interactions into train and test sets
train, test = cross_validation.random_train_test_split(
    interactions, test_percentage=0.2)
```
Model training
```
# build the item feature matrix
items_features = make_dataset.build_item_features(train_df['items_features'])
# build the user feature matrix
users_features = make_dataset.build_user_features(train_df['users_features'])
# model train
%time model_test.fit(train, item_features=items_features, user_features=users_features, epochs=10, verbose=False)
```
Model validation
```
# evaluate the model on the test split
auc = auc_score(model_test, test, item_features=items_features,
                user_features=users_features)
# mean AUC over users
np.mean(auc)
```
### Personalized home-page recommendations
1. Consists of a recall (candidate generation) stage and a ranking stage.
2. Suited to offline recommendation computation for a home-page feed.

Train the recommendation model on the full dataset.
```
# train the recommendation model on the full interaction matrix
model = LightFM(no_components=30, random_state=1)
%time model.fit(interactions, sample_weight=weights, item_features=items_features, user_features=users_features, epochs=30, verbose=False)
```
Recall (candidate generation)
1. Filter out items the user has already seen.
2. Find items the user is likely to enjoy.
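The recall idea reduces to a boolean mask over the interaction table: drop rows whose item the user has already seen, then de-duplicate by item. A toy sketch using the same column names as the notebook (the data itself is made up):

```python
import pandas as pd

df = pd.DataFrame({'item_int': [0, 1, 2, 1, 3],
                   'Movie_Score': [9.0, 6.5, 8.2, 6.5, 7.9]})
known_items = [0, 1]  # items the user has already seen
candidates = df.loc[~df['item_int'].isin(known_items)]
candidates = candidates.drop_duplicates('item_int', keep='first')
# remaining candidate items: 2 and 3
```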
```
# recommend for a given username
Username = '尾黑'
# convert the username to its integer id
user_x = user_dict[Username]
# de-duplicate the movies this user has interacted with
allready_knews = train_df.loc[train_df['uid_int'].isin([user_x])].drop_duplicates('item_int', keep='first', inplace=False)
```
For now, use all movies the user has not seen whose movie score is above a threshold as the recall set.
```
# filter out content the user has already seen
known_items = allready_knews['item_int'].tolist()
df_use_for_prediction = train_df.loc[~train_df['item_int'].isin(known_items)].drop_duplicates(
    'item_int', keep='first', inplace=False)
# keep only high-quality movies
Threshold = 8
df_use_for_prediction = df_use_for_prediction[df_use_for_prediction['Movie_Score'] > Threshold]
df_use_for_prediction.head(1)
```
Rank the recalled candidates.
```
# score the filtered candidates with the model
df_use_for_prediction['rec_score'] = model.predict(user_ids=user_x, item_ids=df_use_for_prediction['item_int'].tolist(),
item_features=items_features, user_features=users_features)
df_use_for_prediction.head(1)
```
Select the top-k results from the ranking.
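For a plain score array, the same top-k selection can be done with a descending `argsort` (the notebook's `sort_values` does the equivalent on a DataFrame); a minimal sketch:

```python
import numpy as np

scores = np.array([0.1, 0.9, 0.4, 0.7, 0.2])
k = 2
top_k = np.argsort(-scores)[:k]  # indices of the k highest scores
# top_k -> [1, 3]
```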
```
# take the top-k
top_rec_item = 5
rec_list_ = df_use_for_prediction.sort_values(by='rec_score', ascending=False)[:top_rec_item]['item_int']
# ranked list of recommended movie ids
rec_list_.tolist()
# look up movie information for the recommended ids
result = train_df.loc[train_df['item_int'].isin(rec_list_.tolist())][[
    'Movie_Name', 'item_int', 'Movie_Score', 'Movie_Tags']].drop_duplicates('Movie_Name', keep='first', inplace=False)
# show the recommendations
result.head(1)
```
### 'Watched this, also watched'
1. Given the movie currently being watched, find related movies.
2. Again filter the results with a score threshold to keep quality high.
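The related-content lookup rests on cosine similarity between item vectors. `sklearn`'s `cosine_similarity` is equivalent to L2-normalizing each row and taking dot products, as this small NumPy sketch shows (it assumes no all-zero rows):

```python
import numpy as np

def cosine_sim(X):
    """Pairwise cosine similarity between the rows of X."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / norms  # assumes no zero rows
    return Xn @ Xn.T

X = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 0.0]])
S = cosine_sim(X)
# S[0, 2] == 1.0 (parallel rows), S[0, 1] == 0.0 (orthogonal rows)
```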
```
# extract movie embedding vectors
embedding_movie, feature_movie = csr_matrix(
    model.item_embeddings), csr_matrix(items_features)
# project item features into the embedding space
movie_inner_max = np.dot(feature_movie, embedding_movie)
# cosine similarity between movies
movie_cosine_sim = cosine_similarity(movie_inner_max)
# find the movies closest (in cosine distance) to the query movie
def next_movie_recommendations(movie_name, cosine_sim, topk):
    # convert the movie name to its unique id
    int_id = item_dict[movie_name]
    # drop the movie itself (index 0) and keep the topk most similar
    sim_scores = sorted(
        list(enumerate(cosine_sim[int_id])),
        key=lambda x: x[1], reverse=True)[1:topk+1]
    return sim_scores
```
Assume the movie currently being watched.
```
# query by movie name
movie_name = '55步 55 Steps'
train_df.loc[train_df['Movie_Name'].isin([movie_name])].drop_duplicates(
'item_int', keep='first', inplace=False).head(1)
```
Infer the top-k movies the user might watch next, excluding the current movie and the movies the user has already seen.
```
topk = 20
next_movie = [i[0] for i in next_movie_recommendations(
movie_name, movie_cosine_sim, topk)]
# set difference: recommended minus already seen
next_list = list(set(next_movie).difference(set(known_items)))
next_list
```
Apply a threshold filter before outputting the next-movie results, to guarantee recommendation quality.
```
next_movie_result = train_df.loc[train_df['item_int'].isin(
next_list)].drop_duplicates('item_int', keep='first', inplace=False)
# filter with the score threshold
Threshold = 7.5
next_movie_result = next_movie_result[next_movie_result['Movie_Score'] >= Threshold]
# show the next-movie recommendations
next_movie_result.head()[['Movie_Name', 'item_int',
'Movie_Score', 'Movie_Tags']]
```
### Push recommendations
1. Score every user's interest in a single movie; after removing users who have already seen it, the top-k gives the push list.
2. Finding users for a movie helps address the item cold-start problem.
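The push list is a filtered top-k over user scores: rank all users by predicted interest, skip those who have already seen the movie, and stop at k. A self-contained sketch with toy scores:

```python
import numpy as np

def push_list(user_scores, seen_users, k):
    """Top-k users by score, excluding users who already saw the item."""
    ranked = np.argsort(-user_scores)  # best users first
    result = []
    for u in ranked:
        if u not in seen_users:
            result.append(int(u))
            if len(result) == k:
                break
    return result

scores = np.array([0.2, 0.9, 0.5, 0.8])
push_list(scores, seen_users={1}, k=2)  # -> [3, 2]
```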
```
movie_int = item_dict[movie_name]
# inspect the current item
item_rec_user = train_df.loc[train_df['item_int'].isin([movie_int])].drop_duplicates('item_int', keep='first', inplace=False)
item_rec_user.head(1)
movie_name
# use the dimensions of the interaction matrix
n_users, n_movies = interactions.shape
movie2user_rec = model.predict(user_ids=np.arange(n_users), item_ids=np.repeat(
    movie_int, n_users), item_features=items_features, user_features=users_features)
# rank users by predicted interest (descending)
wait_rec_list = np.argsort(-movie2user_rec).tolist()
# find the ids of users who have already seen this item
item_known_users_int = train_df[train_df['item_int'] == movie_int]['uid_int'].unique().tolist()
```
Filter the candidate set to obtain the final list of users to push to.
```
push_topk = 10
push_result = []
count = 0
# stop early once enough users are collected
for x in wait_rec_list:
    if x not in item_known_users_int:
        push_result.append(x)
        count += 1
        if count > push_topk:
            break
push_result
```
Print the information of the users to be pushed to.
```
item_rec_user = train_df.loc[train_df['uid_int'].isin(push_result)].drop_duplicates('uid_int', keep='first', inplace=False)[users_f]
item_rec_user.head()
```
### Related-user recommendations
1. Find users related to a given user, to surface the user's latent interests.
2. Increases interaction between the platform and its users.
```
# extract user embedding vectors
embedding_user, feature_user = csr_matrix(
    model.user_embeddings), csr_matrix(users_features)
# project user features into the embedding space
user_inner_max = np.dot(feature_user, embedding_user)
# cosine similarity between users; can raise a MemoryError when the user count is large
user_cosine_sim = cosine_similarity(user_inner_max)
# find the users closest (in cosine distance) to the query user
def sim_user_recommendations(user_name, cosine_sim, topk):
    # convert the username to its unique id
    int_id = user_dict[user_name]
    # drop the user themselves (index 0) and keep the topk most similar
    sim_scores = sorted(
        list(enumerate(cosine_sim[int_id])),
        key=lambda x: x[1], reverse=True)[1:topk+1]
    return sim_scores
```
Assume the current user.
```
# use Username directly as the current user
train_df.loc[train_df['Username'].isin([Username])].drop_duplicates(
'uid_int', keep='first', inplace=False).head()
```
Find the top-k related users.
```
topk = 5
sim_user_list = [i[0] for i in sim_user_recommendations(Username, user_cosine_sim, topk)]
sim_user_list
sim_user_result = train_df.loc[train_df['uid_int'].isin(sim_user_list)].drop_duplicates('uid_int', keep='first', inplace=False)
# TODO: a strategy for selecting high-quality users could be added here, e.g.
# Threshold
# sim_user_result = sim_user_result[sim_user_result['User_level'] >= Threshold]
# show the related-user results
sim_user_result.head()[['Username','User_Comment_Distribution']]
```
## Calculating inter-annotators agreement
#### This script generates 8 additional files:
1. batches_annotators.json – a list of annotators per batch
2. k_alpha_per_batch_4_options.csv – Krippendorff's alpha per batch for all 4 options
3. k_alpha_per_batch_2_options.csv – Krippendorff's alpha per batch for 2 options ('Omstreden' and 'Niet omstreden'), other responses are filtered out, so this data has missing values; the purpose of it is to check the agreement between annotators who could decide whether a term was contentious or non-contentious in a given sample (without options 'I don't know' or 'Bad OCR')
4. pairwise_agreement.csv – Krippendorff's alpha for every pair of annotators in a batch
5. mean_alpha_per_annotator.csv – mean Krippendorff's alpha per annotator (taking all the alpha values from an annotator's pairs)
6. perc_agreement.csv – percentage agreement between annotators per sample
7. k_alpha_per_batch_2_options_filtered_alpha.csv – Krippendorff's alpha per batch for 2 options, excluding annotators whose mean K alpha is lower than 0.2
8. k_alpha_per_batch_2_options_filtered_controls.csv – Krippendorff's alpha per batch for 2 options without annotators who got 3 or more control questions 'wrong' (different from experts)
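As background for files 4 and 6 above, pairwise percentage agreement is the simplest of these measures: for each pair of annotators, the share of shared samples on which they gave the same response. A stdlib-only sketch on toy responses (the annotator ids and labels here are illustrative):

```python
from itertools import combinations

def pairwise_percentage_agreement(responses):
    """responses: dict annotator -> dict extract_id -> response.
    Returns dict (a, b) -> fraction of shared extracts with equal responses."""
    out = {}
    for a, b in combinations(sorted(responses), 2):
        shared = set(responses[a]) & set(responses[b])
        if not shared:
            continue
        equal = sum(responses[a][e] == responses[b][e] for e in shared)
        out[(a, b)] = equal / len(shared)
    return out

toy = {'ann1': {'e1': 'Omstreden', 'e2': 'Niet omstreden'},
       'ann2': {'e1': 'Omstreden', 'e2': 'Omstreden'},
       'ann3': {'e1': 'Omstreden', 'e2': 'Niet omstreden'}}
pairwise_percentage_agreement(toy)
# {('ann1', 'ann2'): 0.5, ('ann1', 'ann3'): 1.0, ('ann2', 'ann3'): 0.5}
```

Unlike Krippendorff's alpha, percentage agreement does not correct for chance agreement, which is why both are reported.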
```
import csv
import json
import statistics
import requests
import io
import pandas as pd
from itertools import combinations
from sklearn.metrics import cohen_kappa_score
from nltk.metrics.agreement import AnnotationTask
from collections import Counter
# importing csv files from GitHub
url_1 = "#" # link to the raw file (https://raw.githubusercontent.com/cultural-ai/ConConCor/master/Dataset/Annotations.csv)
annotations = requests.get(url_1).content
url_2 = "#" # link to the raw file (https://raw.githubusercontent.com/cultural-ai/ConConCor/master/Dataset/Extracts.csv)
extracts = requests.get(url_2).content
# putting the csv data into pandas df
annotations_data = pd.read_csv(io.StringIO(annotations.decode('utf-8')))
extracts_data = pd.read_csv(io.StringIO(extracts.decode('utf-8')))
# selecting relevant columns
annotations_asr = annotations_data[['anonymised_participant_id','extract_id','response']]
extracts_et = extracts_data[['extract_id','target']]
# merging dfs
annotations_with_target = pd.merge(annotations_asr, extracts_et, how='inner', on=['extract_id'])
# converting df to a list
list_merged = annotations_with_target.values.tolist()
# making a list of unique extract IDs
list_of_extracts = extracts_data['extract_id']
list_of_unique_extracts = list(Counter(list_of_extracts).keys())
# making a list of unique annotators
list_of_unique_annotators = list(annotations_asr.groupby('anonymised_participant_id').groups.keys())
# grouping by 'extract_id' to see how many times the same extracts were annotated
groups_per_extract = dict(annotations_asr.groupby('extract_id').groups)
sample_anns = {} # dict with 'extract_id':'number of annotations'
for ext in groups_per_extract:
    anns_num = len(groups_per_extract[ext])  # counting the number of annotations per sample
    sample_anns[ext] = anns_num
```
#### Getting groups of annotators (batches) (batches_annotators.json)
* if the annotators are in one group it means that they annotated the same set of samples (a batch)
* we need to have the annotators grouped for calculating K alpha per batch
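Grouping annotators who received the same batch amounts to keying a dict on each annotator's (hashable) set of extract ids. A `frozenset`-keyed sketch, which unlike the `str(list)` comparison used below ignores ordering and duplicates:

```python
def group_by_batch(annotator_extracts):
    """annotator_extracts: dict annotator -> list of extract ids.
    Returns dict batch_name -> list of annotators with identical batches."""
    by_batch = {}
    for annotator, extracts in annotator_extracts.items():
        by_batch.setdefault(frozenset(extracts), []).append(annotator)
    return {f"batch_{i}": anns
            for i, anns in enumerate(by_batch.values(), start=1)}

toy = {'a1': ['e1', 'e2'], 'a2': ['e2', 'e1'], 'a3': ['e3']}
group_by_batch(toy)
# {'batch_1': ['a1', 'a2'], 'batch_2': ['a3']}
```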
```
# checking which samples every participant annotated
annotator_extracts = {} #dict with 'anonymised_participant_id': list of extracts they annotated
for group in annotations_asr.groupby('anonymised_participant_id'):
    annotator_extracts[group[0]] = list(group[1]['extract_id'])
# finding unique batches of samples
# lists of samples are transformed to str to use Counter
unique_batches = list(Counter(str(e) for e in list(annotator_extracts.values())).keys())
group_annotators = {}  # dict 'group': list of annotators in the group
group_counter = 0
for unique in unique_batches:  # iterating over the list of unique batches
    annotators_list = []
    group_counter += 1
    for antr in annotator_extracts:  # checking if the unique batch matches the list of extracts
        if str(annotator_extracts[antr]) == unique:
            annotators_list.append(antr)  # putting annotators in the same group in a list
    group_name = f"batch_{group_counter}"
    group_annotators[group_name] = annotators_list
# exporting lists of annotators by batches in json
with open('batches_annotators.json', 'w') as outfile:
json.dump(group_annotators, outfile)
```
#### Krippendorff's alpha per group (k_alpha_per_batch_4_options.csv) (all options)
* to calculate K alpha we need tuples with ('anonymised_participant_id','extract_id','response') for every batch
* we take all 4 options (Contentious, Non-contentious, I don't know, Bad OCR)
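For intuition about what `AnnotationTask(...).alpha()` computes, nominal Krippendorff's alpha can also be derived directly from such triples. The sketch below is an illustration, not a replacement for NLTK: it builds the standard coincidence matrix and skips units with a single rating.

```python
from collections import defaultdict

def nominal_alpha(triples):
    # triples: (annotator, unit, label); returns Krippendorff's alpha for nominal data
    units = defaultdict(list)
    for _, unit, label in triples:
        units[unit].append(label)
    o = defaultdict(float)                 # coincidence matrix
    for labels in units.values():
        m = len(labels)
        if m < 2:
            continue                       # single-rating units carry no information
        for i, ci in enumerate(labels):
            for j, ck in enumerate(labels):
                if i != j:
                    o[(ci, ck)] += 1.0 / (m - 1)
    n_c = defaultdict(float)               # per-label totals (row sums)
    for (ci, _), v in o.items():
        n_c[ci] += v
    n = sum(n_c.values())
    d_obs = sum(v for (ci, ck), v in o.items() if ci != ck)
    d_exp = sum(n_c[ci] * n_c[ck] for ci in n_c for ck in n_c if ci != ck)
    return 1.0 if d_exp == 0 else 1.0 - (n - 1) * d_obs / d_exp

print(nominal_alpha([("r1", "u1", "A"), ("r2", "u1", "A"),
                     ("r1", "u2", "A"), ("r2", "u2", "B")]))  # 0.0
```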
```
# converting df to a list with all responses
triples_results = annotations_asr.values.tolist()
# creating a csv with alpha scores per group and num of annotators
with open('k_alpha_per_batch_4_options.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(['batch', 'k_alpha_4', 'num_annotators']) #header
for group in group_annotators: # iterating over groups
responses_list = [] # list to store tuples for every group
for triple in triples_results: # iterating over responses
if triple[0] in group_annotators[group]: # collecting tuples for every group
responses_tuple = (triple[0],triple[1],triple[2])
responses_list.append(responses_tuple)
try:
t = AnnotationTask(data=responses_list)
k_alpha = round(t.alpha(),3)
except ZeroDivisionError: # batch_50 has only 1 annotator
k_alpha = 'zero division'
writer.writerow([group,k_alpha,len(group_annotators[group])])
```
#### Krippendorff's alpha per group (k_alpha_per_batch_2_options.csv) (only omstreden/niet omstreden)
* to calculate K alpha we need tuples with ('anonymised_participant_id','extract_id','response') for every batch
* we filter out (1) all the extracts (for every annotator in a group) with fewer than 2 'Omstreden' or 'Niet omstreden' responses in a batch (117), which is necessary for calculating alpha correctly, and (2) the extracts with the options 'Weet ik niet' and 'Onleesbare OCR' (3523), 3700 extracts in total
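The two filtering conditions reduce to a list comprehension over the response triples; a toy version with made-up IDs:

```python
negative_list = {"e2"}                        # extracts failing the two-option check
excluded = {"Weet ik niet", "Onleesbare OCR"}
triples = [
    ("p1", "e1", "Omstreden naar huidige maatstaven"),
    ("p1", "e2", "Niet omstreden"),           # extract on the negative list
    ("p2", "e1", "Weet ik niet"),             # excluded option
]
filtered = [t for t in triples if t[1] not in negative_list and t[2] not in excluded]
print(filtered)  # [('p1', 'e1', 'Omstreden naar huidige maatstaven')]
```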
```
# filtering out the extracts with less than 2 options 'Omstreden' or 'Niet omstreden' in a batch
extracts_to_filter = [] # storing the extracts to be filtered out (negative list)
for unique_extract_id in list_of_unique_extracts: # iterating over the list of unique extracts IDs
list_of_responses_per_extract = [] # collecting responses for every extract
for row in list_merged: # iterating over the list with all responses
if unique_extract_id == row[1]: # matching with the unique extract ID
list_of_responses_per_extract.append(row[2]) # adding all the responses per single extract
extract_id = row[1] # saving the extract ID
# counting the 2 options in the responses list per extract
opt_1 = list_of_responses_per_extract.count('Omstreden naar huidige maatstaven')
opt_2 = list_of_responses_per_extract.count('Niet omstreden')
check_sum = opt_1 + opt_2
if check_sum < 2: # the sum of 2 options should be no less than 2
extracts_to_filter.append(extract_id)
triples_results_filtered = [] # the list of the filtered responses (18100)
for triple in triples_results: # the original results list (non-filtered) (21800)
# keeping a response only if its extract is not on the negative list (117 extracts)
# and its option is neither 'Weet ik niet' nor 'Onleesbare OCR' (3583)
if triple[1] not in extracts_to_filter and triple[2] != 'Weet ik niet' and triple[2] != 'Onleesbare OCR':
triples_results_filtered.append(triple)
# creating a csv with alpha scores per group and num of annotators (for 2 options)
with open('k_alpha_per_batch_2_options.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(['batch', 'k_alpha_2', 'num_annotators']) #header
for group in group_annotators: # iterating over groups
responses_list = [] # list to store tuples for every group
for triple in triples_results_filtered: # iterating over responses
if triple[0] in group_annotators[group]: # collecting tuples for every group
responses_tuple = (triple[0],triple[1],triple[2])
responses_list.append(responses_tuple)
try:
t = AnnotationTask(data=responses_list)
k_alpha = round(t.alpha(),3)
except ZeroDivisionError: # batch_50 has only 1 annotator
k_alpha = 'zero division'
writer.writerow([group,k_alpha,len(group_annotators[group])])
```
#### Pairwise agreement (pairwise_alpha.csv)
```
with open('pairwise_alpha.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(['batch', 'annotator_1', 'annotator_2', 'alpha']) #header
for group in group_annotators: # iterating over annotators' groups
pairs_in_group = list(combinations(group_annotators[group], 2)) # pairs combinations in every group
for pair in pairs_in_group: # iterating over every pair of annotators in a group
responses_list = [] # all the responses for 2 annotators in a pair
for triple in triples_results: # iterating over the responses
if pair[0] == triple[0]: # matching annotators' IDs
responses_list.append((triple[0],triple[1],triple[2])) # putting all the responses of annotator_1 in a tuple
if pair[1] == triple[0]:
responses_list.append((triple[0],triple[1],triple[2])) # putting all the responses of annotator_2 in a tuple
# alpha for every pair in the group
try:
t = AnnotationTask(data=responses_list)
k_alpha = round(t.alpha(),3)
except ZeroDivisionError:
k_alpha = 'zero division'
writer.writerow([group,pair[0],pair[1],k_alpha])
```
#### Mean K alpha for every annotator (mean_alpha_per_annotator.csv)
```
# reading 'pairwise_alpha.csv'
pairwise_agreement = pd.read_csv("pairwise_alpha.csv")
pairs_in_tuples = [] # list with tuples (pairs of annotators) and alpha per pair
for i,row in pairwise_agreement.iterrows():
pairs_in_tuples.append([(row['annotator_1'],row['annotator_2']),row['alpha']])
ann_mean = [] # list of mean alpha per annotator
for unique in list_of_unique_annotators: # iterating over the list of unique annotators
ann_values = []
for pair in pairs_in_tuples:
if unique in pair[0]: # matching annotators IDs
ann_values.append(float(pair[1])) # putting alpha of single annotator in a list
if ann_values != []: # there's one batch with only 1 annotator, so there's no pair
mean_kappa = round(statistics.mean(ann_values),3)
ann_mean.append([unique,mean_kappa])
# exporting the csv
mean_alpha = pd.DataFrame(ann_mean,columns=['anonymised_participant_id','mean_alpha'])
mean_alpha.to_csv('mean_alpha_per_annotator.csv')
```
#### Percentage agreement (perc_agreement.csv)
```
# Creating 'perc_agreement.csv' with count by responses per extract and % agreement
with open('perc_agreement.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(['extract_id', 'omstreden', 'niet_omstreden',
'weet_ik_niet', 'bad_ocr', 'num_annotators', '%_agree']) #header
for unique_extract_id in list_of_unique_extracts: # iterating over the list of unique extracts IDs
list_of_responses_per_extract = [] # collecting responses for every extract
for row in list_merged: # iterating over the list with all responses
if unique_extract_id == row[1]: # matching with the unique extract ID
list_of_responses_per_extract.append(row[2]) # adding all the responses per single extract
extract_id = row[1] # saving the extract ID
target = row[3] # saving the target word of the extract
# counting every option in the responses list per extract
opt_1 = list_of_responses_per_extract.count('Omstreden naar huidige maatstaven')
opt_2 = list_of_responses_per_extract.count('Niet omstreden')
opt_3 = list_of_responses_per_extract.count('Weet ik niet')
opt_4 = list_of_responses_per_extract.count('Onleesbare OCR')
num_ann = len(list_of_responses_per_extract) # number of annotators per extract
perc_agree = round(max(opt_1,opt_2,opt_3,opt_4)/num_ann * 100) # % agreement
extract_name = f"{target}_{extract_id}" # giving to an extract a new ID with its target word
result_row = [extract_name,opt_1,opt_2,opt_3,opt_4,num_ann,perc_agree]
writer.writerow(result_row)
```
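The `%_agree` column above is simply the majority share of responses. As a standalone helper mirroring the `round(max(...) / num_ann * 100)` line:

```python
def percentage_agreement(option_counts):
    # share of annotators who chose the most frequent option, as a rounded percent
    total = sum(option_counts)
    return round(max(option_counts) / total * 100)

print(percentage_agreement([3, 1, 0, 0]))  # 75
```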
#### How does median K alpha per batch change when annotators with mean alpha < 0.2 are excluded? (k_alpha_per_batch_2_options_filtered_alpha.csv)
```
# list of annotators with alpha < 0.2
low_alpha_annotators = []
for index,row in mean_alpha.iterrows():
if row['mean_alpha'] < 0.2:
low_alpha_annotators.append(str(row['anonymised_participant_id']))
# making a duplicate of group_annotators to remove annotators with low alpha
# (a plain assignment would only alias the dict, so the inner lists are copied too)
no_low_alpha = {group: list(annotators) for group, annotators in group_annotators.items()}
# excluding the low alpha annotators from the batches they annotated
for ids in low_alpha_annotators:
for group in no_low_alpha:
if ids in no_low_alpha[group]:
no_low_alpha[group].remove(ids) # ! it changes no_low_alpha
# re-running calculation of K alpha for 2 options without the low alpha annotators
# creating a csv with alpha scores per group and num of annotators (for 2 options)
# without annotators with mean alpha < 0.2
with open('k_alpha_per_batch_2_options_filtered_alpha.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(['batch', 'k_alpha_2', 'num_annotators']) #header
for group in no_low_alpha: # iterating over groups
responses_list = [] # list to store tuples for every group
for triple in triples_results_filtered: # iterating over responses
if triple[0] in no_low_alpha[group]: # collecting tuples for every group
responses_tuple = (triple[0],triple[1],triple[2])
responses_list.append(responses_tuple)
try:
t = AnnotationTask(data=responses_list)
k_alpha = round(t.alpha(),3)
except ZeroDivisionError: # batches with only 1 annotator
k_alpha = 'zero division'
writer.writerow([group,k_alpha,len(no_low_alpha[group])])
```
#### How does median K alpha change when excluding annotators whose responses to the 5 control questions differ from the unanimous responses of experts (3 or more questions)? (k_alpha_per_batch_2_options_filtered_controls.csv)
```
# gathering all the responses to control samples
all_control_responses = []
for response in triples_results: # list of responses
if 'c' in response[1]: #c is a prefix for control samples
all_control_responses.append([response[0],(response[1],response[2])])
# responses of experts to control questions with 100% agreement
unanimous_responses = [('c0','Omstreden naar huidige maatstaven'),
('c1','Omstreden naar huidige maatstaven'),
('c2','Niet omstreden'),
('c3','Omstreden naar huidige maatstaven'),
('c4', 'Niet omstreden')]
wrong_controls = []
for response in all_control_responses:
if response[1] not in unanimous_responses:
wrong_controls.append(response[0]) # IDs of annotators who got the controls 'wrong'
# counting 'wrong' controls per annotator
wrong_counts = Counter(wrong_controls)
annotators_to_filter = [ann for ann, count in wrong_counts.items() if count >= 3] # 3 or more 'wrong' controls
# making a duplicate of group_annotators to remove annotators who got 3 or more controls 'wrong'
# (copied per group so that the original dict is left untouched)
no_wrong_controls = {group: list(annotators) for group, annotators in group_annotators.items()}
# excluding the annotators with 'wrong' controls from the batches they annotated
for ids in annotators_to_filter:
for group in no_wrong_controls:
if ids in no_wrong_controls[group]:
no_wrong_controls[group].remove(ids) # ! it changes no_wrong_controls
# re-running calculation of K alpha for 2 options without the annotators with 'wrong' controls
# creating a csv with alpha scores per group and num of annotators (for 2 options)
# without annotators with 'wrong' controls
with open('k_alpha_per_batch_2_options_filtered_controls.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(['batch', 'k_alpha_2', 'num_annotators']) #header
for group in no_wrong_controls: # iterating over groups
responses_list = [] # list to store tuples for every group
for triple in triples_results_filtered: # iterating over responses
if triple[0] in no_wrong_controls[group]: # collecting tuples for every group
responses_tuple = (triple[0],triple[1],triple[2])
responses_list.append(responses_tuple)
try:
t = AnnotationTask(data=responses_list)
k_alpha = round(t.alpha(),3)
except ZeroDivisionError: # batches with only 1 annotator
k_alpha = 'zero division'
writer.writerow([group,k_alpha,len(no_wrong_controls[group])])
alpha_per_batch = pd.read_csv('k_alpha_per_batch_2_options.csv')
alpha_per_batch_filtered_low_alpha = pd.read_csv('k_alpha_per_batch_2_options_filtered_alpha.csv')
alpha_per_batch_filtered_controls = pd.read_csv('k_alpha_per_batch_2_options_filtered_controls.csv')
# comparing medians: drop batches without a valid alpha and coerce the column to numeric
for df in (alpha_per_batch, alpha_per_batch_filtered_low_alpha, alpha_per_batch_filtered_controls):
    df.drop(df[df['k_alpha_2'] == 'zero division'].index, inplace=True)
    df['k_alpha_2'] = pd.to_numeric(df['k_alpha_2'])
print(f"Median K alpha:{alpha_per_batch['k_alpha_2'].median()}\n Median K alpha (no annotators with a < 0.2): {alpha_per_batch_filtered_low_alpha['k_alpha_2'].median()}\n Median K alpha (no annotators with 3 or more 'wrong' controls): {alpha_per_batch_filtered_controls['k_alpha_2'].median()}")
```
Copyright (c) Microsoft Corporation.
Licensed under the MIT License.
# Library Imports
```
data_lake_account_name = '' # Synapse Workspace ADLS
file_system_name = 'data'
table_name = "c360_data.preparedinferencedata"
#AML workspace details
subscription_id = ""
resource_group = ""
workspace_name = ""
import azureml.core
#print("SDK Version:", azureml.core.VERSION)
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl.run import AutoMLRun
from azureml.train.automl import AutoMLConfig
from azureml.automl.runtime.onnx_convert import OnnxConverter
from azureml.core.model import Model
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core import Workspace
```
# Connect to AML Workspace
```
# connect to the AML workspace and write its config to a local file
import azureml.core
from azureml.core import Workspace
ws = Workspace(workspace_name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group)
ws.write_config()
from azureml.core.model import Model
import joblib
model_name='lease_renewal_model'
model_path = Model.get_model_path(model_name=model_name, _workspace=ws)
loaded_model = joblib.load(model_path)
print('model loaded!')
sql_str = "SELECT cid ,pid ,uid ,InitialLeaseTerm ,(InitialLeaseTerm + ifnull((num_renewals * avg_renewal_leaseterm),0)) as TotalLeaseTerm,isRenewed, \
WO_AirConditioning ,WO_AirDuctsorVents ,WO_CabinetDoors ,WO_Cabinets ,WO_Carpet , \
WO_Ceiling ,WO_CeilingFan ,WO_CleanUnit ,WO_Cleaning ,WO_Dishwasher ,WO_DoorRepairorReplace ,WO_Doorbell , \
WO_Drain ,WO_Dryer ,WO_Faucet ,WO_FilterReplacement ,WO_GarageRemote ,WO_GarbageDisposal ,WO_Heating ,WO_LightBulbs , \
WO_LightFixture ,WO_LockorExteriorDoor ,WO_LockorInteriorDoor ,WO_Microwave ,WO_Other ,WO_Outlets ,WO_Paint ,WO_PestControl , \
WO_Refrigerator ,WO_ShelvesorDrawers ,WO_ShowerHead ,WO_Sink ,WO_SinkStopper ,WO_SmokeDetector ,WO_StoveorOven ,WO_Switches , \
WO_Thermostat ,WO_Toilet ,WO_TowelBar ,WO_TrashRemoval ,WO_TuborShower ,WO_UnderSink ,WO_Walls ,WO_Washer ,WO_Windows , \
Movein_Easeofprocess ,Movein_OverallSatisfaction ,Movein_StaffCommunication ,Movein_StaffCourtesy ,Movein_StaffProfessionalism , \
Renewal_ConditionofProperty,Renewal_MaintenanceOverallSatisfaction ,Renewal_ManagementCommunication ,Renewal_ManagementCourtesy , \
Renewal_ManagementProblemResolution ,Renewal_ManagementProfessionalism ,Renewal_OverallSatisfaction FROM " + table_name + \
" where isMovedOut = 'N'"
df_test = spark.sql(sql_str)
df_test_pd = df_test.toPandas()
#batch inferencing
drop_cols = ['cid','pid','uid','isRenewed']
df_test = df_test_pd.drop(drop_cols, axis=1)
model_path = Model.get_model_path(model_name = 'lease_renewal_model')
model = joblib.load(model_path)
prediction = model.predict(df_test)
# predict_proba returns a NumPy array of shape (rows, classes), so the positive
# class score is taken by column index (assuming class order ['N', 'Y'])
predict_proba = model.predict_proba(df_test)
df_result = df_test_pd[['cid','pid','uid']]
df_result['prediction'] = prediction
df_result['predict_proba'] = predict_proba[:, 1]
df_sp = spark.createDataFrame(df_result)
df_sp.write.option('header', 'true').mode('overwrite').csv(f'abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/resident360/batchpredictions/')
df_sp.write.mode("overwrite").saveAsTable("c360_data.lease_renewal_predictions_temp")
sql_str = '''select CustomerId, min(prediction) as RenewalPrediction, round(min(predict_proba),4) * 100 as RenewalPredictionScore from c360_data.lease_renewal_predictions_temp as pred
inner join c360_data.customer_profile_ids_combined as ids on pred.cid = ids.cid
group by CustomerId'''
df_pred = spark.sql(sql_str)
df_pred.write.mode("overwrite").saveAsTable("c360_data.lease_renewal_predictions")
df_pred.write.mode("overwrite").saveAsTable("c360_data.lease_renewal_prediction")
spark.sql('drop table c360_data.lease_renewal_predictions_temp')
```
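One detail in the scoring cell above is easy to get wrong: `predict_proba` returns a plain NumPy array of shape (rows, classes), so the positive-class score has to be selected by column index rather than by label. A sketch with fabricated predictions (the class order shown is an assumption):

```python
import numpy as np
import pandas as pd

ids = pd.DataFrame({"cid": [1, 2], "pid": [10, 20], "uid": [100, 200]})
prediction = np.array(["N", "Y"])
proba = np.array([[0.8, 0.2], [0.3, 0.7]])  # columns follow model.classes_ order

df_result = ids.copy()
df_result["prediction"] = prediction
df_result["predict_proba"] = proba[:, 1]    # probability of the positive class
print(df_result)
```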
```
import tensorflow as tf
#import wave
import glob
import scipy.io.wavfile as wavfile
import numpy as np
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
global SMP_RATE
SMP_RATE = 16000
def getWaveName(wavepath):
return wavepath.split('/')[-1]
def findWave(wavefile,path):
r = glob.glob(path+wavefile)
return r[0]
fpath = '../../cmu_artic/female_us_slt/' #desktop
mpath = '../../cmu_artic/male_us_bdl/' #desktop
# fpath = '../data/female_us/'
# mpath = '../data/male_us/'
# map male to female
m2f = dict()
for i in glob.glob(mpath+'/*.wav'):
m2f[i]=findWave(getWaveName(i),fpath)
bin_size_x = (2,3)
bin_size_y = (2,3)
bin_mapping = {}
for filename in glob.glob(mpath+'*.wav'):
fm, wav_data = wavfile.read(filename)
size = wav_data.shape[0]/(1.0*fm)
if size < bin_size_x[1] and size > bin_size_x[0]:
fm2, wav_data2 = wavfile.read(m2f[filename])
size2 = wav_data2.shape[0]/(1.0*fm2)
if size2 < bin_size_x[1] and size2 > bin_size_x[0]:
bin_mapping[filename] = m2f[filename]
def padTo(x,size,kind='SYM'):
"""
Expects 1D array
"""
assert kind in ['LEFT','RIGHT','SYM']
if kind == 'LEFT':
pad = np.zeros(size-len(x),dtype=x.dtype)
r=np.concatenate((pad,x),axis=0)
elif kind == 'RIGHT':
pad = np.zeros(size-len(x),dtype=x.dtype)
r=np.concatenate((x,pad),axis=0)
elif kind == 'SYM':
padl = np.zeros((size-len(x))//2,dtype=x.dtype)
padr = np.zeros((size-len(x))//2+(size-len(x))%2,dtype=x.dtype)
r = np.concatenate((padl,x,padr),axis=0)
return r
class Batch(object):
def __init__(self,mapping,bin_max,batch_size=1):
self.mapping = mapping
self.bin_max = bin_max
self.batch_size = batch_size
self.seq_size = self.bin_max*SMP_RATE
self.shape = (self.batch_size,1,self.seq_size,1)
self.x_list = list(mapping.keys()) # a list so it can be indexed in Python 3
self.size = len(self.x_list) #data size
self.cursors = [(self.size//self.batch_size)*i for i in range(self.batch_size)]
def getBatch(self):
#get the file names
x_path = [self.x_list[c] for c in self.cursors]
y_path = [self.mapping[xp] for xp in x_path]
#Get the x,y files and pad
x = np.empty(shape=(self.batch_size,1,self.seq_size,1))
y = np.empty(shape=(self.batch_size,1,self.seq_size,1))
for i,(xp,yp) in enumerate(zip(x_path,y_path)):
_,x_raw=wavfile.read(xp)
_,y_raw=wavfile.read(yp)
x_pad = padTo(x_raw,self.bin_max*SMP_RATE)
y_pad = padTo(y_raw,self.bin_max*SMP_RATE)
x[i,0,:,0]=x_pad
y[i,0,:,0]=y_pad
#update cursor positions
self.cursors = [(c+1)%self.size for c in self.cursors]
return (x,y)
```
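The `padTo` helper can be exercised on its own. A compact NumPy sketch of the symmetric branch (array lengths must be integers, hence the floor division):

```python
import numpy as np

def pad_to(x, size):
    # symmetric zero-padding; any odd leftover sample goes to the right side
    left = (size - len(x)) // 2
    right = (size - len(x)) - left
    return np.concatenate((np.zeros(left, dtype=x.dtype), x,
                           np.zeros(right, dtype=x.dtype)))

padded = pad_to(np.array([1, 2, 3]), 6)
print(padded.tolist())  # [0, 1, 2, 3, 0, 0]
```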
# Ops
```
def prelu(x,name='prelu'):
in_shape = x.get_shape().as_list()
with tf.variable_scope(name):
alpha = tf.get_variable('alpha',in_shape[-1],
initializer=tf.constant_initializer(0.),
dtype=tf.float32)
pos = tf.nn.relu(x)
neg = alpha * (x-tf.abs(x))*.5
return pos + neg
def lrelu(x, alpha=0.3, name='lrelu'):
return tf.maximum(x, alpha * x, name=name)
def bn(x,ri=[0,1,2],eps=.0001,scaling=True):
xshp=x.get_shape().as_list()
with tf.variable_scope('bn'):
mu = tf.reduce_mean(x,ri,keep_dims=True)
sigma = tf.reduce_mean(tf.square(x-mu),ri,keep_dims=True)
x_hat = (x-mu)/(tf.sqrt(sigma+eps))
if scaling:
gamma = tf.get_variable('gamma',[xshp[-1]],initializer=tf.truncated_normal_initializer(0,.1))
beta = tf.get_variable('beta',[xshp[-1]],initializer=tf.constant_initializer(1.))
y = gamma*x_hat+beta
else:
y = x_hat
return y
b = Batch(mapping=bin_mapping,bin_max=bin_size_x[1],batch_size=32)
save_path = 'modelckpt/disc_raw.ckpt'
tb_path = '../tb_logs/'
```
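The leaky ReLU above is just an elementwise maximum; a NumPy sketch of the same activation (TensorFlow not required):

```python
import numpy as np

def lrelu_np(x, alpha=0.3):
    # passes positive values through, scales negative values by alpha
    return np.maximum(x, alpha * x)

print(lrelu_np(np.array([-2.0, 0.0, 3.0])).tolist())  # [-0.6, 0.0, 3.0]
```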
# Model
```
g = tf.Graph()
with g.as_default():
x = tf.placeholder(tf.float32,shape=b.shape)
y = tf.placeholder(tf.float32,shape=b.shape)
with g.as_default():
def Dis(x,reuse=False):
with tf.variable_scope('Discriminator') as scope:
if reuse:
scope.reuse_variables()
#Down Convolutions
#create variables
with tf.variable_scope('in'):
# Shape (height,width,inchannels,out_channels)
#down convolutions operations
with tf.variable_scope('l1'):
dis_conv1=tf.get_variable('conv1',shape=[1,31,1,1],initializer=tf.truncated_normal_initializer(0,.02))
dis_b1 = tf.get_variable('b1',shape=1,initializer=tf.constant_initializer(0.1))
dis_l1=lrelu(bn(tf.nn.conv2d(x,dis_conv1,strides=[1,1,2,1],padding='SAME')+dis_b1))
with tf.variable_scope('l2'):
dis_conv2=tf.get_variable('conv2',shape=[1,31,1,16],initializer=tf.truncated_normal_initializer(0,.02))
dis_b2 = tf.get_variable('b2',shape=16,initializer=tf.constant_initializer(0.1))
dis_l2=lrelu(bn(tf.nn.conv2d(dis_l1,dis_conv2,strides=[1,1,2,1],padding='SAME')+dis_b2))
with tf.variable_scope('l3'):
dis_conv3=tf.get_variable('conv3',shape=[1,31,16,32],initializer=tf.truncated_normal_initializer(0,.02))
dis_b3 = tf.get_variable('b3',shape=32,initializer=tf.constant_initializer(0.1))
dis_l3=lrelu(bn(tf.nn.conv2d(dis_l2,dis_conv3,strides=[1,1,2,1],padding='SAME')+dis_b3))
with tf.variable_scope('l4'):
dis_conv4=tf.get_variable('conv4',shape=[1,31,32,32],initializer=tf.truncated_normal_initializer(0,.02))
dis_b4 = tf.get_variable('b4',shape=32,initializer=tf.constant_initializer(0.1))
dis_l4=lrelu(bn(tf.nn.conv2d(dis_l3,dis_conv4,strides=[1,1,2,1],padding='SAME')+dis_b4))
with tf.variable_scope('l5'):
dis_conv5=tf.get_variable('conv5',shape=[1,31,32,64],initializer=tf.truncated_normal_initializer(0,.02))
dis_b5 = tf.get_variable('b5',shape=64,initializer=tf.constant_initializer(0.1))
dis_l5=lrelu(bn(tf.nn.conv2d(dis_l4,dis_conv5,strides=[1,1,2,1],padding='SAME')+dis_b5))
with tf.variable_scope('l6'):
dis_conv6=tf.get_variable('conv6',shape=[1,31,64,64],initializer=tf.truncated_normal_initializer(0,.02))
dis_b6 = tf.get_variable('b6',shape=64,initializer=tf.constant_initializer(0.1))
dis_l6=lrelu(bn(tf.nn.conv2d(dis_l5,dis_conv6,strides=[1,1,2,1],padding='SAME')+dis_b6))
with tf.variable_scope('l7'):
dis_conv7=tf.get_variable('conv7',shape=[1,31,64,128],initializer=tf.truncated_normal_initializer(0,.02))
dis_b7 = tf.get_variable('b7',shape=128,initializer=tf.constant_initializer(0.1))
dis_l7=lrelu(bn(tf.nn.conv2d(dis_l6,dis_conv7,strides=[1,1,2,1],padding='SAME')+dis_b7))
dis_last_shape = dis_l7.get_shape()
dis_es=int(dis_last_shape[2])*int(dis_last_shape[3]) #size of encoding
dis_last = tf.reshape(bn(dis_l7),[b.batch_size,dis_es,1,1])
with tf.variable_scope('final_conv'):
dis_conv_na = tf.get_variable('conv',shape=[1,1,1,1],initializer=tf.truncated_normal_initializer(0,.02))
dis_b_na = tf.get_variable('b',shape=[1],initializer=tf.constant_initializer(0.0))
dis_na=tf.nn.conv2d(dis_last,dis_conv_na,strides=[1,1,1,1],padding='SAME')+dis_b_na
dis_na = tf.reshape(dis_na,[b.batch_size,dis_es])
#fully connected
with tf.variable_scope('fc'):
W_fc = tf.get_variable('W',shape=[dis_es,1],initializer=tf.truncated_normal_initializer(0.1))
b_fc = tf.get_variable('b',shape=[1],initializer=tf.constant_initializer(0.1))
#D_out = tf.sigmoid(bn(tf.matmul(dis_last,W_fc)+b_fc,ri=[0,1],scaling=True))
D_out = tf.matmul(dis_na,W_fc)+b_fc
return D_out
```
## Losses and optimization
```
with g.as_default():
x_logit=Dis(x)
y_logit=Dis(y,reuse=True)
x_pred = tf.sigmoid(x_logit)
y_pred = tf.sigmoid(y_logit)
#early losses
loss1 = tf.reduce_mean(tf.square(x_logit)) #regularized loss for x
loss2 = tf.reduce_mean(tf.square(y_logit-1)) #regularized loss for y
loss_early = loss1+loss2
#later
loss3 = tf.reduce_mean(tf.log(x_pred))
loss4 = tf.reduce_mean(tf.log(1-y_pred))
loss_later = loss3+loss4
#earlier optimizer
opt = tf.train.AdamOptimizer(learning_rate=0.0002).minimize(loss_early)
#late optimizer
# dvars = [e for e in g.get_collection('trainable_variables') if 'Discriminator' in e.name]
# # gradient clipping
# optimizer = tf.train.AdadeltaOptimizer(learning_rate=.0002)
# grad_d,var_d = zip(*optimizer.compute_gradients(loss_early,var_list=dvars))
# grad_d_clipped ,_= tf.clip_by_global_norm(grad_d,.5)
# opt_late=optimizer.apply_gradients(zip(grad_d_clipped,var_d))
```
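The 'early' objective above is a least-squares discriminator loss: logits for one class are pushed toward 0 and logits for the other toward 1. A NumPy sketch with fabricated logits:

```python
import numpy as np

def ls_disc_loss(x_logit, y_logit):
    # least-squares discriminator loss: x logits -> 0, y logits -> 1
    return np.mean(np.square(x_logit)) + np.mean(np.square(y_logit - 1))

print(ls_disc_loss(np.array([0.0, 0.0]), np.array([1.0, 1.0])))  # 0.0
print(ls_disc_loss(np.array([1.0, 1.0]), np.array([0.0, 0.0])))  # 2.0
```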
## Summaries
```
with g.as_default():
#tf.summary.scalar('max_gradient',tf.maximum(tf.abs(grad_d)))
tf.summary.scalar('Early_Discriminator_Loss',loss_early)
tf.summary.scalar('Later_Discriminator_Loss',loss_later)
tf.summary.histogram("X_Predictions",x_pred)
tf.summary.histogram("Y_Predictions",y_pred)
merged = tf.summary.merge_all()
```
## Initialize Variables
```
with g.as_default():
init = tf.global_variables_initializer()
```
## Saver
```
with g.as_default():
saver = tf.train.Saver()
```
## Session and writer
```
sess=tf.Session(graph=g)
writer = tf.summary.FileWriter(tb_path,g)
sess.run(init)
losses = []
x_data,y_data = b.getBatch()
fd = {x:x_data,y:y_data}
sess.run(y_pred,feed_dict=fd)
```
# Training
```
num_steps = 300
save_step = 500
for i in tqdm(range(num_steps)):
x_data,y_data = b.getBatch()
fd = {x:x_data,y:y_data}
_,l = sess.run([opt,loss_early],feed_dict=fd)
# _ = sess.run([opt_late],feed_dict=fd)
losses.append(l)
if i % 10 == 0:
summary = sess.run(merged,feed_dict = fd)
writer.add_summary(summary,i)
if i % save_step == 0:
save_path = saver.save(sess, save_path)
plt.plot(losses)
```
# Introduction to Data Science – Lecture 2 – Python
Hi there, welcome to our first coding lecture. We will be using Python, a popular data science programming language in the lectures, homeworks, and projects. As part of Homework 0, you should have already setup Python, IPython and Jupyter notebooks, so it's time to get started!
## Executing your first program
Now it's time to run python! Open a terminal and execute:
```bash
$ python
```
You'll see something like this:
```bash
$ python
Python 3.9.7 (default, Sep 16 2021, 08:50:36)
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information
>>>
```
What does this tell us? It shows the version number of Python (3.9.7), and we can see that we've installed Anaconda. At the end of this output you see the three `>>>` signs: this is a prompt, but it looks different from your console prompt (`$` or `%`) to indicate that you're in an interactive Python environment.
There are two fundamental ways you can run Python: in interactive mode (what we're doing here) or in batch mode.
In interactive mode you write your program interactively, i.e., each new statement is interpreted as you type it.
If you just run ```python``` without any other parameter, you enter the **interactive** mode. Let's write our very first program:
```python
>>> print("Hello World!")
Hello World!
```
**Note:** If you copy this code, don't include the leading `>>>`. We only show these here because it allows us to distinguish input from output.
"Hello World!" is by tradition the very first program that you should write in a new programming language! And see, when we instructed python to print the text "Hello World!", it did just that.
So, let's briefly take that statement apart: it contains a call to the `print()` function and passes a parameter to that print function, the string `Hello World!`.
The string is enclosed in quotation marks `"`; alternatively you can also use single quotes `'`. Given that information, python knows you want to print the string, and it does exactly that.
Print is a built-in function of python. There are many useful built-in functions, which you can check out [here](https://docs.python.org/3/library/functions.html).
If you're familiar with Python, you might have seen this syntax:
```python
>>> print "Hello World!"
```
This is Python 2 syntax and no longer legal in Python 3: all arguments of a function now have to be passed in parentheses. Python 2 is now [officially retired](https://www.python.org/doc/sunset-python-2/), so you should not be using it anymore.
Let's define our first variable. Type
```python
>>> my_string_var = "Are you still spinning?"
```
This statement is executed without any feedback. Intuitively, what you're doing here is: first you create a new variable of type string with the name ```my_string_var```, and then you assign the value "Are you still spinning?" to it.
Note that the equals sign `=` is NOT a test for equality here, but an ASSIGNMENT. This can be confusing for beginning programmers.
Equality is tested with a double equals sign `==` in many programming languages including python. Arguably, a different assignment operator such as `:=` would be a better idea and is implemented in other programming languages.
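A quick way to see the difference: `=` changes the variable, while `==` only evaluates to a boolean and leaves the variable untouched:

```python
x = 5           # assignment: the name x now refers to 5
check = x == 5  # comparison: evaluates to a boolean, x is unchanged
print(check)    # True
print(x == 6)   # False
```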
We now can print this variable:
```python
>>> print(my_string_var)
Are you still spinning?
```
which produces the result we expected!
There are many different types of variables, not only strings. For example, Python has three built-in numeric types: integers, floats (which represent real numbers), and complex numbers. Check out the details about the built-in data types [here](https://docs.python.org/3/library/stdtypes.html).
Let's start with a simple example:
```python
>>> a = 3
>>> b = 2.5
>>> c = a + b
>>> print(c)
5.5
```
Here we've created three variables (`a, b, c`) and executed an operation, the addition of `a` and `b` using the `+` operator, which we have then assigned to `c`. Finally, we've printed `c`.
The data types of `a` and `b`, however, are subtly different. `a` is an integer and `b` is a float. We can check the data type of any variable using the `type()` function:
```python
>>> a = 3
>>> type(a)
<class 'int'>
>>> b = 2.5
>>> type(b)
<class 'float'>
>>> c = "hello"
>>> type(c)
<class 'str'>
```
Python supports many operations, including mathematical operations (addition, subtraction, division, modulo), type conversions, etc. – we'll explore those soon.
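A few of those operations in action (true division, floor division, modulo, and explicit type conversions):

```python
print(7 / 2)      # 3.5  (true division always returns a float)
print(7 // 2)     # 3    (floor division)
print(7 % 2)      # 1    (modulo: the remainder)
print(int("42"))  # 42   (string -> integer conversion)
print(float(3))   # 3.0  (integer -> float conversion)
```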
## Writing code in a file
Let's look at another way to run python: by executing a file. Exit the interactive environment, by calling the exit function:
```python
exit()
```
Now, open up your favorite text editor (if you don't have one, check out, e.g., [Sublime](https://www.sublimetext.com/)) and create a new file called "first_steps.py". We've created such a file for you [here](first_steps.py).
You can also copy and paste this code into the file:
```python
def double_number(a):
# btw, here is a comment! Use the # symbol to add comments or temporarily remove code
# shorthand operator for 'a = a * 2'
a *= 2
return a
print(double_number(3))
print(double_number(14.22))
```
Here we've also defined our first function! We'll go into details about functions at a later time. For now, just notice that the indentation matters!
Now, run
```bash
$ python first_steps.py
6
28.44
```
What happened here? Python executed the commands in the file and then terminated. You saw the result, but the run was no longer interactive: the whole program executed in a couple of milliseconds.
Larger programs are commonly written in source code files and are not run interactively. They will read data from files, wait for user input, etc.
In this class, we will neither work with the interactive mode nor with straight-up python files much, but instead will use Jupyter Notebooks, which we'll look at next!
# Lesson 4 - Euler–Maclaurin evaluation
time: 30 min
## Learning outcomes
Python:
- lambda functions
- recursive functions
SageMath:
- symbolic and numerical integrals
- plotting
To check on the Riemann Hypothesis we need to be able to evaluate $\zeta$
to the left of the line $\Re(s) = 1$. We cannot rely on the defining sum
there, but need a method of analytic continuation, for instance
the Euler–Maclaurin formula for $\zeta$.
Letting $f(n)=n^{-s}$, and $b=\infty$ and using that
$$
f^{(j)}(x)=(-1)^j (s)_j x^{-s-j},
$$
where $(s)_k = s(s+1)\cdots (s+k-1)$ is the **Pochhammer** symbol:
$$
(s)_{k} = \frac{\Gamma(s+k)}{\Gamma(s)}
$$
we obtain the specialized E-M formula, valid for $\Re(s)>1$:
$$
\sum_{n=a}^{\infty} n^{-s} = \frac{1}{s-1} a^{1-s} + \frac{a^{-s}}{2} -
\sum_{j=2}^{k}\frac{(s)_{j-1} (-1)^{j-1}B_{j}}{j!}a^{-s-j+1}
- \frac{(s)_k}{k!} \int_{a}^{\infty} B_{k}(x-[x]) x^{-s-k} dx.
$$
and we conclude that
$$
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} = \sum_{n=1}^{a-1} n^{-s}
+
\frac{1}{s-1} a^{1-s} + \frac{a^{-s}}{2} -
\sum_{j=2}^{k}\frac{(s)_{j-1} (-1)^{j-1}B_{j}}{j!}a^{-s-j+1}
- \frac{(s)_k}{k!} \int_{a}^{\infty} B_{k}(x-[x])x^{-s-k} dx.
$$
And here the right-hand side is convergent for $\Re(s)>1-k$,
and moreover the error term decays as both $k$ and $a$ grow.
This formula is now in a form that can be used to approximate $\zeta$.
To evaluate zeta at the half-line we thus need to have k at least 1.
Let's experiment with k=a=1.
```
def EM_main_terms1(s):
    """
    Euler–Maclaurin with k=1 terms
    """
    term1 = -1 / (1 - s)
    term2 = -bernoulli(1)  # This is really just 1/2
    return term1 + term2
EM_main_terms1(2.0)
zeta(2.0)
```
To see that we get all terms correct we can evaluate the integral numerically.
The `numerical_integral` in Sage returns the value and an error estimate.
```
numerical_integral(x^-3, 1, 100)
(x^-3).nintegral(x, 1, 100)
```
It is also possible to do symbolic integration:
```
%display latex
integral(x^(-3), x)
```
## Integration and lambda functions
Sometimes it is useful to have an "anonymous" function that is only used once. These are called $\lambda$-functions (lambdas). Although lambdas can usually be avoided, they are often convenient.
```
f = lambda x: sin(x)^2
f(x)
# Unfortunately they don't work in symbolic integrals
integral(lambda x: sin(x)^2, x)
# But we can just use the symbolic function directly
integral(sin(x)^2, x)
# And in numerical integration
numerical_integral(lambda x: sin(x)^2, 0, 2*pi)
numerical_integral(f, 0, 2*RR.pi())
plot([f, lambda y: numerical_integral(f, 0, y)[0]], (0, 2*RR.pi()))
def ERR1(s):
    f = lambda t: (t - floor(t) - 1/2) * t**(-s - 1)
    value, error_estimate = numerical_integral(f, 1, infinity)
    return -s * value, -s * error_estimate
ERR1(2.)
RR(zeta(2)) - (EM_main_terms1(2.0) + ERR1(2.0)[0])
```
## Recursive functions
A recursive function is a function that "calls itself".
Some (mathematical) functions are natural candidates for writing as recursive functions. For instance the factorial function:
```
def factorial_recursive(n):
    if n <= 1:  # base case; also covers n = 0, since 0! = 1
        return 1
    return n * factorial_recursive(n - 1)
factorial_recursive(100)
```
**Exercise**
Write a recursive function that computes the Pochhammer symbol
$$
(s)_k = s(s+1)(s+2)\cdots (s+k-1)
$$
$$
(s)_{0}=1
$$
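For reference, the recursion only needs the base case $(s)_0 = 1$ and one multiplicative step. A minimal plain-Python sketch (the function name is our own choice; in Sage the same definition works verbatim):

```python
def pochhammer(s, k):
    """Recursive Pochhammer symbol (s)_k = s(s+1)...(s+k-1), with (s)_0 = 1."""
    if k == 0:
        return 1  # base case: the empty product
    return (s + k - 1) * pochhammer(s, k - 1)

print(pochhammer(3, 4))  # 3*4*5*6 = 360
```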
**Exercise**
- Write a function that evaluates $\zeta(s)$ using the E-M formula (in other words, ignore the integral "error term").
- Input should be a complex number `s` and two parameters `a` and `k`.
**Additional Exercise (for later)**
- By comparing with the Builtin `zeta` determine the errors for various values of `a` and `k` and make a (combined) plot to illustrate that changing `a` doesn't affect the result as much as changing `k`.
There is in fact also a recursive definition of $B_k$:
$B_0 = 1$ and for $k\ge1$ we have
$$
B_k = - \sum_{j=0}^{k-1} {k \choose j} \frac{B_j}{k-j+1}
$$
**Additional Exercise (for later)**
Write a recursive function to compute $B_k$.
```
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
!python -m pip install -q 'git+https://github.com/facebookresearch/detectron2.git'
import pandas as pd
import numpy as np
from tqdm import tqdm  # progress bar
from datetime import datetime
import time
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
import os, json, cv2, random
import skimage.io as io
import copy
from pathlib import Path
from typing import Optional
import itertools
import torch
import albumentations as A
from albumentations.pytorch.transforms import ToTensorV2
from glob import glob
import numba
from numba import jit
import warnings
warnings.filterwarnings('ignore')  # Ignore "future" warnings and DataFrame-slicing warnings.

# detectron2
from detectron2.structures import BoxMode
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader, build_detection_train_loader
from detectron2.data import detection_utils as utils
import detectron2.data.transforms as T
from detectron2.engine import DefaultPredictor, DefaultTrainer, launch
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.utils.visualizer import ColorMode, Visualizer
from detectron2.utils.logger import setup_logger
setup_logger()
# !pip install orjson
import json

with open("/kaggle/input/sartorius-coco-dataset-notebook/train_dataset.json") as f:
    val_data = json.loads(f.read())

all_val_img = []
for file_block in range(len(val_data["images"])):
    all_val_img.append(val_data["images"][file_block]["file_name"].replace("..", "/kaggle"))
len(all_val_img)
Data_Resister_training = "sartorius_Cell_train"
Data_Resister_valid = "sartorius_Cell_valid"
from detectron2.data.datasets import register_coco_instances
# dataDir=Path('/kaggle/input/sartorius-cell-instance-segmentation')
register_coco_instances(Data_Resister_training,{}, '/kaggle/input/sartorius-coco-dataset-notebook/train_dataset.json', '')
register_coco_instances(Data_Resister_valid,{},'/kaggle/input/sartorius-coco-dataset-notebook/val_dataset.json', '')
metadata = MetadataCatalog.get(Data_Resister_training)
dataset_train = DatasetCatalog.get(Data_Resister_training)
dataset_valid = DatasetCatalog.get(Data_Resister_valid)
# dataset_valid[2]
fig, ax = plt.subplots(figsize =(18,11))
d=dataset_valid[2]
img = cv2.imread(d["file_name"])
print(img.shape)
v = Visualizer(img[:, :, ::-1],
               metadata=metadata,
               scale=1,
               instance_mode=ColorMode.IMAGE_BW  # remove the colors of unsegmented pixels; only available for segmentation models
               )
out = v.draw_dataset_dict(d)
ax.grid(False)
ax.axis('off')
ax.imshow(out.get_image()[:, :, ::-1])
from detectron2.data import transforms as T
# Define a sequence of augmentations:
augs = T.AugmentationList([
    T.RandomLighting(0.9)
    # T.RandomBrightness(0.9, 1.1),
    # T.RandomFlip(prob=0.5),
    # T.RandomRotation(-15, 15),
    # T.RandomCrop("relative", (0.88, 0.9)),
    # T.RandomCrop("absolute", (500, 600))
])  # type: T.Augmentation
# Define the augmentation input ("image" required, others optional):
input = T.AugInput(img)
# Apply the augmentation:
transform = augs(input) # type: T.Transform
image_transformed = input.image # new image
print(image_transformed.shape)
plt.imshow(image_transformed)
def custom_mapper(dataset_dict):
    dataset_dict = copy.deepcopy(dataset_dict)
    image = utils.read_image(dataset_dict["file_name"], format="BGR")
    transform_list = [
        # T.RandomBrightness(0.9, 1.1),
        # T.RandomContrast(0.9, 1.1),
        # T.RandomSaturation(0.9, 1.1),
        # T.RandomLighting(0.9),
        T.ResizeScale(0.8, 1.4, 520, 704),
        T.RandomCrop("relative", (0.88, 0.9)),
        T.RandomFlip(prob=0.5, horizontal=False, vertical=True),
        T.RandomFlip(prob=0.5, horizontal=True, vertical=False),
    ]
    image, transforms = T.apply_transform_gens(transform_list, image)
    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
    annos = [
        utils.transform_instance_annotations(obj, transforms, image.shape[:2])
        for obj in dataset_dict.pop("annotations")
        if obj.get("iscrowd", 0) == 0
    ]
    # instances = utils.annotations_to_instances(annos, image.shape[:2])
    instances = utils.annotations_to_instances(annos, image.shape[:2], mask_format="bitmask")
    dataset_dict["instances"] = utils.filter_empty_instances(instances)
    return dataset_dict
class AugTrainer(DefaultTrainer):
    @classmethod
    def build_train_loader(cls, cfg):
        return build_detection_train_loader(cfg, mapper=custom_mapper)

    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        return MAPIOUEvaluator(dataset_name)
# Taken from https://www.kaggle.com/theoviel/competition-metric-map-iou
from detectron2.evaluation.evaluator import DatasetEvaluator
import pycocotools.mask as mask_util
def precision_at(threshold, iou):
    # iou has shape (num_predictions, num_ground_truths)
    matches = iou > threshold
    true_positives = np.sum(matches, axis=1) == 1   # predictions matching exactly one ground-truth object
    false_positives = np.sum(matches, axis=1) == 0  # extra predictions matching nothing
    false_negatives = np.sum(matches, axis=0) == 0  # ground-truth objects that were missed
    return np.sum(true_positives), np.sum(false_positives), np.sum(false_negatives)
def score(pred, targ):
    pred_masks = pred['instances'].pred_masks.cpu().numpy()
    enc_preds = [mask_util.encode(np.asarray(p, order='F')) for p in pred_masks]
    enc_targs = list(map(lambda x: x['segmentation'], targ))
    ious = mask_util.iou(enc_preds, enc_targs, [0] * len(enc_targs))
    prec = []
    for t in np.arange(0.5, 1.0, 0.05):
        tp, fp, fn = precision_at(t, ious)
        p = tp / (tp + fp + fn)
        prec.append(p)
    return np.mean(prec)
class MAPIOUEvaluator(DatasetEvaluator):
    def __init__(self, dataset_name):
        dataset_dicts = DatasetCatalog.get(dataset_name)
        self.annotations_cache = {item['image_id']: item['annotations'] for item in dataset_dicts}

    def reset(self):
        self.scores = []

    def process(self, inputs, outputs):
        for inp, out in zip(inputs, outputs):
            if len(out['instances']) == 0:
                self.scores.append(0)
            else:
                targ = self.annotations_cache[inp['image_id']]
                self.scores.append(score(out, targ))

    def evaluate(self):
        return {"MaP IoU": np.mean(self.scores)}
class Trainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        return MAPIOUEvaluator(dataset_name)
os.makedirs("detectron2cell/output", exist_ok=True)  # don't fail if the directory already exists
cfg = get_cfg()
config_name = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
cfg.merge_from_file(model_zoo.get_config_file(config_name))
cfg.DATASETS.TRAIN = (Data_Resister_training,)
cfg.DATASETS.TEST = (Data_Resister_valid,)
# cfg.MODEL.WEIGHTS ="/kaggle/input/detectron2cell/output/model_final.pth"
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_name)
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 64 # 64 is slower but more accurate (128 faster but less accurate)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3
cfg.SOLVER.IMS_PER_BATCH = 2 #(2 is per defaults)
cfg.INPUT.MASK_FORMAT='bitmask'
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
# cfg.SOLVER.LR_SCHEDULER_NAME = 'WarmupCosineLR'
cfg.SOLVER.BASE_LR = 0.0005 #(quite high base learning rate but should drop)
#cfg.SOLVER.MOMENTUM = 0.9
#cfg.SOLVER.WEIGHT_DECAY = 0.0005
#cfg.SOLVER.GAMMA = 0.1
cfg.SOLVER.WARMUP_ITERS = 10 #How many iterations to go from 0 to reach base LR
cfg.SOLVER.MAX_ITER = 10000 #Maximum of iterations 1
cfg.SOLVER.STEPS = (500, 1000) #At which point to change the LR 0.25,0.5
cfg.TEST.EVAL_PERIOD = 250
cfg.SOLVER.CHECKPOINT_PERIOD=250
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = AugTrainer(cfg) # with data augmentation
# trainer = Trainer(cfg) # without data augmentation
trainer.resume_or_load(resume=False)
trainer.train()
```
<img src = "./media/walmart.png" width = 400 height = 400>
```
import pandas as pd
sales = pd.read_csv('./dataset/walmart_data.csv')
sales
sales.drop(['MarkDown1', 'MarkDown2', 'MarkDown3', 'MarkDown4', 'MarkDown5', 'Size'], inplace=True, axis=1)
sales
sales.rename(columns={'Store': 'store',
                      'Type': 'type',
                      'Dept': 'department',
                      'Date': 'date',
                      'Weekly_Sales': 'weekly_sales',
                      'IsHoliday': 'is_holiday',
                      'Temperature': 'temperature_c',
                      'Fuel_Price': 'fuel_price_usd_per_l',
                      'Unemployment': 'unemployment'}, inplace=True)
sales
column_names = ['store',
'type',
'department',
'date',
'weekly_sales',
'is_holiday',
'temperature_c',
'fuel_price_usd_per_l',
'unemployment']
sales = sales.reindex(columns=column_names)
sales
sales.to_csv('./dataset/walmart_sales.csv', index= False)
```
### Avoiding double counting
#### Dropping duplicate names
```
sales.drop_duplicates(subset="type")
sales.drop_duplicates(subset="store").head()
```
#### Dropping duplicate pairs
```
# Drop duplicate store/type combinations
store_types = sales.drop_duplicates(subset = ['store', 'type'])
store_types.head()
# Drop duplicate store/department combinations
store_depts = sales.drop_duplicates(subset = ['store', 'department'])
store_depts.head()
# Subset the rows where is_holiday is True and drop duplicate dates
holiday_dates = sales[sales['is_holiday']].drop_duplicates('date')
# Print date col of holiday_dates
holiday_dates['date']
```
#### Counting categorical variables
```
# Count the number of stores of each type
store_counts = store_types['type'].value_counts()
store_counts
# Get the proportion of stores of each type
store_props = store_types.type.value_counts(normalize = True)
print(store_props)
# Count the number of each department number and sort
dept_counts_sorted = store_depts['department'].value_counts(sort = True)
print(dept_counts_sorted)
```
# Grouped summary statistics
#### Summaries by group
##### Task: What percent of sales occurred at each store type?
```
# Calc total weekly sales
sales_all = sales["weekly_sales"].sum()
# Subset for type A stores, calc total weekly sales
sales_A = sales[sales["type"] == "A"]["weekly_sales"].sum()
# Subset for type B stores, calc total weekly sales
sales_B = sales[sales["type"] == "B"]["weekly_sales"].sum()
# Subset for type C stores, calc total weekly sales
sales_C = sales[sales["type"] == "C"]["weekly_sales"].sum()
# Get proportion for each type
sales_propn_by_type = [sales_A, sales_B, sales_C] / sales_all
print(sales_propn_by_type)
# Group by type; calc total weekly sales
sales_by_type = sales.groupby("type")["weekly_sales"].sum()
# Get proportion for each type
sales_propn_by_type = sales_by_type / sum(sales_by_type)
print(sales_propn_by_type)
```
#### Multiple grouped summaries
```
# Import numpy with the alias np
import numpy as np
# For each store type, aggregate weekly_sales: get min, max, mean, and median
sales_stats = sales.groupby('type')['weekly_sales'].agg([np.min, np.max, np.mean, np.median])
# Print sales_stats
print(sales_stats)
# For each store type, aggregate unemployment and fuel_price_usd_per_l: get min, max, mean, and median
unemp_fuel_stats = sales.groupby('type')[['unemployment', 'fuel_price_usd_per_l']].agg([np.min, np.max, np.mean, np.median])
# Print unemp_fuel_stats
unemp_fuel_stats
```
# Pivot Table
- Get the mean `weekly_sales` by `type` using `.pivot_table()` and store as `mean_sales_by_type`
```
# Pivot for mean weekly_sales for each store type
mean_sales_by_type = sales.pivot_table(values = 'weekly_sales', index ='type')
# Print mean_sales_by_type
print(mean_sales_by_type)
```
- Get the mean and median (using NumPy functions) of `weekly_sales` by `type` using `.pivot_table()` and store as `mean_med_sales_by_type`
```
# Import NumPy as np
import numpy as np
# Pivot for mean and median weekly_sales for each store type
mean_med_sales_by_type = sales.pivot_table(values = 'weekly_sales', index= 'type', aggfunc = [np.mean, np.median])
# Print mean_med_sales_by_type
print(mean_med_sales_by_type)
```
- Get the mean of `weekly_sales` by `type` and `is_holiday` using `.pivot_table()` and store as `mean_sales_by_type_holiday`.
```
# Pivot for mean weekly_sales by store type and holiday
mean_sales_by_type_holiday = sales.pivot_table(values = 'weekly_sales', index = 'type', columns = 'is_holiday')
# Print mean_sales_by_type_holiday
print(mean_sales_by_type_holiday)
```
#### Fill in missing values and sum values with pivot tables
- Print the mean `weekly_sales` by `department` and `type`, filling in any missing values with `0`.
```
# Print mean weekly_sales by department and type; fill missing values with 0
sales.pivot_table(values='weekly_sales', index='department', columns='type', fill_value=0).head()
```
- Print the mean `weekly_sales` by `department` and `type`, filling in any missing values with `0` and summing all rows and columns.
```
# Print the mean weekly_sales by department and type; fill missing values with 0s; sum all rows and cols
sales.pivot_table(values="weekly_sales", index="department", columns="type", fill_value = 0, margins = True)
```
```
import matplotlib.pyplot as plt
import seaborn as sns
import os
import pandas as pd
import numpy as np
%matplotlib inline
sns.set_context('talk')
sns.set_style('ticks')
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10,6)
sim_name = 'test_schedule_v7'
outdir = f'fig/{sim_name}'
if not os.path.exists(outdir):
    os.makedirs(outdir)  # makedirs also creates the parent 'fig/' directory if needed
%run ../bin/analyze_sim.py test_schedule_v7
df = df_read_from_sqlite(sim_name, tablename='Summary', directory='sims')
df.columns
grp_night = df.groupby('night')
nobs_night = grp_night['expDate'].agg(len)
plt.bar(nobs_night.index, nobs_night.values)
plt.xlabel('Night of Year')
plt.ylabel('Number of Observations')
plt.xlim(0,360)
sns.despine()
plt.savefig(f'fig/{sim_name}/nobs_bynight.png')
plt.figure(figsize=(8,8))
grp_filter = df.groupby('filter')
nobs_perfilter = grp_filter['expDate'].agg(len)
plt.pie(nobs_perfilter, labels=nobs_perfilter.index, autopct='%.0f%%')
plt.savefig(f'fig/{sim_name}/nobs_byfilter.png')
plt.figure(figsize=(8,8))
grp_program = df.groupby('propID')
nobs_perprogram = grp_program['expDate'].agg(len)
plt.pie(nobs_perprogram, labels=nobs_perprogram.index, autopct='%.0f%%')
plt.savefig(f'fig/{sim_name}/nobs_byprogram.png')
plt.hist(df.loc[df.slewTime.notnull(),'slewTime'],bins=np.linspace(10,25,50))
plt.xlabel('Time Between Exposures (s)')
plt.ylabel('Number of Slews')
sns.despine()
plt.savefig(f'fig/{sim_name}/slew_time_hist.png')
plt.hist(np.degrees(df.loc[df.slewTime.notnull(),'slewDist']),bins=np.linspace(0,50,75))
plt.xlabel('Slew Distance (deg)')
plt.ylabel('Number of Slews')
sns.despine()
plt.savefig(f'fig/{sim_name}/slew_distance_hist.png')
from sqlalchemy import create_engine
engine = create_engine('sqlite:///../data/ptf.db')
ptf_df = pd.read_sql('Summary', engine)
from astropy.time import Time
iptf_start = Time('2013-01-01').mjd
wiptf = ptf_df.expMJD >= iptf_start
ptf_df.columns
_ = plt.hist(ptf_df[wiptf].airmass, bins=np.linspace(1, 3.5, 100), histtype='step', density=True, label='iPTF')
plt.hist(df.airmass,bins=np.linspace(1,3,100))
plt.xlabel('Airmass')
plt.ylabel('Number of Images')
sns.despine()
plt.savefig(f'fig/{sim_name}/airmass_hist.png')
print(df.airmass.max())
print(np.sum(df.airmass > 2.5))
bins = np.linspace(1,3,100)
_ = plt.hist(df.airmass, bins=bins, density=True, label='ZTF')
_ = plt.hist(ptf_df[~wiptf].airmass, bins=bins, histtype='step', density=True, label='PTF', linewidth=2)
_ = plt.hist(ptf_df[wiptf].airmass, bins=bins, histtype='step', density=True, label='iPTF', color='black', linewidth=2)
plt.xlabel('Airmass')
plt.ylabel('Normalized Number of Images')
plt.legend()
sns.despine()
plt.savefig(f'fig/{sim_name}/airmass_hist_vs_ptf.png')
intranight_grp = df.groupby(['night','propID','fieldID'])
intranight_gap = intranight_grp['expMJD'].agg(lambda x: np.median(np.diff(x)))
gap = intranight_gap*24.
gap = gap.reset_index()
grp_np = gap.groupby(['night','propID'])
intranight_gap_night_prop = grp_np.agg(np.median)
intranight_gap_night_prop = intranight_gap_night_prop.reset_index()
for prop in [1, 2, 3]:
    w = intranight_gap_night_prop['propID'] == prop
    plt.scatter(intranight_gap_night_prop.loc[w, 'night'], intranight_gap_night_prop.loc[w, 'expMJD'], label=prop)
plt.legend()
plt.xlabel('Day of Year')
plt.ylabel('Median Time Between Observations (hours)')
plt.savefig(f'fig/{sim_name}/intranight_gap_by_doy.png')
plt.hist(gap.loc[gap['expMJD'].notnull(),'expMJD'],bins=np.linspace(0.2,4,100))
plt.xlabel('Median Time Between Observations of a Field (hours)')
plt.ylabel('Number of Request Sets')
sns.despine()
plt.savefig(f'fig/{sim_name}/intranight_gap_hist.png')
night_min = intranight_grp['expMJD'].agg(np.min)
night_max = intranight_grp['expMJD'].agg(np.max)
night_max = night_max.reset_index('night')
night_min = night_min.reset_index('night')
# not quite right--need to subtract night from night
#intranight_gap = night_max.values[:-1] - night_min.values[1:]
intranight_gap = {}
for idx in night_min.index:
    intranight_gap[idx] = np.median(night_min.loc[idx, 'expMJD'].values[1:] - night_max.loc[idx, 'expMJD'].values[:-1])
intranight_df = pd.DataFrame.from_dict(intranight_gap,orient='index')
intranight_df = intranight_df.reset_index()
intranight_df['propID'] = intranight_df['index'].apply(lambda x: x[0])
intranight_df['fieldID'] = intranight_df['index'].apply(lambda x: x[1])
intranight_df.head()
for prop in [1, 2, 3]:
    w = (intranight_df['propID'] == prop) & intranight_df[0].notnull()
    plt.hist(intranight_df.loc[w, 0], bins=np.arange(0, 30), label=f'{prop}', histtype='step', linewidth=2)
plt.legend()
plt.xlabel('Median Nights between Observations of a Field')
plt.ylabel('Number of Fields')
sns.despine()
plt.savefig(f'fig/{sim_name}/internight_gap_hist.png')
plt.hist(np.degrees(df.dist2Moon))
```
# LAB 4a: Creating a Sampled Dataset.
**Learning Objectives**
1. Set up the environment
1. Sample the natality dataset to create train/eval/test sets
1. Preprocess the data in Pandas dataframe
## Introduction
In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample.
We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4a_sample_babyweight.ipynb).
## Set up environment variables and load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
```
%%bash
python3 -m pip freeze | grep google-cloud-bigquery==1.6.1 || \
python3 -m pip install google-cloud-bigquery==1.6.1
```
Import necessary libraries.
```
from google.cloud import bigquery
import pandas as pd
```
## Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
```
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # Replace with your PROJECT
```
## Create ML datasets by sampling using BigQuery
We'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
```
bq = bigquery.Client(project = PROJECT)
```
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
```
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
```
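The repeatable-splitting idea behind these buckets can be sketched outside BigQuery as well. In the minimal plain-Python sketch below, `hashlib.md5` stands in for `FARM_FINGERPRINT` (an assumption purely for illustration); the bucket boundaries mirror the 80/10/10 split defined above:

```python
import hashlib

def bucket_index(key, modulo_divisor=100):
    """Deterministically map a key to a bucket in [0, modulo_divisor)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % modulo_divisor

def assign_split(key, train_buckets=80, eval_buckets=10):
    """Repeatable train/eval/test assignment based on the hash bucket."""
    bucket = bucket_index(key)
    if bucket < train_buckets:
        return "train"
    if bucket < train_buckets + eval_buckets:
        return "eval"
    return "test"

# The assignment is repeatable: the same key always lands in the same split.
assert assign_split("2005-07-CA") == assign_split("2005-07-CA")
```

Because the split depends only on the hashed key, re-running the pipeline reproduces exactly the same train/eval/test membership.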
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
```
def display_dataframe_head_from_query(query, count=10):
    """Displays count rows from dataframe head from query.

    Args:
        query: str, query to be run on BigQuery, results stored in dataframe.
        count: int, number of results from head of dataframe to display.
    Returns:
        Dataframe head with count number of results.
    """
    df = bq.query(
        query + " LIMIT {limit}".format(limit=count)).to_dataframe()
    return df.head(count)
```
For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
```
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
```
Using `COALESCE` would provide the same result as the nested `CASE WHEN`, and is preferable when all we want is the first non-null value. To be precise, the `CASE WHEN` above would become `COALESCE(day, wday, 0) AS date`. You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions).
The next query combines our hash columns and leaves us with just our label, features, and hash values.
```
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
```
The next query finds the count of each of the 657,484 unique `hash_values`. This is our first step toward making actual hash buckets for our split via the `GROUP BY`.
```
# Get the counts of each of the unique hashs of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
```
The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
```
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
```
The raw record counts make it hard to judge the split, so in the next query we normalize each bucket's count into a percentage of the data.
```
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
```
We'll now select the range of buckets to be used in training.
```
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
```
We'll do the same by selecting the range of buckets to be used for evaluation.
```
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
```
Lastly, we'll select the hash buckets to be used for the test split.
```
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
```
In the below query, we'll `UNION ALL` all of the datasets together so that all three sets of hash buckets will be within one table. We added `dataset_id` so that we can sort on it in the query after.
```
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
```
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
```
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
```
## Lab Task #1: Sample BigQuery dataset.
Sample the BigQuery result set (above) so that you have approximately 8,000 training examples and 1000 evaluation examples.
The training and evaluation datasets have to be well-distributed (not all the babies are born in Jan 2005, for example)
and should not overlap (no baby is part of both training and evaluation datasets).
Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train/eval/test sets do not overlap and takes a subsample of our global splits.
```
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = # TODO: Experiment with values to get close to target counts
# TODO: Replace FUNC with correct function to split with
# TODO: Replace COLUMN with correct column to split on
splitting_string = "ABS(FUNC(COLUMN, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
    """Creates a dataframe with a sample of a data split.

    Args:
        query_string: str, query to run to generate splits.
        splitting_string: str, modulo string to split by.
        lo: float, lower bound for bucket filtering for split.
        up: float, upper bound for bucket filtering for split.
    Returns:
        Dataframe containing data split sample.
    """
    query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
        query_string, splitting_string, int(lo), int(up))
    df = bq.query(query).to_dataframe()
    return df

train_df = create_data_split_sample_df(
    query_string, splitting_string,
    lo=0, up=train_percent)

eval_df = create_data_split_sample_df(
    query_string, splitting_string,
    lo=train_percent, up=train_percent + eval_percent)

test_df = create_data_split_sample_df(
    query_string, splitting_string,
    lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
```
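The splitting idea used above — hash a column, take the result modulo a divisor, and bucket rows by the remainder — can be sketched in plain Python. This is only an illustration, not the lab solution: it uses `hashlib.md5` as a stand-in for BigQuery's hash function, and the bucket boundaries are assumed example values.

```python
import hashlib

def hash_bucket(key, modulo_divisor=100):
    """Deterministically map a key to an integer bucket in [0, modulo_divisor)."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % modulo_divisor

def assign_split(key, train_percent=80, eval_percent=10, modulo_divisor=100):
    """Assign a record to train/eval/test based only on its hash bucket."""
    bucket = hash_bucket(key, modulo_divisor)
    if bucket < train_percent:
        return "train"
    if bucket < train_percent + eval_percent:
        return "eval"
    return "test"
```

Because the split depends only on the hashed key, the same record always lands in the same split, which is what keeps the train/eval/test sets from overlapping.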
## Preprocess data using Pandas
We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound: that is, we'll duplicate some rows and set the `is_male` field to `Unknown`. Also, if there is more than one child, we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.
Let's start by examining the training dataset as is.
```
train_df.head()
```
Also, notice that some very important numeric fields are missing in some rows (the count in Pandas excludes missing data).
```
train_df.describe()
```
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.
## Lab Task #2: Pandas preprocessing.
Use Pandas to:
* Clean up the data to remove rows that are missing any of the fields.
* Simulate the lack of ultrasound.
* Change the plurality column to be a string.
Hint (highlight to see): <p>
Filtering:
<pre style="color:white">
df = df[df.weight_pounds > 0]
</pre>
Modify plurality to be a string:
<pre style="color:white">
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
</pre>
Lack of ultrasound:
<pre style="color:white">
no_ultrasound = df.copy(deep=True)
no_ultrasound["is_male"] = "Unknown"
</pre>
</p>
```
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# TODO: Filter out what we don"t want to use for training
# TODO: Modify plurality field to be a string
# TODO: Clone data and mask certain columns to simulate lack of ultrasound
# TODO: Modify is_male
# TODO: Modify plurality
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
```
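Drawing on the hints above, one possible completed version of `preprocess` could look like the following. This is a sketch under stated assumptions (the filter thresholds and the `Multiple(2+)` mapping follow the hints), not necessarily the official lab solution:

```python
import pandas as pd

def preprocess(df):
    """Clean raw babyweight data and append a copy that simulates no ultrasound."""
    # Filter out rows with missing or implausible values
    df = df[(df.weight_pounds > 0)
            & (df.mother_age > 0)
            & (df.gestation_weeks > 0)
            & (df.plurality > 0)].copy()

    # Modify plurality field to be a string
    twins_etc = dict(zip([1, 2, 3, 4, 5],
                         ["Single(1)", "Twins(2)", "Triplets(3)",
                          "Quadruplets(4)", "Quintuplets(5)"]))
    df["plurality"] = df["plurality"].replace(twins_etc)

    # Clone data and mask columns to simulate lack of ultrasound
    no_ultrasound = df.copy(deep=True)
    no_ultrasound["is_male"] = "Unknown"
    no_ultrasound.loc[
        no_ultrasound["plurality"] != "Single(1)", "plurality"] = "Multiple(2+)"

    # Concatenate both datasets together and shuffle
    return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
```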
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
```
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
```
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
```
train_df.describe()
```
## Write to .csv files
In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
```
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
```
## Lab Summary:
In this lab, we set up the environment, sampled the natality dataset to create train/eval/test splits, and preprocessed the data in a Pandas dataframe.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
| github_jupyter |
#Trends Places To Sheets Via Query
Move Twitter trend data for places to a Google Sheet, using a WOEID query.
#License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
#Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for possible source):
- **Command**: "python starthinker_ui/manage.py colab"
- **Command**: "python starthinker/tools/colab.py [JSON RECIPE]"
#1. Install Dependencies
First install the libraries needed to execute recipes. This only needs to be done once; then click play.
```
!pip install git+https://github.com/google/starthinker
```
#2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
1. If the recipe uses a Google Cloud Project:
- Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).
1. If the recipe has **auth** set to **user**:
- If you have user credentials:
- Set the configuration **user** value to your user credentials JSON.
- If you DO NOT have user credentials:
- Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).
1. If the recipe has **auth** set to **service**:
- Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
```
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
```
#3. Enter Trends Places To Sheets Via Query Recipe Parameters
1. Provide <a href='https://apps.twitter.com/' target='_blank'>Twitter credentials</a>.
1. Provide BigQuery WOEID source query.
1. Specify Sheet url and tab to write API call results to.
1. Writes: WOEID, Name, Url, Promoted_Content, Query, Tweet_Volume
1. Note Twitter API is rate limited to 15 requests per 15 minutes. So keep WOEID lists short.
Modify the values below for your use case (this can be done multiple times), then click play.
```
FIELDS = {
'auth_write': 'service', # Credentials used for writing data.
'secret': '',
'key': '',
'places_dataset': '',
'places_query': '',
'places_legacy': False,
'destination_sheet': '',
'destination_tab': '',
}
print("Parameters Set To: %s" % FIELDS)
```
#4. Execute Trends Places To Sheets Via Query
This does NOT need to be modified unless you are changing the recipe, click play.
```
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'twitter': {
'auth': 'user',
'secret': {'field': {'name': 'secret', 'kind': 'string', 'order': 1, 'default': ''}},
'key': {'field': {'name': 'key', 'kind': 'string', 'order': 2, 'default': ''}},
'trends': {
'places': {
'single_cell': True,
'bigquery': {
'dataset': {'field': {'name': 'places_dataset', 'kind': 'string', 'order': 3, 'default': ''}},
'query': {'field': {'name': 'places_query', 'kind': 'string', 'order': 4, 'default': ''}},
'legacy': {'field': {'name': 'places_legacy', 'kind': 'boolean', 'order': 5, 'default': False}}
}
}
},
'out': {
'sheets': {
'sheet': {'field': {'name': 'destination_sheet', 'kind': 'string', 'order': 6, 'default': ''}},
'tab': {'field': {'name': 'destination_tab', 'kind': 'string', 'order': 7, 'default': ''}},
'range': 'A1'
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
```
| github_jupyter |
# Train and Deploy Your First Machine Learning Model on Amazon SageMaker
## Create SageMaker session
A SageMaker session needs to be initialized in order to start interacting with the SageMaker service.
```
import boto3
import re
import os
import numpy as np
import pandas as pd
import sagemaker as sage
boto_session = boto3.Session(profile_name="packt-sagemaker")
session = sage.Session(boto_session=boto_session)
```
## Upload the iris data for training
Using the SageMaker session, which was initialized in the previous cell, the Iris data will be uploaded to S3.
```
local_data_directory = 'data'
s3_prefix = 'packt-sagemaker-iris'
data_location = session.upload_data(local_data_directory, key_prefix=s3_prefix)
```
## Create an estimator and fit the model
In order to use SageMaker to fit our algorithm, we'll create an `Estimator` that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training:
* The __container name__. This is constructed as in the shell commands above.
* The __role__. As defined above.
* The __instance count__ which is the number of machines to use for training.
* The __instance type__ which is the type of machine to use for training.
* The __output path__ determines where the model artifact will be written.
* The __session__ is the SageMaker session object that we defined above.
Then we use fit() on the estimator to train against the data that we uploaded above.
```
from sagemaker import get_execution_role
account = session.boto_session.client('sts').get_caller_identity()['Account']
region = session.boto_session.region_name
image = '{}.dkr.ecr.{}.amazonaws.com/first-ml-model:latest'.format(account, region)
role = get_execution_role(session)
estimator = sage.estimator.Estimator(
image,
role,
1,
'ml.c4.2xlarge',
output_path="s3://{}/output".format(session.default_bucket()),
sagemaker_session=session
)
estimator.fit(data_location)
```
## Deploy the model
Using the trained `Estimator`, a RESTful service will be initialized on Amazon SageMaker service.
```
from sagemaker.predictor import json_serializer
predictor = estimator.deploy(1, 'ml.t2.medium', serializer=json_serializer)
```
## Sample some data and use it for a prediction
```
data_df = pd.read_csv("data/iris.csv", header=None, names=["feat_1", "feat_2", "feat_3", "feat_4"])
sampled_df = data_df.sample(10)
sampled_post_request_body = sampled_df.to_dict(orient='list')
sampled_post_request_body
print(predictor.predict(sampled_post_request_body).decode('utf-8'))
```
## Delete the endpoint
```
session.delete_endpoint(predictor.endpoint)
```
| github_jupyter |
# Survey Analysis: Summary Tables and Statistical Tests
This notebook summarizes survey responses from the original and amended flu surveys. It generates Tables 1, 2, 3 and S1 and reproduces the statistical tests reported in the paper.
```
import pandas as pd
import numpy as np
import scipy.stats as stats
```
### Load datasets
Qualified researchers may download these datasets from Synapse: https://www.synapse.org/20192020ilisurveillanceprogram
```
#NOTE: UPDATE THIS CELL WITH THE APPROPRIATE PATHNAMES
#define pathnames of datasets
ili_events_path = '/data/covid19/manuscript_data_delivery_for_synapse/event-level_survey_data.csv.gz'
ili_days_path = '/data/covid19/manuscript_data_delivery_for_synapse/daily_symptom_survey_data.csv.gz'
demographics_path = '/data/covid19/manuscript_data_delivery_for_synapse/participant_characteristics.csv.gz'
ili_events = pd.read_csv(ili_events_path)
ili_events.tail()
ili_days = pd.read_csv(ili_days_path)
ili_days.tail()
demographics = pd.read_csv(demographics_path)
demographics.tail()
```
## Create a dictionary with cohort labels and users
```
#define dictionary with labels for each cohort
dense_activity_columns = ['valid_user__steps','valid_user__heart','valid_user__sleep']
dense_activity_users = set(demographics.loc[demographics[dense_activity_columns].sum(axis=1)>0,'user_id'])
cohort_dict = {}
cohort_dict['All COVID-19'] = set(demographics.loc[demographics['cohort']=='COVID-19','user_id'])
cohort_dict['All Non-COVID-19 Flu'] = set(demographics.loc[demographics['cohort']=='Non-COVID-19 Flu','user_id'])
cohort_dict['All Pre-COVID-19 Flu'] = set(demographics.loc[demographics['cohort']=='Pre-COVID-19 Flu','user_id'])
cohort_dict['Dense COVID-19'] = dense_activity_users.intersection(cohort_dict['All COVID-19'])
cohort_dict['Dense Non-COVID-19 Flu'] = dense_activity_users.intersection(cohort_dict['All Non-COVID-19 Flu'])
cohort_dict['Dense Pre-COVID-19 Flu'] = dense_activity_users.intersection(cohort_dict['All Pre-COVID-19 Flu'])
#print size of each cohort
for cohort_label, users in cohort_dict.items():
print('Number of unique users in', cohort_label, ':', len(users))
```
## Define functions for summarizing data values
```
#create a function that formats the count data in the way specified in the SAP
def format_count_data_for_table(x):
formatted_summary = pd.DataFrame({'count': x.value_counts(),
'percent': x.value_counts(normalize=True)*100})
#make sure that the counts are equal to the cohort size
assert(formatted_summary['count'].sum()==len(x))
return formatted_summary['count'].astype(str) + ' (' + formatted_summary['percent'].round(1).astype(str) + '%)'
#create an additional function that formats percentage data only to save space in the tables
def format_precentages_for_table(x):
formatted_summary = pd.DataFrame({'count': x.value_counts(),
'percent': x.value_counts(normalize=True)*100})
#make sure that the counts are equal to the cohort size
assert(formatted_summary['count'].sum()==len(x))
return formatted_summary['percent'].round(1).astype(str) + '%'
#create a function that formats continuous data
def format_continous_data_for_table(x):
formatted_summary = pd.Series({'Mean ± SD': '{:.1f} ± {:.1f}'.format(x.mean(), x.std()),
'Median': '{}'.format(x.median()),
'Min - Max': '{} - {}'.format(x.min(), x.max())})
return formatted_summary
def create_summary_table(melted_response_df, user_col, question_col, response_col, cohort_dict, format_fxn):
"""
Create a table of summary statistics for categorical or continuous data
This function takes in a melted (longform) dataframe with survey data that will be summarized for the
cohorts defined in the cohort dictionary, as well as a function that maps a series to a formatted summary (string)
Parameters
----------
melted_response_df : pandas dataframe, long format
Columns for user id, question, and responses.
All responses should have the same data type (categorical or continuous)
user_col : str
Name of column with user ids
question_col : str
Name of column with questions
response_col : str
Name of column with user responses
cohort_dict: dict
Dictionary with cohort definitions. Keys should be cohort labels (strings) and values
should be lists of user ids that belong to the cohort.
format_fxn: function
Function that takes in a series and returns a formatted series where the index corresponds
to the labeled rows in the summary table and the values correspond to the formatted table entries.
Returns
----------
summary_table : pandas dataframe
A pandas dataframe with a multi-index corresponding to the question (level 0) and the appropriate
formatted data rows (level 1) and columns corresponding to cohorts.
"""
summary_table = pd.DataFrame()
for cohort_label, cohort_users in cohort_dict.items():
summary_table = pd.concat([summary_table,
melted_response_df.loc[melted_response_df[user_col].isin(cohort_users)]
.groupby(by=question_col)[response_col]
.apply(format_fxn)
.rename(cohort_label+' (N={})'.format(len(cohort_users)))],
axis=1)
summary_table.index = pd.MultiIndex.from_tuples(summary_table.index)
return summary_table
```
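To make the shape of `create_summary_table`'s output concrete, here is the same groupby/format/concat pattern applied to a toy dataframe. This is a self-contained sketch with a simplified count formatter and invented column names; it omits the final `MultiIndex.from_tuples` step:

```python
import pandas as pd

# Toy long-format survey data: four users answering one question
melted = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "question": ["smokes"] * 4,
    "response": ["yes", "no", "no", "no"],
})
cohorts = {"All": {1, 2, 3, 4}, "Even IDs": {2, 4}}

def format_counts(x):
    """Format a response series as 'count (percent%)' strings."""
    counts = x.value_counts()
    pct = x.value_counts(normalize=True) * 100
    return counts.astype(str) + " (" + pct.round(1).astype(str) + "%)"

summary = pd.DataFrame()
for label, users in cohorts.items():
    col = (melted.loc[melted["user_id"].isin(users)]
                 .groupby("question")["response"]
                 .apply(format_counts)
                 .rename("{} (N={})".format(label, len(users))))
    summary = pd.concat([summary, col], axis=1)
print(summary)
```

Each cohort becomes a column, and the (question, response) pairs become the rows; responses absent from a cohort show up as `NaN`.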
## TABLE 1: Demographic Summary
```
count_cols = ['gender','education','age_group','bmi_group','race']
melted_demogs = demographics[['user_id']+count_cols].melt(id_vars='user_id')
melted_demogs['value'] = melted_demogs['value'].str.title()
melted_demogs = melted_demogs.fillna('Unavailable')
demog_summary = create_summary_table(melted_response_df=melted_demogs,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=cohort_dict,
format_fxn=format_precentages_for_table)
#reorder the rows and make them more readable
manual_renaming = {'College Graduate, Associate Or Bachelor\'S Degree':'College Degree',
'Did Not Finish High School, No Diploma':'Did Not Finish High School',
'Doctorate Degree, Md':'Doctorate Degree or MD',
'High School Graduate, Diploma Or The Equivalent (For Example, Ged)':'High School Diploma or GED',
'I\'D Prefer Not To Answer':'Prefer Not To Answer',
}
edu_order = ['Did Not Finish High School','High School Diploma or GED','Some College, No Degree','Trade/Technical/Vocational Training',
'College Degree', 'Graduate Degree', 'Doctorate Degree or MD']
gender_order = ['Female','Male','Other']
race_order = ['White / Caucasian', 'Hispanic Or Latino', 'Black Or African American', 'Asian Or Pacific Islander',
'American Indian Or Alaskan Native', 'Prefer Not To Answer']
age_bin_labels = ['< 25','25-34','35-44','45-54','55 +']
bmi_bin_labels = ['< 18.5','18.5 - 24.9','25.0 - 29.9','30 +']
demog_summary = demog_summary.rename(index=manual_renaming)
demog_summary = demog_summary.reindex(age_bin_labels+bmi_bin_labels+edu_order+gender_order+race_order+['Unavailable'], level=1)
demog_summary = demog_summary.reindex(['gender','race','education','age_group','bmi_group'], level=0)
demog_summary = demog_summary.fillna('0.0%')
demog_summary
```
#### Check for statistically significant demographic differences (full cohorts only)
```
full_cohort_dict = {key: value for key, value in cohort_dict.items() if key.startswith('All')}
demog_counts = create_summary_table(melted_response_df=melted_demogs,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=full_cohort_dict,
format_fxn=(lambda x: x.value_counts()))
demog_counts = demog_counts.fillna(0)
for col in count_cols:
chi2, p, _, _ = stats.chi2_contingency(demog_counts.loc[col].values)
print(col, ': chi2=', chi2.round(2), ' p=', p.round(5))
print('\nbonferroni threshold for', len(count_cols), 'comparisons:', 0.05/len(count_cols))
def two_proportion_z_test(melted_df, variable, cohort1, cohort2):
proportions = pd.DataFrame(index=melted_df.loc[melted_df['variable']==variable,'value'].unique(), columns=['p1','p2','p_all'])
proportions['p1'] = melted_df.loc[(melted_df['user_id'].isin(cohort1))&(melted_df['variable']==variable), 'value'].value_counts(normalize=True)
proportions['p2'] = melted_df.loc[(melted_df['user_id'].isin(cohort2))&(melted_df['variable']==variable), 'value'].value_counts(normalize=True)
proportions['p_all'] = melted_df.loc[(melted_df['user_id'].isin(cohort1.union(cohort2)))&(melted_df['variable']==variable), 'value'].value_counts(normalize=True)
proportions = proportions.fillna(0)
proportions['z'] = proportions.apply(lambda x: (x['p1'] - x['p2']) / np.sqrt(x['p_all']*(1-x['p_all'])*(1/len(cohort1)+1/len(cohort2))), axis=1)
proportions['p_value'] = proportions['z'].apply(lambda z: stats.norm.sf(abs(z))*2)
return proportions
#follow-up two-proportion z-tests
variables = ['race','age_group']
for variable in variables:
print('\n\n', variable, ': Covid vs. Non-Covid')
print(two_proportion_z_test(melted_demogs, variable, cohort_dict['All COVID-19'], cohort_dict['All Non-COVID-19 Flu']))
print('\n\n', variable, ': Covid vs. Pre-Covid')
print(two_proportion_z_test(melted_demogs, variable, cohort_dict['All COVID-19'], cohort_dict['All Pre-COVID-19 Flu']))
print('\n\n', variable, ': Non-Covid vs. Pre-Covid')
print(two_proportion_z_test(melted_demogs, variable, cohort_dict['All Non-COVID-19 Flu'], cohort_dict['All Pre-COVID-19 Flu']))
```
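The z statistic computed above can be sanity-checked against a standalone two-proportion z-test on raw counts. This is a minimal sketch using the pooled proportion, the same formula as `two_proportion_z_test`; the function name and example counts are illustrative only:

```python
import numpy as np
import scipy.stats as stats

def two_prop_z(successes_1, n1, successes_2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = successes_1 / n1, successes_2 / n2
    p_pool = (successes_1 + successes_2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * stats.norm.sf(abs(z))
    return z, p_value

# Example: 30/100 vs. 50/100 -- a 20-point difference
z, p = two_prop_z(30, 100, 50, 100)
```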
### Check for statistically significant demographic differences between users with dense vs. non-dense activity
```
#first create lists of all users with dense activity data and all users without dense activity data
activity_dict = {'Dense Activity': dense_activity_users,
'Non-dense Activity': set(demographics['user_id']).difference(dense_activity_users)}
#print size of each cohort
for cohort_label, users in activity_dict.items():
print('Number of unique users in', cohort_label, ':', len(users))
demog_counts = create_summary_table(melted_response_df=melted_demogs,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=activity_dict,
format_fxn=(lambda x: x.value_counts()))
demog_counts = demog_counts.fillna(0)
for col in count_cols:
chi2, p, _, _ = stats.chi2_contingency(demog_counts.loc[col].values)
print(col, ': chi2=', chi2.round(2), ' p=', p.round(5))
print('\nbonferroni threshold for', len(count_cols), 'comparisons:', 0.05/len(count_cols))
#follow-up two-proportion z-tests
variables = ['gender','age_group','bmi_group','race']
for variable in variables:
print('\n\n', variable, ': Dense Activity vs. Not')
print(two_proportion_z_test(melted_demogs, variable, activity_dict['Dense Activity'], activity_dict['Non-dense Activity']))
```
## Table 2: Healthcare-related behaviors and outcomes
```
#clean response options for the question about seeking medical care
care_location_options = {'primary care clinic e g family medicine internal medicine':'Primary care clinic',
'urgent care facility':'Urgent care facility',
'emergency room er':'Emergency room',
'ear nose and throat otolaryngology clinic':'Ear, nose, and throat clinic',
'infectious disease clinic':'Infectious disease clinic',
'other':'Other',
'multiple':'Multiple locations',
'unavailable':'Unavailable',}
ili_events['care_location__cleaned'] = ili_events['medical__sought_attention_where'].copy()
ili_events.loc[ili_events['care_location__cleaned'].str.contains('__'), 'care_location__cleaned'] = 'multiple'
ili_events.loc[~ili_events['care_location__cleaned'].isin(care_location_options.keys()), 'care_location__cleaned'] = 'other'
ili_events['care_location__cleaned'].value_counts(dropna=False)
healthcare_cols = ['care_location__cleaned', 'medical__hospitalized', 'medical__medication']
melted_events = ili_events[['user_id']+healthcare_cols].melt(id_vars='user_id')
melted_events['value'] = melted_events['value'].fillna('unavailable')
med_care_summary = create_summary_table(melted_response_df=melted_events,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=full_cohort_dict,
format_fxn=format_precentages_for_table)
med_care_summary = med_care_summary.fillna('0.0%')
#make index more readable
med_care_mappings = {'care_location__cleaned':'Medical care location', 'medical__hospitalized':'Hospitalized', 'medical__medication':'Prescribed medication'}
med_care_response_mappings = {'yes':'Yes', 'no':'No', 'dont know or remember':'Do not know/remember', 'unavailable':'Unavailable'}
med_care_summary = med_care_summary.rename(index=med_care_mappings)
med_care_summary = med_care_summary.rename(index=med_care_response_mappings)
med_care_summary = med_care_summary.rename(index=care_location_options)
med_care_summary = med_care_summary.reindex(pd.Series(list(care_location_options.values())+
list(med_care_response_mappings.values())).drop_duplicates(keep='last'), level=1)
med_care_summary
```
### Check for differences between healthcare questions
```
care_counts = create_summary_table(melted_response_df=melted_events,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=full_cohort_dict,
format_fxn=(lambda x: x.value_counts()))
care_counts = care_counts.fillna(0)
for col in healthcare_cols:
chi2, p, _, _ = stats.chi2_contingency(care_counts.loc[col].values)
print(col, ': chi2=', chi2.round(2), ' p=', p.round(5))
print('\nbonferroni threshold for', len(healthcare_cols), 'comparisons:', 0.05/len(healthcare_cols))
#follow-up two-proportion z-tests
variables = healthcare_cols
for variable in variables:
print('\n\n', variable, ': Covid vs. Non-Covid')
print(two_proportion_z_test(melted_events, variable, cohort_dict['All COVID-19'], cohort_dict['All Non-COVID-19 Flu']))
print('\n\n', variable, ': Covid vs. Pre-Covid')
print(two_proportion_z_test(melted_events, variable, cohort_dict['All COVID-19'], cohort_dict['All Pre-COVID-19 Flu']))
print('\n\n', variable, ': Non-Covid vs. Pre-Covid')
print(two_proportion_z_test(melted_events, variable, cohort_dict['All Non-COVID-19 Flu'], cohort_dict['All Pre-COVID-19 Flu']))
```
## Table 3: Symptom Summary
```
#pull out list of symptoms
symptom_cols = ili_events.columns[ili_events.columns.str.contains('symptoms__')].tolist()
symptom_cols.remove('symptoms__other')
symptom_cols.remove('symptoms__none')
symptom_cols.remove('symptoms__no_symptoms')
#melt symptoms
melted_symptoms = ili_events[['user_id']+symptom_cols].melt(id_vars='user_id')
melted_symptoms['value'] = melted_symptoms['value'].fillna('')
#create summary table of symptom prevalences
symptoms_summary = create_summary_table(melted_response_df=melted_symptoms,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=full_cohort_dict,
format_fxn=format_precentages_for_table)
symptoms_summary = symptoms_summary.loc[symptoms_summary.index.get_level_values(1)!=''].droplevel(1)
#use daily symptom data to find peak symptom day across each cohort
daily_symp_cols = [col.replace('symptoms','daily_symp') for col in symptom_cols]
symp_counts_by_day = ili_days.groupby(by=['cohort','days_since_symptoms_onset'])[daily_symp_cols].sum()
#record peak symptom day for each cohort (use a centered 5-day rolling window)
peak_symptom_days = (symp_counts_by_day.groupby('cohort')
.apply(lambda x: x.rolling(5, center=True).mean().idxmax())
.applymap(lambda x: x[1])
.transpose()
.replace(-8, np.nan))
#format the table
symptom_map = {'chills_shiver':'Chills or Shivering',
'cough':'Cough',
'fatigue':'Fatigue',
'fever':'Fever',
'headache':'Headache',
'bodyache':'Body Muscle Ache',
'nasal':'Nasal Congestion',
'sneezing':'Sneezing',
'sore_throat':'Sore Throat',
'sweats':'Sweats',
'anosmia':'Anosmia',
'chest_pain':'Chest Pain/Pressure',
'short_breath':'Shortness of Breath'}
symptoms_summary.index = symptoms_summary.reset_index()['index'].apply(lambda x: x.split('__')[-1])
symptoms_summary = symptoms_summary.rename(index=symptom_map)
peak_symptom_days.index = peak_symptom_days.reset_index()['index'].apply(lambda x: x.split('__')[-1])
peak_symptom_days = peak_symptom_days.rename(index=symptom_map)
#merge tables
symptoms_summary = symptoms_summary.merge(peak_symptom_days, left_index=True, right_index=True)
#sort from most to least common in covid cases
symptoms_summary = symptoms_summary.loc[symptoms_summary['All COVID-19 (N=230)'].sort_values(ascending=False).index]
#fill in NaNs
symptoms_summary = symptoms_summary.fillna('NA')
symptoms_summary
#print(symptoms_summary.to_latex())
```
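The centered 5-day rolling window used above to find the peak symptom day can be illustrated on a toy daily count series (a self-contained sketch of the same `rolling(5, center=True)` call):

```python
import pandas as pd

# Toy daily symptom counts indexed by days since onset
counts = pd.Series([1, 3, 8, 9, 7, 4, 2, 1],
                   index=pd.RangeIndex(8, name="days_since_onset"))

# A centered 5-day rolling mean smooths daily noise before locating the peak
smoothed = counts.rolling(5, center=True).mean()
peak_day = smoothed.idxmax()
```

The raw maximum falls on day 3 here as well, but in noisier data the smoothed peak can differ from the single noisiest day, which is why the rolling window is used.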
### Symptom statistical testing
```
#chi-squared
symptom_counts = create_summary_table(melted_response_df=melted_symptoms,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=full_cohort_dict,
format_fxn=(lambda x: x.value_counts()))
symptom_counts = symptom_counts.fillna(0)
symptom_counts = symptom_counts.loc[symptom_counts.index.get_level_values(1)!=''].droplevel(1)
chi2, p, _, _ = stats.chi2_contingency(symptom_counts.values)
print('symptoms', ': chi2=', chi2.round(2), ' p=', p.round(5))
#follow-up two-proportion z-tests
variables = symptom_cols
for variable in variables:
print('\n\n', variable, ': Covid vs. Non-Covid')
print(two_proportion_z_test(melted_symptoms, variable, cohort_dict['All COVID-19'], cohort_dict['All Non-COVID-19 Flu']))
print('\n\n', variable, ': Covid vs. Pre-Covid')
print(two_proportion_z_test(melted_symptoms, variable, cohort_dict['All COVID-19'], cohort_dict['All Pre-COVID-19 Flu']))
print('\n\n', variable, ': Non-Covid vs. Pre-Covid')
print(two_proportion_z_test(melted_symptoms, variable, cohort_dict['All Non-COVID-19 Flu'], cohort_dict['All Pre-COVID-19 Flu']))
```
## Supplement Table: Self-reported medical conditions
```
#IBD and IBS are listed in separate columns, but also in a column called ibs_ibd --> investigate
print(demographics[['user_id','ibd','ibs','ibs_ibd']].fillna(False).groupby(['ibs_ibd','ibd','ibs']).nunique())
#it looks disjoint -- no one has ibs_ibd AND ibd or ibs --> collapse ibd and ibs into ibs_ibd
demographics['ibs_ibd_collapsed'] = demographics[['ibd','ibs','ibs_ibd']].any(axis=1)
demographics.columns
conditions = ['alzheimers', 'anxiety', 'arrhythmia', 'arthritis', 'asthma', 'cancer',
'chronic_pain', 'copd', 'coronary_heart_disease', 'depression',
'diabetes_t1', 'diabetes_t2', 'fibromyalgia', 'gerd',
'gestational_diabetes', 'heart_attack', 'heart_failure',
'high_colesterol', 'hypertension', 'hypo_hyper_thyroidism', 'ibs_ibd_collapsed',
'insomnia', 'lupus',
'mental_health_not_depression_anxiety', 'migraines',
'multiple_sclerosis', 'neurodegenerative',
'obstructive_coronary_heart_disease', 'osteoporosis', 'pcos',
'psoriasis', 'restless_leg', 'rheumatoid_arthritis',
'seasonal_allergies', 'sleep_apnea', 'stroke',]
#sort by prevalence in covid cohort
condition_order = (demographics.loc[demographics['user_id'].isin(cohort_dict['All COVID-19']), conditions]
.sum()
.sort_values(ascending=False)
.index.tolist())
#for people who took the health survey, fill in "False" for conditions they did not select
#(note: fillna(..., inplace=True) on a .loc slice operates on a copy, so assign back instead)
demographics.loc[demographics['hs_completed']==True, conditions] = demographics.loc[demographics['hs_completed']==True, conditions].fillna(False)
melted_conditions = demographics[['user_id']+conditions].melt(id_vars='user_id')
melted_conditions['value'] = melted_conditions['value'].fillna('unavailable')
#melted_fluvey = melted_fluvey.loc[melted_fluvey['value']!='']
conditions_summary = create_summary_table(melted_response_df=melted_conditions,
user_col='user_id',
question_col='variable',
response_col='value',
cohort_dict=full_cohort_dict,
format_fxn=format_count_data_for_table)
conditions_summary = conditions_summary.loc[conditions_summary.index.get_level_values(1)==True].droplevel(1).fillna('0 (0.0%)')
#reorder conditions based on prevalence
conditions_summary = conditions_summary.loc[[cond for cond in condition_order if cond in conditions_summary.index]]
#clean up condition names for publication
conditions_summary.index = conditions_summary.index.str.title().str.replace('_',' ')
manual_condition_naming = {'Pcos':'PCOS', 'Mental Health Not Depression Anxiety':'Mental Health (Excluding Depression/Anxiety)',
                           'Gerd':'GERD', 'Restless Leg':'Restless Leg Syndrome', 'Hypo Hyper Thyroidism':'Hypo- or Hyperthyroidism',
'High Colesterol':'High Cholesterol', 'Ibs Ibd Collapsed':'IBS or IBD', 'Alzheimers':'Alzheimer\'s Disease',
'Copd':'COPD', 'Diabetes T1':'Type 1 Diabetes', 'Diabetes T2':'Type 2 Diabetes'}
conditions_summary = conditions_summary.rename(index=manual_condition_naming)
conditions_summary
```
| github_jupyter |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Many Models with Backtesting - Automated ML
**_Backtest many models time series forecasts with Automated Machine Learning_**
---
For this notebook we are using a synthetic dataset to demonstrate backtesting in the many models scenario. This allows us to check the historical performance of AutoML on historical data. To do that, we step back through the data set by the backtesting period several times and split the data into train and test sets each time. These data sets are then used for training and evaluation of a model.<br>
Thus, it is a quick way of evaluating AutoML as if it was in production. Here, we do not test historical performance of a particular model, for this see the [notebook](../forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb). Instead, the best model for every backtest iteration can be different since AutoML chooses the best model for a given training set.

**NOTE: There are limits on how many runs can execute in parallel per workspace; we currently recommend setting the parallelism to at most 320 runs per experiment per workspace. Users who raise this limit to get more parallelism may encounter Too Many Requests errors (HTTP 429).**
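The effective parallelism of a many models run is the product of the `node_count` and `process_count_per_node` settings configured later in this notebook. A minimal sketch (plain Python, not part of the SDK) for checking a configuration against the recommended cap:

```python
# Sketch: effective parallelism is node_count * process_count_per_node.
# The cap below is the recommended per-workspace limit quoted above.
MAX_PARALLEL_RUNS = 320

def within_parallelism_limit(node_count, process_count_per_node, cap=MAX_PARALLEL_RUNS):
    """Return True if the configured parallelism stays at or below the cap."""
    return node_count * process_count_per_node <= cap

print(within_parallelism_limit(2, 2))    # the settings used later in this notebook
print(within_parallelism_limit(80, 8))   # 640 parallel runs would exceed the cap
```
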
### Prerequisites
You'll need to create a compute Instance by following the instructions in the [EnvironmentSetup.md](../Setup_Resources/EnvironmentSetup.md).
## 1.0 Set up workspace, datastore, experiment
```
import os
import azureml.core
from azureml.core import Workspace, Datastore
import numpy as np
import pandas as pd
from pandas.tseries.frequencies import to_offset
# Set up your workspace
ws = Workspace.from_config()
ws.get_details()
# Set up your datastores
dstore = ws.get_default_datastore()
output = {}
output["SDK version"] = azureml.core.VERSION
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Default datastore name"] = dstore.name
pd.set_option("display.max_colwidth", None)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
```
This notebook is compatible with Azure ML SDK version 1.35.1 or later.
```
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
### Choose an experiment
```
from azureml.core import Experiment
experiment = Experiment(ws, "automl-many-models-backtest")
print("Experiment name: " + experiment.name)
```
## 2.0 Data
#### 2.1 Data generation
For this notebook we will generate an artificial data set with two [time series IDs](https://docs.microsoft.com/en-us/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters?view=azure-ml-py). Then we will generate backtest folds, upload them to the default BLOB storage, and create a [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py).
```
# Simulate data: 2 grains ("ts_A" and "ts_B"), 700 daily observations each.
TIME_COLNAME = "date"
TARGET_COLNAME = "value"
TIME_SERIES_ID_COLNAME = "ts_id"
sample_size = 700
# Set the random seed for reproducibility of results.
np.random.seed(20)
X1 = pd.DataFrame(
{
TIME_COLNAME: pd.date_range(start="2018-01-01", periods=sample_size),
TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),
TIME_SERIES_ID_COLNAME: "ts_A",
}
)
X2 = pd.DataFrame(
{
TIME_COLNAME: pd.date_range(start="2018-01-01", periods=sample_size),
TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),
TIME_SERIES_ID_COLNAME: "ts_B",
}
)
X = pd.concat([X1, X2], ignore_index=True, sort=False)
print("Simulated dataset contains {} rows \n".format(X.shape[0]))
X.head()
```
Now we will generate 8 backtesting folds with a backtesting period of 7 days and the same forecasting horizon. We will add the column "backtest_iteration", which identifies each backtesting period by its last training date.
```
offset_type = "7D"
NUMBER_OF_BACKTESTS = 8 # number of train/test sets to generate
dfs_train = []
dfs_test = []
for ts_id, df_one in X.groupby(TIME_SERIES_ID_COLNAME):
data_end = df_one[TIME_COLNAME].max()
for i in range(NUMBER_OF_BACKTESTS):
train_cutoff_date = data_end - to_offset(offset_type)
df_one = df_one.copy()
df_one["backtest_iteration"] = "iteration_" + str(train_cutoff_date)
train = df_one[df_one[TIME_COLNAME] <= train_cutoff_date]
test = df_one[
(df_one[TIME_COLNAME] > train_cutoff_date)
& (df_one[TIME_COLNAME] <= data_end)
]
data_end = train[TIME_COLNAME].max()
dfs_train.append(train)
dfs_test.append(test)
X_train = pd.concat(dfs_train, sort=False, ignore_index=True)
X_test = pd.concat(dfs_test, sort=False, ignore_index=True)
```
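To make the stepping logic above concrete, here is a self-contained miniature (using a made-up 30-day daily series, independent of the notebook's variables) showing that each backtest fold's test window covers exactly the 7-day offset:

```python
import pandas as pd
from pandas.tseries.frequencies import to_offset

# Toy daily series standing in for one time series ID.
df = pd.DataFrame({"date": pd.date_range("2021-01-01", periods=30)})
data_end = df["date"].max()
test_sizes = []
for _ in range(3):  # three backtest folds
    train_cutoff = data_end - to_offset("7D")
    test = df[(df["date"] > train_cutoff) & (df["date"] <= data_end)]
    test_sizes.append(len(test))
    # Step back: the next fold's data ends where this fold's training data ended.
    data_end = df[df["date"] <= train_cutoff]["date"].max()
print(test_sizes)  # each fold's test set holds 7 daily rows
```
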
#### 2.2 Create the Tabular Data Set.
A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.
Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class)?view=azure-ml-py) documentation on how to access data from Datastore.
In this next step, we will upload the data and create a TabularDataset.
```
from azureml.data.dataset_factory import TabularDatasetFactory
ds = ws.get_default_datastore()
# Upload saved data to the default data store.
train_data = TabularDatasetFactory.register_pandas_dataframe(
X_train, target=(ds, "data_mm"), name="data_train"
)
test_data = TabularDatasetFactory.register_pandas_dataframe(
X_test, target=(ds, "data_mm"), name="data_test"
)
```
## 3.0 Build the training pipeline
Now that the dataset, Workspace, and datastore are set up, we can put together a pipeline for training.
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
### Choose a compute target
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
**Creation of AmlCompute takes approximately 5 minutes.**
If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
# Name your cluster
compute_name = "backtest-mm"
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print("Found compute target: " + compute_name)
else:
print("Creating a new compute target...")
provisioning_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_DS12_V2", max_nodes=6
)
# Create the compute target
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(
show_output=True, min_node_count=None, timeout_in_minutes=20
)
# For a more detailed view of current cluster status, use the 'status' property
print(compute_target.status.serialize())
```
### Set up training parameters
This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition. Please note, that in this case we are setting grain_column_names to be the time series ID column plus iteration, because we want to train a separate model for each time series and iteration.
| Property | Description|
| :--------------- | :------------------- |
| **task** | forecasting |
| **primary_metric** | This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>normalized_root_mean_squared_error</i><br><i>normalized_mean_absolute_error</i> |
| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |
| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |
| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |
| **label_column_name** | The name of the label column. |
| **max_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |
| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |
| **time_column_name** | The name of your time column. |
| **grain_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |
| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |
| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |
```
from azureml.train.automl.runtime._many_models.many_models_parameters import (
ManyModelsTrainParameters,
)
partition_column_names = [TIME_SERIES_ID_COLNAME, "backtest_iteration"]
automl_settings = {
"task": "forecasting",
"primary_metric": "normalized_root_mean_squared_error",
"iteration_timeout_minutes": 10, # This needs to be changed based on the dataset. We recommend exploring how long training takes before setting this value.
"iterations": 15,
"experiment_timeout_hours": 0.25, # This also needs to be changed based on the dataset. For larger datasets this number needs to be bigger.
"label_column_name": TARGET_COLNAME,
"n_cross_validations": 3,
"time_column_name": TIME_COLNAME,
"max_horizon": 6,
"grain_column_names": partition_column_names,
"track_child_runs": False,
}
mm_paramters = ManyModelsTrainParameters(
automl_settings=automl_settings, partition_column_names=partition_column_names
)
```
### Set up many models pipeline
The parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The process_count_per_node is based on the number of cores of the compute VM. The node_count determines the number of compute nodes to use; increasing the node count will speed up the training process.
| Property | Description|
| :--------------- | :------------------- |
| **experiment** | The experiment used for training. |
| **train_data** | The file dataset to be used as input to the training run. |
| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with 3 and increase the node_count if the training time is taking too long. |
| **process_count_per_node** | Process count per node. We recommend a 2:1 ratio of cores to processes per node, e.g. if a node has 16 cores, configure a process count per node of 8 or less for optimal performance. |
| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |
Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution.
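As a hedged illustration of the cores-to-processes guideline in the table above (the helper name here is hypothetical, not part of the SDK):

```python
# Sketch of the 2:1 cores-to-processes guideline from the table above.
def recommended_process_count(cores_per_node):
    """Suggest a process_count_per_node of at most half the cores, minimum 1."""
    return max(1, cores_per_node // 2)

print(recommended_process_count(16))  # a 16-core node -> 8 processes or fewer
print(recommended_process_count(4))   # a 4-core node -> 2 processes
```
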
```
from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder
training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(
experiment=experiment,
train_data=train_data,
compute_target=compute_target,
node_count=2,
process_count_per_node=2,
run_invocation_timeout=920,
train_pipeline_parameters=mm_paramters,
)
from azureml.pipeline.core import Pipeline
training_pipeline = Pipeline(ws, steps=training_pipeline_steps)
```
### Submit the pipeline to run
Next we submit our pipeline to run. The whole training pipeline takes about 20 minutes using a STANDARD_DS12_V2 VM with our current ParallelRunConfig setting.
```
training_run = experiment.submit(training_pipeline)
training_run.wait_for_completion(show_output=False)
```
Check the run status: if training_run is in a completed state, continue to the next section; otherwise, check the portal for failures.
## 4.0 Backtesting
Now that we have selected the best AutoML model for each backtest fold, we will use these models to generate forecasts and compare them with the actuals.
### Set up output dataset for inference data
Output of inference can be represented as [OutputFileDatasetConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py) object and OutputFileDatasetConfig can be registered as a dataset.
```
from azureml.data import OutputFileDatasetConfig
output_inference_data_ds = OutputFileDatasetConfig(
name="many_models_inference_output",
destination=(dstore, "backtesting/inference_data/"),
).register_on_complete(name="backtesting_data_ds")
```
For many models we need to provide the ManyModelsInferenceParameters object.
#### ManyModelsInferenceParameters arguments
| Property | Description|
| :--------------- | :------------------- |
| **partition_column_names** | List of column names that identifies groups. |
| **target_column_name** | \[Optional\] Column name only if the inference dataset has the target. |
| **time_column_name** | Column name only if it is timeseries. |
| **many_models_run_id** | \[Optional\] Many models pipeline run id where models were trained. |
#### get_many_models_batch_inference_steps arguments
| Property | Description|
| :--------------- | :------------------- |
| **experiment** | The experiment used for inference run. |
| **inference_data** | The data to use for inferencing. It should have the same schema as used for training. |
| **compute_target** | The compute target that runs the inference pipeline.|
| **node_count** | The number of compute nodes to be used for running the user script. We recommend starting with the number of cores per node (varies by compute SKU). |
| **process_count_per_node** | \[Optional\] The number of processes per node; by default it is 4. |
| **train_run_id** | \[Optional\] The run id of the hierarchy training; by default it is the latest successful many models training run in the experiment. |
| **train_experiment_name** | \[Optional\] The experiment that contains the train pipeline. This is only needed when the train pipeline is not in the same experiment as the inference pipeline. |
```
from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder
from azureml.train.automl.runtime._many_models.many_models_parameters import (
ManyModelsInferenceParameters,
)
mm_parameters = ManyModelsInferenceParameters(
partition_column_names=partition_column_names,
time_column_name=TIME_COLNAME,
target_column_name=TARGET_COLNAME,
)
inference_steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(
experiment=experiment,
inference_data=test_data,
node_count=2,
process_count_per_node=2,
compute_target=compute_target,
run_invocation_timeout=300,
output_datastore=output_inference_data_ds,
train_run_id=training_run.id,
train_experiment_name=training_run.experiment.name,
inference_pipeline_parameters=mm_parameters,
)
from azureml.pipeline.core import Pipeline
inference_pipeline = Pipeline(ws, steps=inference_steps)
inference_run = experiment.submit(inference_pipeline)
inference_run.wait_for_completion(show_output=False)
```
## 5.0 Retrieve results and calculate metrics
The pipeline returns one file with the predictions for each time series ID and outputs the result to the forecasting_output Blob container. The details of the blob container are listed in 'forecasting_output.txt' under Outputs+logs.
The next code snippet does the following:
1. Downloads the contents of the output folder that is passed in the parallel run step
2. Reads the parallel_run_step.txt file that has the predictions as a pandas dataframe
3. Saves the table in csv format, and
4. Displays the top 10 rows of the predictions
```
from azureml.contrib.automl.pipeline.steps.utilities import get_output_from_mm_pipeline
forecasting_results_name = "forecasting_results"
forecasting_output_name = "many_models_inference_output"
forecast_file = get_output_from_mm_pipeline(
inference_run, forecasting_results_name, forecasting_output_name
)
df = pd.read_csv(forecast_file, delimiter=" ", header=None, parse_dates=[0])
df.columns = list(X_train.columns) + ["predicted_level"]
print(
"Prediction has ", df.shape[0], " rows. Here the first 10 rows are being displayed."
)
# Save the csv file with a header to read it in the next step.
df.rename(columns={TARGET_COLNAME: "actual_level"}, inplace=True)
df.to_csv(os.path.join(forecasting_results_name, "forecast.csv"), index=False)
df.head(10)
```
## View metrics
We will read in the obtained results and run the helper script, which will generate metrics and create the plots of predicted versus actual values.
```
from assets.score import calculate_scores_and_build_plots
backtesting_results = "backtesting_mm_results"
os.makedirs(backtesting_results, exist_ok=True)
calculate_scores_and_build_plots(
forecasting_results_name, backtesting_results, automl_settings
)
pd.DataFrame({"File": os.listdir(backtesting_results)})
```
The directory contains a set of files with results:
- forecast.csv contains forecasts for all backtest iterations. The backtest_iteration column contains the iteration identifier, with the last training date as a suffix.
- scores.csv contains all metrics. If the data set contains several time series, metrics are given for every combination of time series ID and iteration; scores aggregated over all iterations and time series IDs are marked as "all_sets".
- plots_fcst_vs_actual.pdf contains the plots of predictions versus actuals; each combination of iteration and time series is saved as a separate plot.
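As a hypothetical miniature of the scores.csv layout described above (the column names match those used by the metrics helper later in this notebook, but the values are made up):

```python
import pandas as pd

# Made-up miniature of scores.csv: per-(series, iteration) metric rows plus the
# "all_sets" rollup row described above. Values are illustrative only.
scores = pd.DataFrame({
    "time_series_id": ["ts_A|iteration_2019-10-29", "ts_B|iteration_2019-10-29", "all_sets"],
    "metric_name": ["normalized_root_mean_squared_error"] * 3,
    "metric": [0.12, 0.08, 0.10],
})
overall = scores.loc[scores["time_series_id"] == "all_sets", "metric"].iloc[0]
print(overall)
```
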
For demonstration purposes we will display the table of metrics for the time series with ID "ts_A". First we create a utility function that builds the table of metrics.
```
def get_metrics_for_ts(all_metrics, ts):
"""
Get the metrics for the time series with ID ts and return it as pandas data frame.
:param all_metrics: The table with all the metrics.
:param ts: The ID of a time series of interest.
:return: The pandas DataFrame with metrics for one time series.
"""
results_df = None
for ts_id, one_series in all_metrics.groupby("time_series_id"):
if not ts_id.startswith(ts):
continue
iteration = ts_id.split("|")[-1]
df = one_series[["metric_name", "metric"]]
df.rename({"metric": iteration}, axis=1, inplace=True)
df.set_index("metric_name", inplace=True)
if results_df is None:
results_df = df
else:
results_df = results_df.merge(
df, how="inner", left_index=True, right_index=True
)
results_df.sort_index(axis=1, inplace=True)
return results_df
metrics_df = pd.read_csv(os.path.join(backtesting_results, "scores.csv"))
ts = "ts_A"
get_metrics_for_ts(metrics_df, ts)
```
Forecast vs actuals plots.
```
from IPython.display import IFrame
IFrame("./backtesting_mm_results/plots_fcst_vs_actual.pdf", width=800, height=300)
```
# GPU
:label:`sec_use_gpu`
In :numref:`tab_intro_decade`,
we reviewed the rapid growth of computing power over the past two decades.
In a nutshell, GPU performance has increased by a factor of 1000 every decade since 2000.
In this section, we discuss how to harness this computational power for your research:
first using a single GPU and, later on, multiple GPUs and multiple servers (each with multiple GPUs).
Let us begin by seeing how to use a single NVIDIA GPU for computation.
First, make sure you have at least one NVIDIA GPU installed.
Then, download the [NVIDIA driver and CUDA](https://developer.nvidia.com/cuda-downloads)
and follow the prompts to set the appropriate paths.
Once these preparations are complete, the `nvidia-smi` command can be used to (**view the graphics card information.**)
```
!nvidia-smi
```
You need at least two GPUs to run the programs in this section.
Note that this might be extravagant for most desktop computers,
but it is easily available in the cloud, e.g., by using a multi-GPU instance on AWS EC2.
Almost all other sections of this book do not require multiple GPUs;
this section simply illustrates how data flow between different devices.
## [**Computing Devices**]
We can specify devices, such as CPUs and GPUs, for storage and calculation.
By default, tensors are created in the main memory and then the CPU is used to calculate them.
```
import tensorflow as tf
tf.device('/CPU:0'), tf.device('/GPU:0'), tf.device('/GPU:1')
```
We can (**query the number of available GPUs.**)
```
len(tf.config.experimental.list_physical_devices('GPU'))
```
Now we define two convenient functions that
[**allow us to run code even if the requested GPUs do not exist.**]
```
def try_gpu(i=0): #@save
"""Return gpu(i) if it exists, otherwise return cpu()"""
if len(tf.config.experimental.list_physical_devices('GPU')) >= i + 1:
return tf.device(f'/GPU:{i}')
return tf.device('/CPU:0')
def try_all_gpus(): #@save
"""Return all available GPUs, or [cpu(),] if there is no GPU"""
num_gpus = len(tf.config.experimental.list_physical_devices('GPU'))
devices = [tf.device(f'/GPU:{i}') for i in range(num_gpus)]
return devices if devices else [tf.device('/CPU:0')]
try_gpu(), try_gpu(10), try_all_gpus()
```
## Tensors and GPUs
We can [**query the device where a tensor is located.**]
By default, tensors are created on the CPU.
```
x = tf.constant([1, 2, 3])
x.device
```
It is important to note that whenever we want
to operate on multiple terms,
they need to be on the same device.
For instance, if we sum two tensors,
we need to make sure that both arguments live on the same device;
otherwise the framework would not know where to store the result,
or even how to decide where to perform the computation.
### [**Storage on the GPU**]
There are several ways to store a tensor on the GPU.
For example, we can specify a storage device when creating a tensor.
Next, we create the tensor variable `X` on the first `gpu`.
A tensor created on a GPU only consumes the memory of this GPU.
We can use the `nvidia-smi` command to view GPU memory usage.
In general, we need to make sure that we do not create data that exceed the GPU memory limit.
```
with try_gpu():
X = tf.ones((2, 3))
X
```
Assuming that you have at least two GPUs, the following code will (**create a random tensor on the second GPU.**)
```
with try_gpu(1):
Y = tf.random.uniform((2, 3))
Y
```
### Copying
[**If we want to compute `X + Y`, we need to decide where to perform this operation.**]
For instance, as shown in :numref:`fig_copyto`,
we can transfer `X` to the second GPU and perform the operation there.
*Do not* simply add `X` and `Y`, since this will result in an exception:
the runtime engine would not know what to do, as it cannot find data on the same device, and it fails.
Since `Y` lives on the second GPU, we need to move `X` there before we can perform the addition.
![Copy data to perform an operation on the same device.](../img/copyto.svg)
:label:`fig_copyto`
```
with try_gpu(1):
Z = X
print(X)
print(Z)
```
[**Now that the data are on the same GPU (both `Z` and `Y` are), we can add them up.**]
```
Y + Z
```
Imagine that your variable `Z` already lives on your second GPU.
What happens if we still call `Z2 = Z` under the same device scope?
It will return `Z` instead of making a copy and allocating new memory.
```
with try_gpu(1):
Z2 = Z
Z2 is Z
```
### Side Notes
People use GPUs to do machine learning because a single GPU is comparatively fast.
But transferring data between devices (CPU, GPUs, and other machines)
is much slower than computation.
It also makes parallelization a lot more difficult,
since we have to wait for data to be sent (or rather to be received)
before we can proceed with more operations.
This is why copy operations should be taken with great care.
As a rule of thumb, many small operations are much worse than one big operation.
Moreover, several operations at a time are much better than
many single operations interspersed in the code (unless you know what you are doing).
This is the case since such operations can block if one device
has to wait for another before it can do something else.
It is a bit like ordering your coffee in a queue
rather than pre-ordering it by phone
and finding out that it is ready when you arrive.
Last, when we print tensors or convert tensors to the NumPy format,
if the data is not in the main memory,
the framework will copy it to the main memory first,
resulting in additional transmission overhead.
Even worse, it is now subject to the dreaded global interpreter lock
that makes everything wait for Python to complete.
## [**Neural Networks and GPUs**]
Similarly, a neural network model can specify devices.
The following code puts the model parameters on the GPU.
```
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
net = tf.keras.models.Sequential([
tf.keras.layers.Dense(1)])
```
We will see many more examples of
how to run models on GPUs in the following chapters,
simply because the models will become somewhat more computationally intensive.
When the input is a tensor on the GPU, the model will calculate the result on the same GPU.
```
net(X)
```
Let us (**confirm that the model parameters are stored on the same GPU.**)
```
net.layers[0].weights[0].device, net.layers[0].weights[1].device
```
In short, as long as all data and parameters are on the same device, we can learn models efficiently.
In the following chapters we will see several such examples.
## Summary
* We can specify devices for storage and calculation, such as the CPU or GPU. By default, data are created in the main memory and then the CPU is used for calculations.
* The deep learning framework requires all input data for calculation to be on the same device, be it CPU or GPU.
* You can lose significant performance by moving data without care. A typical mistake is as follows: computing the loss for every minibatch on the GPU and reporting it back to the user on the command line (or logging it in a NumPy `ndarray`) will trigger the global interpreter lock, which stalls all GPUs. It is much better to allocate memory for logging inside the GPU and only move larger logs.
## Exercises
1. Try a larger computation task, such as the multiplication of large matrices, and see the difference in speed between the CPU and GPU. What about a task with a small amount of computation?
1. How should we read and write model parameters on the GPU?
1. Measure the time it takes to compute 1000 matrix-matrix multiplications of $100 \times 100$ matrices and log the Frobenius norm of the output matrix one result at a time, versus keeping a log on the GPU and transferring only the final result.
1. Measure how much time it takes to perform two matrix-matrix multiplications on two GPUs at the same time versus in sequence on one GPU. Hint: you should see almost linear scaling.
[Discussions](https://discuss.d2l.ai/t/1842)
```
import pickle
import os
import glob
import numpy as np
import pandas as pd
def read_lines(fn):
if not os.path.exists(fn):
return []
with open(fn, 'r', encoding='utf-8') as f:
text = f.read()
lines = text.split("\n")
if lines[-1] == '':
return lines[:-1]
else:
return lines
```
## WL
```
wl_src = read_lines("../../data_parallel/wi+locness/train_src")
wl_tgt = read_lines("../../data_parallel/wi+locness/train_tgt")
```
### FCE
```
fce_texts_src = read_lines("../../data_parallel/fce/fce_train_src")
fce_texts_tgt = read_lines("../../data_parallel/fce/fce_train_tgt")
with open("fce_cos.pickle", "rb") as f:
fce_cos = pickle.load(f)
arg_s = fce_cos.argsort()
def create_df(text_src,text_tgt, cos_sim, only_change=True):
df = pd.DataFrame({"text_src": text_src, "text_tgt": text_tgt, "cos_sim":cos_sim})
df = df.drop_duplicates(subset=['text_src','text_tgt'])
if only_change:
df = df[df["text_src"] != df["text_tgt"]]
df = df.sort_values('cos_sim')
return df
fce_df = create_df(fce_texts_src,fce_texts_tgt, fce_cos)
fce_df.head(50)
fce_df.tail(50)
# for i in arg_s[:800]:
# print(fce_cos[i])
# print(fce_texts_src[i])
# print(fce_texts_tgt[i])
# print()
```
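As a sanity check of the filtering logic in `create_df` above, here is a self-contained miniature (the function is restated so the snippet runs on its own, and the toy sentences are made up):

```python
import pandas as pd

# Restated miniature of the create_df logic above, applied to made-up data:
# drop duplicate pairs, keep only changed sentences, sort by cosine similarity.
def create_df(text_src, text_tgt, cos_sim, only_change=True):
    df = pd.DataFrame({"text_src": text_src, "text_tgt": text_tgt, "cos_sim": cos_sim})
    df = df.drop_duplicates(subset=['text_src', 'text_tgt'])
    if only_change:
        df = df[df["text_src"] != df["text_tgt"]]
    return df.sort_values('cos_sim')

toy = create_df(["a cat", "a cat", "the dog"], ["a cat", "a cat", "a dog"], [0.9, 0.9, 0.5])
print(len(toy))  # the duplicate pair and the unchanged pair are both dropped
```
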
### Nucle
```
nucle_texts_src = read_lines("../../data_parallel/nucle/nucle_src")
nucle_texts_tgt = read_lines("../../data_parallel/nucle/nucle_tgt")
with open("nucle_cos.pickle", "rb") as f:
nucle_cos = pickle.load(f)
df_nucle = pd.DataFrame({"text_src": nucle_texts_src, "text_tgt": nucle_texts_tgt, "cos_sim":nucle_cos})
df_nucle = df_nucle.drop_duplicates()
len(df_nucle)
df_nucle_use = df_nucle[df_nucle["text_src"] != df_nucle["text_tgt"]].sort_values('cos_sim')
df_nucle_use
df_nucle_use.head(20)
df_nucle_use.reset_index(inplace=True)
df_nucle_use.loc[100:120]
df_nucle_use.tail(20)
nucle_df = create_df(nucle_texts_src,nucle_texts_tgt, nucle_cos)
nucle_df.tail(30)
# for i in arg_nucle[:500]:
# print(nucle_cos[i])
# print(nucle_texts_src[i])
# print(nucle_texts_tgt[i])
# print()
# for i in arg_nucle[-500:]:
# print(nucle_cos[i])
# print(nucle_texts_src[i])
# print(nucle_texts_tgt[i])
# print()
```
### Lang8
```
lang8_texts_src = read_lines("../../data_parallel/lang8/lang8_src")
lang8_texts_tgt = read_lines("../../data_parallel/lang8/lang8_tgt")
with open("lang8_cos.pickle", "rb") as f:
lang8_cos = pickle.load(f)
lang8_df = create_df(lang8_texts_src,lang8_texts_tgt, lang8_cos)
lang8_df.head(50)
lang8_df.tail(50)
fce_df.cos_sim.describe()
lang8_df.cos_sim.describe()
nucle_df.cos_sim.describe()
fce_df.reset_index(inplace=True)
nucle_df.reset_index(inplace=True)
lang8_df.reset_index(inplace=True)
fce_df = fce_df.loc[10000:]
nucle_df = nucle_df.loc[10000:]
lang8_df = lang8_df.loc[10000:]
nucle_df
lang8_df
# DataFrame.append is removed in pandas 2.0; concatenate the samples instead
final_df = pd.concat([lang8_df.head(2000), nucle_df.head(1500), fce_df.head(1500)])
final_src = []
final_tgt = []
final_src.extend(final_df.text_src.values)
final_tgt.extend(final_df.text_tgt.values)
final_src.extend(wl_src)
final_tgt.extend(wl_tgt)
len(final_src)
from sklearn.model_selection import train_test_split
train_tgt, dev_tgt, train_src, dev_src = train_test_split(final_tgt, final_src, test_size=0.02, random_state=4)
path = "../../data_parallel/checkpoints/"
def write_lines(fn, lines, mode='w'):
text_to_write = "\n".join(lines)
with open(fn, encoding='utf-8', mode=mode) as f:
f.write(text_to_write)
# write_lines(path + "check_5k_train_src", train_src, mode='w')
# write_lines(path + "check_5k_train_tgt", train_tgt, mode='w')
# write_lines(path + "check_5k_dev_src", dev_src, mode='w')
# write_lines(path + "check_5k_dev_tgt", dev_tgt, mode='w')
write_lines(path + "check_5k_train_src_v3", train_src, mode='w')
write_lines(path + "check_5k_train_tgt_v3", train_tgt, mode='w')
write_lines(path + "check_5k_dev_src_v3", dev_src, mode='w')
write_lines(path + "check_5k_dev_tgt_v3", dev_tgt, mode='w')
write_lines(path + "check_10k_train_src", train_src, mode='w')
write_lines(path + "check_10k_train_tgt", train_tgt, mode='w')
write_lines(path + "check_10k_dev_src", dev_src, mode='w')
write_lines(path + "check_10k_dev_tgt", dev_tgt, mode='w')
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Automated Machine Learning
_**Classification with Deployment using a Bank Marketing Dataset**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Deploy](#Deploy)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)
## Introduction
In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.
If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.
Please find the ONNX related documentations [here](https://github.com/onnx/onnx).
In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model with an ONNX-compatible configuration.
4. Explore the results and featurization transparency options, and save the ONNX model.
5. Perform inference with the ONNX model.
6. Register the model.
7. Create a container image.
8. Create an Azure Container Instance (ACI) service.
9. Test the ACI service.
In addition this notebook showcases the following features
- **Blocking** certain pipelines
- Specifying **target metrics** to indicate stopping criteria
- Handling **missing data** in the input
## Setup
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
```
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
from azureml.interpret import ExplanationClient
```
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
```
print("This notebook was created using version 1.31.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
Accessing the Azure ML workspace requires authentication with Azure.
The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.
If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```
If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```
For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)
```
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-bmarketing-all'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## Create or Attach existing AmlCompute
You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
#### Creation of AmlCompute takes approximately 5 minutes.
If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster-4"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
```
# Data
### Load Data
Leverage Azure compute to load the bank marketing dataset as a Tabular Dataset.
### Training Data
```
data = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv")
data.head()
# Add missing values in 75% of the lines.
import numpy as np
import os
missing_rate = 0.75
n_missing_samples = int(np.floor(data.shape[0] * missing_rate))
missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=bool), np.ones(n_missing_samples, dtype=bool)))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)
missing_features = rng.randint(0, data.shape[1], n_missing_samples)
data.values[np.where(missing_samples)[0], missing_features] = np.nan
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train data to a csv to be uploaded to the datastore
pd.DataFrame(data).to_csv("data/train_data.csv", index=False)
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)
# Upload the training data as a tabular dataset for access during training on remote compute
train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))
label = "y"
```
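The missing-value injection above follows a simple masking recipe: pick a random subset of rows, then blank out one randomly chosen column in each. A minimal sketch on a toy frame (column names hypothetical):

```
import numpy as np
import pandas as pd

# Toy frame standing in for the bank marketing data (hypothetical columns).
df = pd.DataFrame({"age": [30, 40, 50, 60], "balance": [100, 200, 300, 400]})

missing_rate = 0.5
n_missing = int(np.floor(df.shape[0] * missing_rate))

# Boolean mask: True marks the rows that will each receive one NaN.
missing_samples = np.hstack((
    np.zeros(df.shape[0] - n_missing, dtype=bool),
    np.ones(n_missing, dtype=bool),
))
rng = np.random.RandomState(0)
rng.shuffle(missing_samples)

# For each selected row, pick a random column to blank out.
missing_features = rng.randint(0, df.shape[1], n_missing)
values = df.values.astype(float)
values[np.where(missing_samples)[0], missing_features] = np.nan
df = pd.DataFrame(values, columns=df.columns)
```

With a 50% rate on four rows, exactly two cells end up as NaN, one per selected row.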
### Validation Data
```
validation_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv"
validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)
```
### Test Data
```
test_data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv"
test_dataset = Dataset.Tabular.from_delimited_files(test_data)
```
## Train
Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.
|Property|Description|
|-|-|
|**task**|classification or regression or forecasting|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. <br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i><br><br>Allowed values for **Forecasting**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i><br><i>Arima</i><br><i>Prophet</i>|
|**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|
|**experiment_exit_score**| Value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|
|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
|**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.|
|**featurization**| 'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
```
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = train_data,
label_column_name = label,
validation_data = validation_dataset,
**automl_settings
)
```
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. Setting `show_output=True` displays validation errors and current status, and makes execution synchronous.
```
remote_run = experiment.submit(automl_config, show_output = False)
```
Run the following cell to access previous runs. Uncomment the cell below and update the run_id.
```
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment=experiment, run_id='<run_ID_goes_here>')
#remote_run
# Wait for the remote run to complete
remote_run.wait_for_completion()
best_run_customized, fitted_model_customized = remote_run.get_output()
```
## Transparency
View updated featurization summary
```
custom_featurizer = fitted_model_customized.named_steps['datatransformer']
df = custom_featurizer.get_featurization_summary()
pd.DataFrame(data=df)
```
Set `is_user_friendly=False` to get a more detailed summary for the transforms being applied.
```
df = custom_featurizer.get_featurization_summary(is_user_friendly=False)
pd.DataFrame(data=df)
df = custom_featurizer.get_stats_feature_type_summary()
pd.DataFrame(data=df)
```
## Results
```
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
```
### Retrieve the Best Model's explanation
Retrieve the explanation from the best_run, which includes explanations for both engineered and raw features. Make sure that the run for generating explanations for the best model is completed.
```
# Wait for the best model explanation run to complete
from azureml.core.run import Run
model_explainability_run_id = remote_run.id + "_" + "ModelExplain"
print(model_explainability_run_id)
model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)
model_explainability_run.wait_for_completion()
# Get the best run object
best_run, fitted_model = remote_run.get_output()
```
#### Download engineered feature importance from artifact store
You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
```
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
exp_data = engineered_explanations.get_feature_importance_dict()
exp_data
```
#### Download raw feature importance from artifact store
You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
```
client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
exp_data = raw_explanations.get_feature_importance_dict()
exp_data
```
### Retrieve the Best ONNX Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model.
```
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
```
### Save the best ONNX model
```
from azureml.automl.runtime.onnx_convert import OnnxConverter
onnx_fl_path = "./best_model.onnx"
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
```
### Predict with the ONNX model, using onnxruntime package
```
import sys
import json
from azureml.automl.core.onnx_convert import OnnxConvertConstants
from azureml.train.automl import constants
if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:
python_version_compatible = True
else:
python_version_compatible = False
import onnxruntime
from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper
def get_onnx_res(run):
res_path = 'onnx_resource.json'
run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)
with open(res_path) as f:
onnx_res = json.load(f)
return onnx_res
if python_version_compatible:
test_df = test_dataset.to_pandas_dataframe()
mdl_bytes = onnx_mdl.SerializeToString()
onnx_res = get_onnx_res(best_run)
onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)
pred_onnx, pred_prob_onnx = onnxrt_helper.predict(test_df)
print(pred_onnx)
print(pred_prob_onnx)
else:
print('Please use Python version 3.6 or 3.7 to run the inference helper.')
```
## Deploy
### Retrieve the Best Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
#### Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.
**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
```
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')
```
### Register the Fitted Model for Deployment
If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.
```
description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
```
### Deploy the model as a Web Service on Azure Container Instance
```
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment
inference_config = InferenceConfig(entry_script=script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "bmData", 'type': "automl_classification"},
description = 'sample service for Automl Classification')
aci_service_name = 'automl-sample-bankmarketing-all'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
```
### Get Logs from a Deployed Web Service
Gets logs from a deployed web service.
```
#aci_service.get_logs()
```
## Test
Now that the model is trained, run the test data through the trained model to get the predicted values. This calls the ACI web service to do the prediction.
Note that the JSON passed to the ACI web service is an array of rows of data. Each row should either be an array of values in the same order that was used for training or a dictionary where the keys are the same as the column names used for training. The example below uses dictionary rows.
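The dictionary-row payload described above can be built directly from a DataFrame with `to_json(orient='records')`. A small sketch with two hypothetical feature columns:

```
import json
import pandas as pd

# Two hypothetical feature columns; the real payload uses the training columns.
X = pd.DataFrame({"age": [30, 41], "job": ["admin.", "technician"]})

# orient='records' yields one dictionary per row, keyed by column name.
X_json = X.to_json(orient="records")
payload = '{"data": ' + X_json + "}"

rows = json.loads(payload)["data"]
```

Each entry of `rows` is a dictionary keyed by the training column names, which is the shape the web service expects.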
```
# Load the bank marketing datasets.
from numpy import array
X_test = test_dataset.drop_columns(columns=['y'])
y_test = test_dataset.keep_columns(columns=['y'], validate=True)
test_dataset.take(5).to_pandas_dataframe()
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
import json
import requests
X_test_json = X_test.to_json(orient='records')
data = "{\"data\": " + X_test_json +"}"
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data, headers=headers)
y_pred = json.loads(json.loads(resp.text))['result']
actual = array(y_test)
actual = actual[:,0]
print(len(y_pred), " ", len(actual))
```
### Calculate metrics for the prediction
Now visualize the data as a confusion matrix that compares the predicted values against the actual values.
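On a tiny hypothetical label vector, the confusion matrix counts each (actual, predicted) pair, with rows indexed by actual labels and columns by predicted labels:

```
from sklearn.metrics import confusion_matrix

# Hypothetical labels, standing in for the test-set actuals and predictions.
actual = ["no", "no", "yes", "yes", "yes"]
y_pred = ["no", "yes", "yes", "yes", "no"]

# Rows are actual labels, columns predicted, in the order given by `labels`.
cf = confusion_matrix(actual, y_pred, labels=["no", "yes"])
```

Here `cf[0, 0]` counts correct "no" predictions and `cf[1, 1]` correct "yes" predictions; the entries always sum to the number of samples.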
```
%matplotlib notebook
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf =confusion_matrix(actual,y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['no','yes']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','no','yes',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):
plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')
plt.show()
```
### Delete a Web Service
Deletes the specified web service.
```
aci_service.delete()
```
## Acknowledgements
This Bank Marketing dataset is made available under the Creative Commons (CC0: Public Domain) License: https://creativecommons.org/publicdomain/zero/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: https://creativecommons.org/publicdomain/zero/1.0/. The dataset is available at: https://www.kaggle.com/janiobachmann/bank-marketing-dataset .
_**Acknowledgements**_
This data set is originally available within the UCI Machine Learning Database: https://archive.ics.uci.edu/ml/datasets/bank+marketing
[Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
| github_jupyter |
```
# Useful for debugging
%load_ext autoreload
%autoreload 2
import sys, os
root = os.path.join(os.getcwd(), '../')
sys.path.append(root)
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from gpt.gpt_distgen import run_gpt_with_distgen
GPT_INPUT_FILE = os.path.join(root,'templates/cu_injector/gpt.in')
DISTGEN_INPUT_FILE = os.path.join(root,'templates/cu_injector/distgen.in.yaml')
settings={'n_particle':200,
'space_charge':0,
'Ntout':500,
'gun_voltage':400,
'buncher_voltage':50,
'phiCTB01_off':-90,
'cavity_1_voltage':1000,
'cavity_2_voltage':1000,
'cavity_3_voltage':1000,
'cavity_4_voltage':1000,
'cavity_5_voltage':1000,
'A3Quad_1_current':3,
'A3Quad_2_current':-3,
'A3Quad_3_current':3
}
G1 = run_gpt_with_distgen(settings,
gpt_input_file=GPT_INPUT_FILE,
distgen_input_file=DISTGEN_INPUT_FILE,
verbose=True,
auto_phase=True)
print(G1)
settings={'n_particle':200,
'space_charge':0,
'Ntout':500,
'gun_voltage':400,
'buncher_voltage':50,
'phiCTB01_off':-90,
'cavity_1_voltage':1000,
'cavity_2_voltage':1000,
'cavity_3_voltage':1000,
'cavity_4_voltage':1000,
'cavity_5_voltage':1000,
'A3Quad_1_current':3,
'A3Quad_2_current':-3,
'A3Quad_3_current':3,
'icm_1d_maps':1
}
G2 = run_gpt_with_distgen(settings,
gpt_input_file=GPT_INPUT_FILE,
distgen_input_file=DISTGEN_INPUT_FILE,
verbose=True,
auto_phase=True)
print(G2)
fig, ax = plt.subplots(1,2, figsize=(12,4))
ax[0].plot(G1.tout_stat('mean_z'), G1.tout_stat('mean_kinetic_energy')/1e6, color='tab:blue')
ax[0].plot(G2.tout_stat('mean_z'), G2.tout_stat('mean_kinetic_energy')/1e6, '--', color='tab:orange')
ax[0].set_xlabel('mean_z (m)');
ax[0].set_ylabel('KE (MeV)');
ax[0].legend(['3d map','1d map'])
ax[1].plot(G1.tout_stat('mean_z'), G1.tout_stat('sigma_x')*1e3, color='tab:blue');
ax[1].plot(G1.tout_stat('mean_z'), G1.tout_stat('sigma_y')*1e3, color='tab:orange');
ax[1].plot(G2.tout_stat('mean_z'), G2.tout_stat('sigma_x')*1e3, '--', color='tab:blue');
ax[1].plot(G2.tout_stat('mean_z'), G2.tout_stat('sigma_y')*1e3, '--', color='tab:orange');
#ax[1].plot(tzs, tstdy*1e3, pzs, pstdy*1e3, 'o', color='#ff7f0e')
ax[1].set_xlabel('mean_z (m)');
ax[1].set_ylabel(r'$\sigma_{x,y}$ (mm)');
ax[1].legend([r'$\sigma_x$ 3d map', r'$\sigma_y$ 3d map', r'$\sigma_x$ 1d map', r'$\sigma_y$ 1d map']);
```
| github_jupyter |
<a href="https://colab.research.google.com/github/ritesh-chafer/coronavirus-analysis/blob/master/Coronavirus_Dataset_Enrichment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
ls '/content/drive/My Drive/Colab Notebooks/Twitter.properties'
!pip install ConfigParser
import pandas as pd
import configparser
corona_confirmed_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv')
corona_confirmed_df.head()
corona_death_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')
corona_death_df.head()
corona_recovered_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv')
corona_recovered_df.head()
corona_confirmed_df = corona_confirmed_df.melt(id_vars=['Province/State','Country/Region','Lat','Long'])
corona_confirmed_df.head()
corona_confirmed_df = corona_confirmed_df.rename({'variable':'Date','value':'Confirmed'},axis = 'columns')
corona_confirmed_df.head()
corona_death_df = corona_death_df.melt(id_vars=['Province/State','Country/Region','Lat','Long'])
corona_death_df = corona_death_df.rename({'variable':'Date','value':'Death'},axis = 'columns')
corona_death_df.head()
corona_recovered_df = corona_recovered_df.melt(id_vars=['Province/State','Country/Region','Lat','Long'])
corona_recovered_df = corona_recovered_df.rename({'variable':'Date','value':'Recovered'},axis = 'columns')
corona_recovered_df.head()
combined_df = [corona_confirmed_df, corona_death_df, corona_recovered_df]
combined_df = [df.set_index(['Province/State','Country/Region','Lat','Long','Date']) for df in combined_df]
combined_df = combined_df[0].join(combined_df[1:])
combined_df.head()
combined_df = combined_df.reset_index()
combined_df.head()
combined_df[['Lat','Long','Confirmed','Death','Recovered']] = combined_df[['Lat','Long','Confirmed','Death','Recovered']].apply(pd.to_numeric)
combined_df[['Date']] = combined_df[['Date']].apply(pd.to_datetime)
combined_df.Lat = combined_df.Lat.astype(int)
combined_df.Long = combined_df.Long.astype(int)
combined_df.Confirmed = combined_df.Confirmed.astype(int)
combined_df.Death = combined_df.Death.astype(int)
combined_df.Recovered = combined_df.Recovered.fillna(0.0).astype(int)
combined_df.dtypes
combined_df.head()
combined_df = combined_df.rename({'Province/State':'State','Country/Region':'Country'},axis = 'columns')
combined_df.head()
combined_df.isnull().any()
combined_df.count()
combined_df.set_index('Country').filter(like="India",axis=0)
combined_df[combined_df.State.isnull()]['Country'].value_counts()
!pip install -U googlemaps
import googlemaps
config = configparser.ConfigParser()
config.read('/content/drive/My Drive/Colab Notebooks/Twitter.properties')
gmaps = googlemaps.Client(key=config.get('twitter','googleapikey'))
geocode_result = gmaps.reverse_geocode((-24.4425, 58.4438))
print(geocode_result[0])
print(geocode_result[0]['formatted_address'])
print(geocode_result[0]['formatted_address'].split(",")[-2].strip(" "))
```
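The reshaping used above (wide date columns melted into long form, then the three indicators joined on a shared index) can be sketched on toy data with hypothetical dates and counts:

```
import pandas as pd

# Toy wide tables: one row per location, one column per date (hypothetical numbers).
confirmed = pd.DataFrame({"Country": ["A", "B"], "1/22/20": [1, 0], "1/23/20": [3, 2]})
deaths = pd.DataFrame({"Country": ["A", "B"], "1/22/20": [0, 0], "1/23/20": [1, 0]})

def to_long(df, value_name):
    # melt turns the date columns into (Date, value) rows
    out = df.melt(id_vars=["Country"], var_name="Date", value_name=value_name)
    return out.set_index(["Country", "Date"])

# Join on the shared (Country, Date) index, then flatten back to columns.
combined = to_long(confirmed, "Confirmed").join(to_long(deaths, "Death")).reset_index()
```

The result has one row per location and date, with one column per indicator, exactly the shape the notebook builds for the full three-indicator dataset.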
| github_jupyter |
```
library(caret, quiet=TRUE);
library(base64enc)
library(httr, quiet=TRUE)
```
# Build a Model
```
set.seed(1960)
create_model = function() {
model <- train(Species ~ ., data = iris, method = "nnet", trace = FALSE)
return(model)
}
# dataset
model = create_model()
pred <- predict(model, as.matrix(iris[, -5]) , type="prob")
pred_labels <- predict(model, as.matrix(iris[, -5]) , type="raw")
sum(pred_labels != iris$Species)/length(pred_labels)
```
# SQL Code Generation
```
test_ws_sql_gen = function(mod) {
WS_URL = "https://sklearn2sql.herokuapp.com/model"
WS_URL = "http://localhost:1888/model"
model_serialized <- serialize(mod, NULL)
b64_data = base64encode(model_serialized)
data = list(Name = "caret_nnet_test_model", SerializedModel = b64_data , SQLDialect = "postgresql" , Mode="caret")
r = POST(WS_URL, body = data, encode = "json")
# print(r)
content = content(r)
# print(content)
lSQL = content$model$SQLGenrationResult[[1]]$SQL # content["model"]["SQLGenrationResult"][0]["SQL"]
return(lSQL);
}
lModelSQL = test_ws_sql_gen(model)
cat(lModelSQL)
```
# Execute the SQL Code
```
library(RODBC)
conn = odbcConnect("pgsql", uid="db", pwd="db", case="nochange")
odbcSetAutoCommit(conn , autoCommit = TRUE)
dataset = iris[,-5]
df_sql = as.data.frame(dataset)
names(df_sql) = sprintf("Feature_%d",0:(ncol(df_sql)-1))
df_sql$KEY = seq.int(nrow(dataset))
sqlDrop(conn , "INPUT_DATA" , errors = FALSE)
sqlSave(conn, df_sql, tablename = "INPUT_DATA", verbose = FALSE)
head(df_sql)
# colnames(df_sql)
# odbcGetInfo(conn)
# sqlTables(conn)
df_sql_out = sqlQuery(conn, lModelSQL)
head(df_sql_out)
```
# R Caret nnet Output
```
pred_proba = predict(model, as.matrix(iris[,-5]), type = "prob")
df_r_out = data.frame(pred_proba)
names(df_r_out) = sprintf("Proba_%s",model$levels)
df_r_out$KEY = seq.int(nrow(dataset))
df_r_out$Score_setosa = NA
df_r_out$Score_versicolor = NA
df_r_out$Score_virginica = NA
df_r_out$LogProba_setosa = log(df_r_out$Proba_setosa)
df_r_out$LogProba_versicolor = log(df_r_out$Proba_versicolor)
df_r_out$LogProba_virginica = log(df_r_out$Proba_virginica)
df_r_out$Decision = predict(model, as.matrix(iris[,-5]), type = "raw")
df_r_out$DecisionProba = apply(pred_proba, 1, function(x) max(x))
head(df_r_out)
```
# Compare R and SQL output
```
df_merge = merge(x = df_r_out, y = df_sql_out, by = "KEY", all = TRUE, suffixes = c("_1","_2"))
head(df_merge)
diffs_df = df_merge[df_merge$Decision_1 != df_merge$Decision_2,]
head(diffs_df)
stopifnot(nrow(diffs_df) == 0)
summary(df_sql_out)
summary(df_r_out)
```
| github_jupyter |
# Brainiak Tutorials Environment Setup for Google CoLab
## Install Brainiak and code dependencies <i>(Approx install time 1 minute)</i>
```
!pip install deepdish ipython matplotlib nilearn notebook pandas seaborn watchdog
!pip install pip\<10
!pip install git+https://github.com/brainiak/brainiak
```
## Git-clone helper files for tutorials
```
!git clone https://github.com/brainiak/brainiak-tutorials.git
!cd brainiak-tutorials/tutorials/; cp -r 07-searchlight 09-fcma 13-real-time utils.py setup_environment.sh /content/
# Make brainiak_datasets sub-directory in home directory
!mkdir /root/brainiak_datasets
```
## Download datasets for tutorials
Pick one of the following datasets to download for the appropriate tutorial you will work on
```
!mkdir -p /root/brainiak_datasets
```
### Download VDC dataset - (tutorial notebooks 02-05) <i>(Approx runtime 8 minutes)</i>
Dataset size: 5.3 GB, Estimated download time: 2.5 minutes, Estimated unzip time 6 minutes
```
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?id=1tiEjtp96zwIdnl3p726llj5KMETnNJ4A&export=download' -O '02-data-handling-simulated-dataset.zip'
!cd /root/brainiak_datasets; unzip 02-data-handling-simulated-dataset.zip
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?export=download&confirm=KlKd&id=1PrnucQ4hXqUY8gl6ysGJJiTreYu7KOLz' -O 'vdc.zip'
!cd /root/brainiak_datasets; unzip vdc.zip
```
### Download NinetySix dataset - (tutorial notebook 06-rsa) <i>(Approx runtime 10 sec)</i>
Dataset size: 150 MB
```
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?export=download&confirm=Yqoe&id=14m-YY-N3mCjCdHGkeVlTS4uA7WJzbQS0' -O 'NinetySix.zip'
!cd /root/brainiak_datasets/; unzip NinetySix.zip
```
### Download Face-scene dataset - (tutorial notebooks 07-searchlights, 09-fcma*) <i>(Approx runtime 10 sec)</i>
*09-fcma is not supported on CoLab<br>
Dataset size: 255 MB
```
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?export=download&confirm=GYuk&id=1LBhKWx5NSlndUlBev3jP997wNiM6HA9N' -O 'face_scene.zip'
!cd /root/brainiak_datasets/; unzip face_scene.zip
```
### Download Latatt dataset - (tutorial notebook 08-connectivity) <i>(Approx runtime 15 sec)</i>
Dataset size: 584 MB
```
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?export=download&confirm=jj9P&id=1iX5nLZvQsWuM5AmKeiBNoP8QkZjlOY7T' -O 'latatt.zip'
!cd /root/brainiak_datasets/; unzip latatt.zip
```
### Download Pieman2 dataset - (tutorial notebook 10-isc, 11-srm) <i>(Approx runtime 3 minutes)</i>
Dataset size: 2.65 GB, Estimated download time: 1 minute, Estimated unzip time: 2 minutes
```
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?export=download&confirm=5QPf&id=1IBA39ZZjeGS1u_DvZdiw1AZZQMS3K5q0' -O 'Pieman2.zip'
!cd /root/brainiak_datasets/; unzip Pieman2.zip
```
### Download Raider dataset - (tutorial notebook 11-srm) <i>(Approx runtime 5 sec)</i>
Dataset size: 31 MB
```
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?export=download&confirm=SE4m&id=1zCQoulK_rlzzRb4n6YMVp2cI8vZpxnwx' -O 'raider.zip'
!cd /root/brainiak_datasets/; unzip raider.zip
```
### Download Sherlock_processed dataset - (tutorial notebook 12-hmm) <i>(Approx runtime 10 sec)</i>
Dataset size: 255 MB
```
!cd /root/brainiak_datasets; wget -q --show-progress --no-check-certificate -r 'https://drive.google.com/uc?export=download&confirm=wZk0&id=11y9iQUoNVyVLANllKebFUrqdvQt-vsXm' -O 'Sherlock_processed.zip'
!cd /root/brainiak_datasets/; unzip Sherlock_processed.zip
```
### Download all datasets - (uncomment to run) <i>(Approx runtime 14 minutes)</i>
Dataset size: 9.2 GB, Estimated download time: 5 minutes, Estimated unzip time: 9 minutes
```
#!cd /root; wget -q --show-progress --no-check-certificate -r 'https://docs.google.com/uc?export=download&id=1ZglrmkYw8isBAfsL53n9JgHEucmrnm4E' -O 'tutorials.zip'
#!cd /root; unzip tutorials.zip
```
### Remove downloaded zip files - (uncomment to run)
```
#!cd /root/brainiak_datasets; rm *.zip
```
| github_jupyter |
# Functional data
This notebook links various functional layers to ET cells across GB. Various methods are used based on the nature of input data, from areal interpolation to zonal statistics.
All data are further measured within a relevant spatial context.
## Population estimates
Population estimates are linked using area weighted interpolation based on building geometry.
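Area-weighted interpolation distributes an extensive variable (here, population counts) across target geometries in proportion to intersection areas. The arithmetic behind `tobler.area_weighted.area_interpolate` can be sketched with plain numpy on hypothetical intersection areas:

```
import numpy as np

# Hypothetical setup: 2 source zones with known population, 3 target cells.
# areas[i, j] = area of intersection between source zone i and target cell j.
areas = np.array([[2.0, 1.0, 0.0],
                  [0.0, 1.0, 3.0]])
source_pop = np.array([300.0, 400.0])

# Extensive variables are shared in proportion to the fraction of each
# source zone falling inside each target cell.
weights = areas / areas.sum(axis=1, keepdims=True)
target_pop = weights.T @ source_pop
```

Because each source zone's weights sum to one, the interpolation conserves the total population.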
```
import warnings
import geopandas as gpd
import pandas as pd
import numpy as np
import libpysal
import tobler
from time import time
import scipy
import xarray
import rioxarray
import rasterstats
from tqdm.notebook import tqdm
from dask.distributed import Client, LocalCluster, as_completed
import dask.dataframe as dd
warnings.filterwarnings('ignore', message='.*initial implementation of Parquet.*')
population_est = gpd.read_parquet("../../urbangrammar_samba/functional_data/population_estimates/gb_population_estimates.pq")
chunk = gpd.read_parquet("../../urbangrammar_samba/spatial_signatures/tessellation/tess_0.pq")
xmin, ymin, xmax, ymax = chunk.total_bounds
%%time
ests = tobler.area_weighted.area_interpolate(population_est.cx[xmin:xmax, ymin:ymax], chunk.set_geometry("buildings"), extensive_variables=['population'])
for chunk_id in range(103):
s = time()
chunk = gpd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/tessellation/tess_{chunk_id}.pq", columns=["hindex", "buildings"]).set_geometry("buildings")
xmin, ymin, xmax, ymax = chunk.total_bounds
ests = tobler.area_weighted.area_interpolate(population_est.cx[xmin:xmax, ymin:ymax], chunk, extensive_variables=['population'])
pop = pd.DataFrame({'hindex': chunk.hindex.values, "population": ests.population.values})
pop.to_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/population/pop_{chunk_id}")
    print(f"Chunk {chunk_id} processed successfully in {time() - s} seconds.")
```
## Night lights
Night lights are merged using zonal statistics and parallelisation using `dask`.
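A zonal statistic averages the raster values that fall inside each polygon's footprint. What `rasterstats.zonal_stats` computes can be illustrated with a tiny hypothetical raster and a zone-label array of the same shape:

```
import numpy as np

# Hypothetical raster of night-light intensities and a zone-label raster
# (1 and 2 mark two tessellation cells' footprints).
raster = np.array([[1.0, 2.0, 8.0],
                   [3.0, 4.0, 10.0]])
zones = np.array([[1, 1, 2],
                  [1, 1, 2]])

# Zonal mean: average the raster values falling inside each zone.
zonal_mean = {z: raster[zones == z].mean() for z in (1, 2)}
```

`rasterstats` additionally handles the affine transform and partially touched pixels (`all_touched=True`); the aggregation itself is just this per-zone mean.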
```
workers = 8
client = Client(LocalCluster(n_workers=workers, threads_per_worker=1))
client
def _night_lights(chunk_id):
import rioxarray
s = time()
chunk = gpd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/tessellation/tess_{chunk_id}.pq", columns=["hindex", "tessellation"])
nl = xarray.open_rasterio("../../urbangrammar_samba/functional_data/employment/night_lights_osgb.tif")
nl_clip = nl.rio.clip_box(*chunk.total_bounds)
arr = nl_clip.values
affine = nl_clip.rio.transform()
stats_nl = rasterstats.zonal_stats(
chunk.tessellation,
raster=arr[0],
affine=affine,
stats=['mean'],
all_touched=True,
nodata = np.nan,
)
chunk["night_lights"] = [x['mean'] for x in stats_nl]
chunk[["hindex", "night_lights"]].to_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/night_lights/nl_{chunk_id}")
    return f"Chunk {chunk_id} processed successfully in {time() - s} seconds."
inputs = iter(range(103))
futures = [client.submit(_night_lights, next(inputs)) for i in range(workers)]
ac = as_completed(futures)
for finished_future in ac:
# submit new future
try:
new_future = client.submit(_night_lights, next(inputs))
ac.add(new_future)
except StopIteration:
pass
print(finished_future.result())
```
## Workplace population by industry
Workplace population is linked using area weighted interpolation based on building geometry.
```
wpz = gpd.read_parquet('../../urbangrammar_samba/functional_data/employment/workplace/workplace_by_industry_gb.pq')
for chunk_id in range(103):
s = time()
chunk = gpd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/tessellation/tess_{chunk_id}.pq", columns=["hindex", "buildings"]).set_geometry("buildings")
xmin, ymin, xmax, ymax = chunk.total_bounds
ests = tobler.area_weighted.area_interpolate(wpz.cx[xmin:xmax, ymin:ymax], chunk, extensive_variables=wpz.columns[1:-1].to_list())
ests['hindex'] = chunk.hindex.values
ests.drop(columns="geometry").to_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/workplace/pop_{chunk_id}")
    print(f"Chunk {chunk_id} processed successfully in {time() - s} seconds.")
```
## CORINE Land cover
CORINE Land cover is linked using area weighted interpolation based on tessellation geometry.
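For a categorical layer like CORINE, the interpolation reduces to computing, per cell, the share of its area covered by each land-cover code. The binning-table logic in `_dask_area_interpolate` below can be sketched with a dense numpy table of hypothetical intersection areas:

```
import numpy as np

# Hypothetical intersection-area table: rows are CORINE polygons, columns cells.
areas = np.array([[1.0, 0.0],
                  [1.0, 2.0],
                  [0.0, 2.0]])
codes = np.array(["111", "211", "211"])   # land-cover code of each polygon
cell_area = areas.sum(axis=0)             # assume the polygons tile each cell fully

# Share of each cell's area covered by each land-cover code.
shares = {c: areas[codes == c].sum(axis=0) / cell_area for c in np.unique(codes)}
```

The real implementation stores the table as a sparse `coo_matrix` because most polygon-cell pairs do not intersect.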
```
corine = gpd.read_parquet("../../urbangrammar_samba/functional_data/land_use/corine/corine_gb.pq")
def _dask_binning(corine, cells, n_chunks=512):
import dask_geopandas as dgpd
from scipy.sparse import coo_matrix
ids_src, ids_tgt = cells.sindex.query_bulk(corine.geometry, predicate="intersects")
df = gpd.GeoDataFrame({'clc': corine.geometry.values[ids_src], 'tess': cells.geometry.values[ids_tgt]})
ddf = dgpd.from_geopandas(df, npartitions=n_chunks)
areas = ddf.clc.intersection(ddf.tess).area.compute()
table = coo_matrix(
(areas, (ids_src, ids_tgt),),
shape=(corine.shape[0], cells.shape[0]),
dtype=np.float32,
)
table = table.todok()
return table
def _dask_area_interpolate(corine, cells, n_chunks=512, categorical_variables=None):
table = _dask_binning(corine, cells, n_chunks)
if categorical_variables:
categorical = {}
for variable in categorical_variables:
unique = corine[variable].unique()
for value in unique:
mask = corine[variable] == value
categorical[f"{variable}_{value}"] = np.asarray(
table[mask].sum(axis=0)
)[0]
categorical = pd.DataFrame(categorical)
categorical = categorical.div(cells.area, axis="rows")
return categorical
for chunk_id in range(103):
s = time()
chunk = gpd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/tessellation/tess_{chunk_id}.pq", columns=["hindex", "tessellation"])
xmin, ymin, xmax, ymax = chunk.total_bounds
ests = _dask_area_interpolate(corine.cx[xmin:xmax, ymin:ymax], chunk, categorical_variables=["Code_18"])
ests['hindex'] = chunk.hindex.values
ests.to_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/corine/corine_{chunk_id}.pq")
    print(f"Chunk {chunk_id} processed successfully in {time() - s} seconds.")
```
## Retail centres
CDRC retail centres are linked as the distance from each cell to the nearest centre.
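The expanding-buffer search used below can be sketched in pure Python (coordinates are illustrative; the real code queries a geopandas spatial index): grow the search radius in fixed steps until at least one candidate falls inside, then return the exact minimum distance.

```python
import math

# Sketch of the expanding-buffer nearest-distance search (illustrative
# coordinates; assumes at least one candidate exists, as in the real data).
def nearest_distance(origin, candidates, step=500.0):
    radius = step
    while True:
        hits = [c for c in candidates if math.dist(origin, c) <= radius]
        if hits:
            return min(math.dist(origin, c) for c in hits)
        radius += step  # nothing within radius: expand and retry

centres = [(3000.0, 4000.0), (10000.0, 0.0)]
print(nearest_distance((0.0, 0.0), centres))  # 5000.0
```

Starting with a small buffer keeps the spatial-index query cheap for the common case where a centre is nearby, instead of computing distances to all centres for every cell.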
```
retail = gpd.read_file("../../urbangrammar_samba/functional_data/retail_centres/Pre Release.zip!Retail_Centres_UK.gpkg")
workers = 16
client = Client(LocalCluster(n_workers=workers, threads_per_worker=1))
client
def measure_nearest(chunk):
s = time()
gdf = gpd.read_parquet(f'../../urbangrammar_samba/spatial_signatures/tessellation/tess_{chunk}.pq')
b = gdf.total_bounds
initial_buffer = 500
buffered = gdf.tessellation.buffer(initial_buffer)
distance = []
for orig, geom in zip(gdf.tessellation, buffered.geometry):
query = retail.sindex.query(geom, predicate='intersects')
b = initial_buffer
while query.size == 0:
query = retail.sindex.query(geom.buffer(b), predicate='intersects')
b += initial_buffer
distance.append(retail.iloc[query].distance(orig).min())
gdf['nearest_retail_centre'] = distance
gdf[['hindex', 'nearest_retail_centre']].to_parquet(f'../../urbangrammar_samba/spatial_signatures/functional/retail_centre/retail_{chunk}.pq')
    return f"Chunk {chunk} processed successfully in {time() - s} seconds."
inputs = iter(range(103))
futures = [client.submit(measure_nearest, next(inputs)) for i in range(workers)]
ac = as_completed(futures)
for finished_future in ac:
# submit new future
try:
new_future = client.submit(measure_nearest, next(inputs))
ac.add(new_future)
except StopIteration:
pass
print(finished_future.result())
```
## Water
Water is linked as the distance from each cell to the nearest water body, including the sea reconstructed from the coastline.
```
from sqlalchemy import create_engine
from shapely.geometry import box
from shapely.ops import polygonize
user = os.environ.get('DB_USER')
pwd = os.environ.get('DB_PWD')
host = os.environ.get('DB_HOST')
port = os.environ.get('DB_PORT')
db_connection_url = f"postgres+psycopg2://{user}:{pwd}@{host}:{port}/built_env"
def measure_nearest(chunk):
s = time()
gdf = gpd.read_parquet(f'../../urbangrammar_samba/spatial_signatures/tessellation/tess_{chunk}.pq')
b = gdf.total_bounds
engine = create_engine(db_connection_url)
    sql = f'SELECT * FROM openmap_surfacewater_area_200824 WHERE ST_Intersects(geometry, ST_MakeEnvelope({b[0]}, {b[1]}, {b[2]}, {b[3]}, 27700))'
    water = gpd.read_postgis(sql, engine, geom_col='geometry')
    # the full coastline is needed so that polygonize can close land polygons
    sql = 'SELECT * FROM gb_coastline_2016'
    coastline = gpd.read_postgis(sql, engine, geom_col='geometry')
polys = polygonize(coastline.geometry)
land = gpd.GeoSeries(polys, crs=27700)
sea = box(*land.total_bounds).difference(land.geometry.unary_union)
target = water.geometry
target.loc[len(water)] = sea
target = gpd.clip(target, box(*b))
initial_buffer = 500
buffered = gdf.tessellation.buffer(initial_buffer)
distance = []
for orig, geom in zip(gdf.tessellation, buffered.geometry):
query = target.sindex.query(geom, predicate='intersects')
b = initial_buffer
while query.size == 0:
query = target.sindex.query(geom.buffer(b), predicate='intersects')
b += initial_buffer
distance.append(target.iloc[query].distance(orig).min())
gdf['nearest_water'] = distance
gdf[['hindex', 'nearest_water']].to_parquet(f'../../urbangrammar_samba/spatial_signatures/functional/water/water_{chunk}.pq')
    return f"Chunk {chunk} processed successfully in {time() - s} seconds."
```
## Convolutions
Functional characters which do not express the tendency within the spatial context are contextualised using the same method applied to morphometric data: as the 1st, 2nd and 3rd quartiles weighted by inverse distance, based on cells within the 10th order of contiguity. The method is applied to:
- population
- night lights
- workplace population
- CORINE
- NDVI
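The inverse-distance-weighted quartiles computed inside the convolution loop below can be isolated into a small helper (a sketch with illustrative values; the real code works on the pysal weights object and handles NaNs):

```python
import numpy as np

# Standalone sketch of the inverse-distance-weighted quartiles computed in
# the convolution loop (values and distances are illustrative):
def weighted_quartiles(values, distances):
    values = np.asarray(values, dtype=float)
    weights = 1 / np.asarray(distances, dtype=float)  # inverse-distance decay
    sorter = np.argsort(values)
    values, weights = values[sorter], weights[sorter]
    # position of each sorted value on the cumulative-weight axis
    quantiles = (np.cumsum(weights) - 0.5 * weights) / weights.sum()
    return np.interp([0.25, 0.5, 0.75], quantiles, values)

# with equal distances this reduces to plain interpolated quartiles
print(weighted_quartiles([10, 20, 30, 40], [1, 1, 1, 1]))  # [15. 25. 35.]
```

Nearby cells thus pull the quartiles towards their values, which is the "contextualisation" applied to each character.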
```
cross_chunk = pd.read_parquet('../../urbangrammar_samba/spatial_signatures/cross-chunk_indices_10.pq')
def convolute(chunk_id):
s = time()
pop = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/population/pop_{chunk_id}")
nl = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/night_lights/nl_{chunk_id}")
workplace = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/workplace/pop_{chunk_id}")
corine = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/corine/corine_{chunk_id}.pq")
ndvi = pd.read_parquet(f"../../urbangrammar_samba/functional_data/ndvi/ndvi_tess_{chunk_id}.pq")
combined = pop.merge(nl, on='hindex').merge(workplace, on='hindex').merge(corine, on='hindex').merge(ndvi.rename({'mean': 'ndvi'}), on='hindex')
combined['keep'] = True
# add neighbouring cells from other chunks
cross_chunk_cells = []
for chunk, inds in cross_chunk.loc[chunk_id].indices.iteritems():
add_pop = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/population/pop_{chunk}").iloc[inds]
add_nl = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/night_lights/nl_{chunk}").iloc[inds]
add_workplace = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/workplace/pop_{chunk}").iloc[inds]
add_corine = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/corine/corine_{chunk}.pq").iloc[inds]
add_ndvi = pd.read_parquet(f"../../urbangrammar_samba/functional_data/ndvi/ndvi_tess_{chunk}.pq").iloc[inds]
add_combined = add_pop.merge(add_nl, on='hindex').merge(add_workplace, on='hindex').merge(add_corine, on='hindex').merge(add_ndvi.rename({'mean': 'ndvi'}), on='hindex')
add_combined['keep'] = False
cross_chunk_cells.append(add_combined)
df = combined.append(pd.concat(cross_chunk_cells, ignore_index=True), ignore_index=True).set_index('hindex')
# read W
W = libpysal.weights.WSP(scipy.sparse.load_npz(f"../../urbangrammar_samba/spatial_signatures/weights/w10_10_distance_circles_{chunk_id}.npz")).to_W()
characters = df.columns
# prepare dictionary to store results
convolutions = {}
for c in characters:
convolutions[c] = []
# measure convolutions
for i in range(len(df)):
neighbours = W.neighbors[i]
vicinity = df.iloc[neighbours]
distance = W.weights[i]
distance_decay = 1 / np.array(distance)
for c in characters:
values = vicinity[c].values
sorter = np.argsort(values)
values = values[sorter]
nan_mask = np.isnan(values)
if nan_mask.all():
convolutions[c].append(np.array([np.nan] * 3))
else:
sample_weight = distance_decay[sorter][~nan_mask]
weighted_quantiles = np.cumsum(sample_weight) - 0.5 * sample_weight
weighted_quantiles /= np.sum(sample_weight)
interpolate = np.interp([.25, .5, .75], weighted_quantiles, values[~nan_mask])
convolutions[c].append(interpolate)
# save convolutions to parquet file
conv = pd.DataFrame(convolutions, index=df.index)
exploded = pd.concat([pd.DataFrame(conv[c].to_list(), columns=[c + '_q1', c + '_q2',c + '_q3']) for c in characters], axis=1)
convoluted = exploded[df.keep.values]
convoluted['hindex'] = combined['hindex'].values
pois = pd.read_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/accessibility/access_{chunk_id}.pq")
water = pd.read_parquet(f'../../urbangrammar_samba/spatial_signatures/functional/water/water_{chunk_id}.pq')
retail_centres = pd.read_parquet(f'../../urbangrammar_samba/spatial_signatures/functional/retail_centre/retail_{chunk_id}.pq')
functional = convoluted.merge(pois, on='hindex').merge(water, on='hindex').merge(retail_centres, on='hindex').set_index('hindex')
functional.to_parquet(f"../../urbangrammar_samba/spatial_signatures/functional/functional/func_{chunk_id}.pq")
    return f"Chunk {chunk_id} processed successfully in {time() - s} seconds."
# I am afraid that we would run out of memory if we did this in parallel
for i in tqdm(range(103), total=103):
print(convolute(i))
```
```
import ncbi_genome_download as ngd
import os, re, gzip
from ete3 import NCBITaxa
import os
ncbi = NCBITaxa()
workpath = os.path.join("../" + "NCBITaxa/")
try:
os.mkdir(workpath)
except FileExistsError:
print("File exists:"+workpath)
def getTaxid(namelist):
# Get Taxon id
accessid = []
for i in namelist:
name2taxid = ncbi.get_name_translator([i])
if name2taxid == {}:
print("Wrong Taxon name!")
return
else:
accessid.append(name2taxid)
return accessid
tid = getTaxid(["Mycobacterium tuberculosis typus humanus", "Mycobacterium bovis"])
for i in tid:
print(i)
str(list(tid[0].values())[0][0])
" -T 1773 -F fasta --assembly-level complete bacteria "
workpath
def download_db(taxnamelist, group):
print("Group: " + group)
for i in getTaxid(taxnamelist):
Taxon = list(i.keys())[0]
Taxonid = str(list(i.values())[0][0])
outdir = workpath + "_".join(Taxon.split(" "))
try:
os.mkdir(outdir)
except FileExistsError:
print("File exists: "+ outdir)
print("Downloading complete sequence in fasta from NCBI database...\n" +
"Taxon: " + Taxon + "\n" +
"Taxon id: " + Taxonid + "\n" +
"Directory: " + outdir)
os.system("ncbi-genome-download -T " + Taxonid + " -F fasta --assembly-level complete " +" -o " + outdir + " " + group)
# ngd.args_download("-T " + Taxonid + " -F fasta --assembly-level complete " +" -o " + outdir + " " + group)
print("...Taxon " + Taxon + " downloaded complete!")
download_db(["Mycobacterium Tuberculosis", "Mycobacterium bovis"], "bacteria")
def get_all_fasta(top_dir):
    # recursively walk top_dir and gunzip every downloaded .fna.gz assembly
    if os.path.isdir(top_dir):
        if os.listdir(top_dir) == []:
            return
        for i in os.listdir(top_dir):
            if i.split(".")[-2:] == ["fna", "gz"]:
                os.system("gunzip " + top_dir + "/" + i)
            get_all_fasta(top_dir + "/" + i)
    else:
        return
get_all_fasta("../NCBITaxa/Mycobacterium_Tuberculosis/")
os.makedirs(directory) # where directory is "\output"
with ZipFile(self.archive_name, "r") as archive:
for id, files in data.items():
if files:
print("Creating", id)
dirpath = os.path.join(directory, id)
os.mkdir(dirpath)
for file in files:
                match = pattern.match(file)
new = match.group(2)
new_filename = os.path.join(dirpath, new)
content = archive.open(file).read()
with open(new_filename, "wb") as outfile:
outfile.write(content)
```